> You either are unaware of the meaning of the word "disingenuous", or you know my own intentions better than I do.
I'm simply crediting you with the intelligence and experience to understand that what someone says publicly is not always in line with their actual goal. Therefore, by pretending that Parler's terms of service represent their actual intentions despite evidence to the contrary, I believe you are being disingenuous.
> Did Parler express surprise that some of its users attempted to (and in some cases succeeded) incite violence on their platform?
I have no idea. It doesn't matter. "I didn't think the leopards would eat my face!" is not a credible expression of surprise when you invite a bunch of leopards into your home and set them loose.
> Again your tendency towards superlative undermines the discussion, but "everybody knows it" and "the damage has been done"?
I don't think it's undermining the discussion to assume a certain level of conversational and contextual shorthand. "Everybody" does not mean literally every person on earth, it means "people with interest and experience in these matters". I apologise if English is your second language or similar - I'll try to be clearer in future.
> This is a very strong statement indeed, claiming that you have knowledge that Parler's moderation has been so ineffectual that every user on their platform is able to view all inciting content before it is taken down.
Not every user needs to have viewed content for that content to be damaging. However, the more people who see damaging content, the more damaging it is. Most social media platforms surface recent content to the most users, so damaging content does most of its potential damage within a short window after posting. Therefore, a platform with genuine intent to reduce damage needs to remove problematic users while also employing a highly effective moderation team that identifies new damaging content as quickly as possible.
It stands to reason that a platform that only wanted to *look* like it was reducing damage could employ an ineffectual moderation team that removes content only after the majority of the damage is done. I suggest that's what happened here - it seems clear that large amounts of inciting content remained available for long periods of time (hours, even days).