The Buzzfeed piece about Twitter abuse that has been making the rounds since last Thursday proves to be a very interesting read. The way the abuse problem has been left to fester is infuriating. So much so that I took notes while reading. Notes laced with profanity. Here are a few thoughts.
Free speech radicalism is an easy extremist tenet to hold, in many ways. First, it is often defended by people who have never experienced abuse themselves. They, therefore, don’t have to make any sacrifices for this radical belief of theirs. Second, it is, in theory, a steadfast policy that protects the company from liability: they can claim to be a utility that doesn’t make content decisions.
It stems, however, from a weird idea of free speech. Free speech is great. I wouldn’t want the government to silence me, but I do want to be held accountable for the shit I say. Free speech radicals seem to have another definition. To many of them, free speech means being allowed to say whatever you want, often without suffering any consequences. Allowing people to be protected from the consequences of shitty actions and shitty words is not a moral imperative. It creates a toxic environment where a few assholes can police the speech of all the others by unleashing barrages of abuse and threats. It doesn’t help foster more productive debates. Just the opposite.
Yet, once people accept that something needs to be done, the search for the ‘perfect solution’ begins… This search leads to paralysis, as Vivian Schiller is reported as saying in the piece. Extremists always demand a perfect solution before letting go of their own problematic one, always seeking to swap one extremism for another. But that’s not how the social space works; that’s not how humans function and communicate. There needs to be moderation in every sense of the word. We need kind and intelligent judgment calls and concessions. There needs to be consistency, obviously, but no solution will ever be perfect.
Jack Dorsey is quoted as saying “No employee should ever be in the position of having to decide, subjectively, what qualifies as free speech and what does not”. This makes me doubtful that the problem will ever be mitigated. It will always come down to human judgment, whether the judgment of a moderator or the judgment of an engineer designing an algorithm. Stress cases will always arise where the meaning of free speech will need to be discussed. Putting the burden of moderation entirely on users is, again, the non-committal safe route: investors might not punish the company and it won’t unleash lawsuits. But it won’t fix the problem that, for a vast majority of users, being on Twitter is tiring work, an energy drain, and often even a safety concern.
Large organizations all have things they’d rather not discuss (*cough* web governance *cough*), power struggles they’d rather not address, ambiguities that are preserved even though they hurt the business, because it is believed that these discussions would somehow never end and would distract everyone. I firmly believe leaders should encourage these discussions nonetheless. Especially in this case.