Why moderation at scale doesn’t work

Writing about PayPal reminded me of the time Archie McPhee (“We Make Weird”) – the novelty shop currently selling a plastic “Squirrel In Underpants Nodder” – was unable to sell “Tardigrade Ornaments” online. Tardigrades, also known as water bears, are those adorable microscopic creatures that live everywhere, are too small to see with the naked eye, and can survive basically anything.

So, did PayPal block the sale of Tardigrade Ornaments because they believed the ornaments were real and wanted to stop this cruel mistreatment of microorganisms?

No, they blocked it because the American government has enacted sanctions against a Balkan arms dealer operating out of Cyprus and trading under the name “Tardigrade Limited”. So PayPal blocked every transaction that mentioned “Tardigrade”.
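
PayPal has never explained exactly how its screening works, but the failure mode is easy to reproduce with the bluntest possible implementation: matching every transaction description against a list of sanctioned names as plain substrings. The list and function below are purely illustrative assumptions, not PayPal’s actual system.

    # Illustrative sketch only - NOT PayPal's actual system.
    # A naive sanctions screen that flags any transaction whose description
    # contains a sanctioned entity's name as a plain substring.
    SANCTIONED_NAMES = ["tardigrade"]  # e.g. from "Tardigrade Limited"

    def is_blocked(description: str) -> bool:
        """Return True if any sanctioned name appears anywhere in the text."""
        text = description.lower()
        return any(name in text for name in SANCTIONED_NAMES)

    print(is_blocked("Payment to Tardigrade Limited"))       # True  - the intended hit
    print(is_blocked("Archie McPhee - Tardigrade Ornament"))  # True  - false positive
    print(is_blocked("Squirrel In Underpants Nodder"))        # False

The ornament and the arms dealer are indistinguishable to a rule like this, because the rule never asks who is actually being paid.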

Now, this is all good fun, but it does demonstrate why moderation at scale doesn’t work. You can’t have people reviewing every transaction, so you set an algorithm to do it. And the algorithm will fail.

Like Twitter’s automatic filter that caught a Dutch admirer of Bernie Sanders but unfortunately couldn’t tell that the “die” in the tweet was Dutch – where it simply means “that” – and not English.

The account was suspended for 12 hours for a tweet that translates to “Top influencer, that Bernie ;-)” – written in Dutch.
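
Twitter has not said what its filter actually looked for, but the same class of mistake falls out of any keyword rule that never checks the language of the text: the Dutch demonstrative “die” (“that”) is indistinguishable from the English verb “die” if you only look at the token. The keyword set below, and the reconstruction of the tweet, are assumptions for illustration only.

    # Illustrative sketch only - NOT Twitter's actual filter.
    # An English keyword "threat" rule applied to text whose language is never checked.
    THREAT_KEYWORDS = {"die", "kill"}  # hypothetical keyword list

    def looks_like_threat(tweet: str) -> bool:
        """Flag the tweet if any bare token matches a threat keyword."""
        tokens = {word.strip(".,;:!?()-").lower() for word in tweet.split()}
        return bool(tokens & THREAT_KEYWORDS)

    # Approximation of the Dutch tweet described above; "die" here just means "that".
    print(looks_like_threat("Topinfluencer, die Bernie ;-)"))  # True - false positive

A filter that first detects the language of the tweet, or that weighs the surrounding words, avoids this particular trap – at the cost of more complexity and new ways to be wrong.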

Or, as Mike Masnick put it in 2019:

I was thinking about that theory recently, in relation to the ever present discussion about content moderation. I’ve argued for years that while many people like to say that content moderation is difficult, that’s misleading. Content moderation at scale is impossible to do well. Importantly, this is not an argument that we should throw up our hands and do nothing. Nor is it an argument that companies can’t do better jobs within their own content moderation efforts. But I do think there’s a huge problem in that many people — including many politicians and journalists — seem to expect that these companies not only can, but should, strive for a level of content moderation that is simply impossible to reach.

And thus, throwing humility to the wind, I’d like to propose Masnick’s Impossibility Theorem, as a sort of play on Arrow’s Impossibility Theorem. Content moderation at scale is impossible to do well. More specifically, it will always end up frustrating very large segments of the population and will always fail to accurately represent the “proper” level of moderation of anyone.
— Mike Masnick, “Masnick’s Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well”

The problems are many:

  1. The actual scale we are talking about here. Facebook was receiving 350 million new pictures every day in 2019, when Masnick wrote that article, so even a tiny error rate will either let through a large absolute number of pictures that should have been stopped, or stop a large absolute number of pictures that shouldn’t have been (see the back-of-the-envelope sketch after this list).
  2. When media and politicians demand moderation, they will seize on any post that should have been stopped but wasn’t as grounds for stricter moderation. This pressure leads to over-moderation.
  3. Moderation depends on context. Yes, I know that many organisations and media now claim that context no longer matters, but it does; otherwise we are back to believing in magic and the power of incantations. I know some people believe that, but, frankly, they are wrong.
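
To put rough numbers on point 1: even an implausibly good classifier that gets 99.9% of decisions right still makes hundreds of thousands of wrong calls per day at that volume. The accuracy figures below are made-up assumptions; only the 350 million pictures per day comes from the article.

    # Back-of-the-envelope arithmetic for point 1.
    # The daily volume is the 2019 figure quoted above; the accuracies are assumptions.
    photos_per_day = 350_000_000

    for accuracy in (0.99, 0.999, 0.9999):
        mistakes = photos_per_day * (1 - accuracy)
        print(f"{accuracy:.2%} accurate -> roughly {mistakes:,.0f} wrong decisions per day")

Whether those mistakes fall on the side of leaving bad content up or taking good content down, the absolute numbers stay large enough to generate a steady stream of headlines.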

And the failures are also many, like Twitter locking an account for disinformation when the author was fact-checking the former President of the USA and the disinformation was in the quoted material, or YouTube blocking an American propaganda film from WWII because it featured Hitler and Nazis, disregarding the fact that they appeared as the villains.

I’ll end with this from EFF’s “Content Moderation is Broken. Let Us Count the Ways.”:

No More Magical Thinking

We shouldn’t look to Silicon Valley, or anyone else, to be international speech police for practical as much as political reasons. Content moderation is extremely difficult to get right, and at the scale at which some companies are operating, it may be impossible. As with any system of censorship, mistakes are inevitable. As companies increasingly use artificial intelligence to flag or moderate content—another form of harm reduction, as it protects workers—we’re inevitably going to see more errors. And although the ability to appeal is an important measure of harm reduction, it’s not an adequate remedy.

Advocates, companies, policymakers, and users have a choice: try to prop up and reinforce a broken system—or remake it. If we choose the latter, which we should, here are some preliminary recommendations:

  • Censorship must be rare and well-justified, particularly by tech giants. […]
  • Consistency. Companies should align their policies with human rights norms. […]
  • Tools. Not everyone will be happy with every type of content, so users should be provided with more individualized tools to have control over what they see. […]
  • Evidence-based policymaking. Policymakers should tread carefully when operating without facts, and not fall victim to political pressure. […]

Recognizing that something needs to be done is easy. Looking to AI to help do that thing is also easy. Actually doing content moderation well is very, very difficult, and you should be suspicious of any claim to the contrary.
— EFF, “Content Moderation is Broken. Let Us Count the Ways.”
