A cure for Fake News?

After combating disinformation/“fake news” online for years, are we actually winning? It feels like we are losing.

I personally know a couple of people who deny the existence of COVID-19. I know some who believe many of the more outlandish lies about Barack Obama and Hillary Clinton, but absolutely believe Vladimir Putin has their best interests at heart.

And while we enact laws to limit disinformation, their most immediate effect is to limit free speech (see: NetzDG), while stories about people dying after getting their vaccine jabs are spread by traditional media*.

We’re living through a pandemic of misinformation, and as a consequence many minds have fallen sick with conspiracy theories. Recently we’ve seen wildfires blamed on space lasers, COVID blamed on Bill Gates […]

[…] the tech giants’ efforts are doomed to fail. Not only does the policing of conspiracy theories do nothing to stop their spread, it actually spreads them further. In fact, the most effective way to fight fake news is to do the very opposite of what is being done, and to simply let conspiracy theories run rampant.

The record of censoring harmful ideas speaks for itself. Here in the UK, police and tech companies have been working for 20 years to suppress the online spread of two problematic worldviews: jihadism and neo-Nazism. The result has been that jihadism remains the largest terrorist threat and far-right extremism is now the fastest growing threat in the country.


A mind unaccustomed to deceit is the easiest to deceive. You don’t stop people believing lies by making them dependent on others to decide for them what is true.
— Gurwinder Bhogal, “The Best Cure for Fake News is Fake News”

*) Technically true: just as with the HPV vaccine, someone was involved in a fatal traffic accident shortly after getting the jab. But any media that prints a headline reading “man died tragically hours after getting vaccine” has crossed the barrier into passive lying.









Posted in Other | Leave a comment

The Bookshop

Marty Feldman and John Cleese in “At Last the 1948 Show”:




So, is NetzDG good?

Following the endless saga of governments trying to restrict free speech, it is worth considering the German NetzDG law.

NetzDG, short for Netzwerkdurchsetzungsgesetz (I am not kidding), subtitled “Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken” – the “Network Enforcement Act” for those who don’t speak German – aims to combat disinformation and hate speech on social networks.

A laudable goal, except of course that hate speech is most likely to be defined as much more than its true form (incitement to violence), such as saying factual things (see the Nelson example in the post linked above) that just happen to go against the current political climate.

But there is an easier way to get an idea of how good NetzDG is than examining the definitions of hate speech and whether the law actually limits it and disinformation/“fake news”: just look at which regimes like this law. Scotland, and to some degree the rest of the United Kingdom as it stands today, would like this kind of law, but the real fans are countries like … Turkey, Russia and Venezuela.

Countries famed for their relationship with free speech.

The German Network Enforcement Act (NetzDG) continues to inspire authoritarian and illiberal internet censorship around the world. In under a year, the number of countries copy-pasting the NetzDG matrix to provide cover and legitimacy for digital censorship and repression has almost doubled to a total of 25, a new analysis from the civil liberties think tank Justitia´s Future of Free Speech project shows.
— Jacob Mchangama, “The Digital Berlin Wall Act 2: How the German Prototype for Online Censorship went Global – 2020 edition”

And if countries like Honduras, Turkey, Egypt, Venezuela, Russia and Pakistan think your law limiting certain online discussions is good, that law is quite likely to be bad.













It’s impossible, so it must be done quickly

Content moderation at scale is impossible, unless you want it to be random to a certain degree. This is not a surprise and yet politicians demand that tech companies do it anyway – like the German NetzDG law.

NetzDG demands that large online platforms delete “manifestly unlawful content” within 24 hours of receiving a complaint, and other unlawful content within 7 days. So “manifestly” must be very easy to determine, then, and the rest of the unlawful content apparently just needs a few more days.

Well, except if you want to be correct:

Jacob Mchangama continues:

While recognizing the differences between national criminal law and procedure and private content moderation, it is relevant to assess how the time limits prescribed for private platforms by national governments compare to the length of domestic criminal proceedings in hate speech cases. Large discrepancies may suggest that very short notice and takedown time limits for private platforms result in systemic “collateral damage” to online freedom of expression, as determined by the French Constitutional Council and noted by David Kaye. Platforms may be incentivized to err on the side of removal rather than shielding the speech of their users against censorious governments. Platforms may respond by developing less speech-protective terms of service and more aggressive content moderation enforcement mechanisms that are geared toward limiting the risk of liability rather than providing voice to the users. Indeed, since the adoption of the NetzDG, platforms such as Facebook have expanded the definition of hate speech and dramatically increased the quantity of deleted content.
— Jacob Mchangama, “Rushing to Judgment: Examining Government Mandated Content Moderation”

This isn’t even touching on the problem that “hate speech” is a new way for governments to suppress opinions they don’t approve of. Now, hate speech does exist, and in its true form – the encouragement of violence against others – it is illegal in most Western countries. Other than that, hate speech doesn’t actually exist:

The more I study net regulation, the more of a free-speech absolutist I become. To think that speech is harmful is almost inevitably a third-person effect: believing that everyone else — but not you — is vulnerable to bad words and ideas and that protecting them from it will cure their ignorance. There is but one cure for ignorance: education. The goal of education is to prepare the mind to wrestle with lies and hatred and idiocy … and win.
— Jeff Jarvis, “Speech is not harmful: A lesson to be relearned”

The British politicians and authorities seem hell-bent on proving just how bad hate speech laws can be. From the police calling an old lady because she tweeted something politically incorrect, through the police claiming that merely “being offensive” is an offence, to the Scottish justice secretary proposing a law to prosecute anything deemed hateful, even when it happens within the private homes of citizens.

It’s now very common to hear people say, ‘I’m rather offended by that.’ As if that gives them certain rights. It’s actually nothing more… than a whine. ‘I find that offensive.’ It has no meaning; it has no purpose; it has no reason to be respected as a phrase. ‘I am offended by that.’ Well, so fucking what.
— Stephen Fry

What is another word for governments suppressing opinions they don’t like?

I mean, if it’s a danger to your government that people write what they think, maybe the problem isn’t what people write.











Long tail and Brexit

The long tail is an early victim of Brexit.
It used to be easy to get niche products from the continent.
This fits well with May’s vision of an homogeneous Little England.
— Richard Tol

The Long Tail is the concept that products with small individual sales volumes can, taken together, form a market as big as that of the most popular products.

I hadn’t thought about Tol’s point before but it is obviously true. That’s one of the most immediate costs due to Brexit. Tol, of course, sees this from Britain’s side but it is equally true from ours; we’ll find it harder to buy niche products from Britain, so we’ll stop doing that.














Let’s try this – let us imagine a year of 6 seasons, where today is the first day of Unlocking!

March and April are not spring. They’re Unlocking.

Unlocking, where nature slowly starts to awaken after winter. Vonnegut’s original description of Winter was “Boy! Are they ever cold!” and I’ll say, this year that was spot on. Let’s see if we get a slow awakening of nature over the next two months.

This is the last of my attempts at keeping Vonnegut’s 6 seasons in present memory. Vonnegut was a treasure and recently Robert B. Weide made a documentary about him:



















Morrison’s distraction

Toni Morrison’s argument about the real function of racism – which applies equally to sexism etc. – is this:

The function, the very serious function of racism is distraction. It keeps you from doing your work. It keeps you explaining, over and over again, your reason for being. Somebody says you have no language and you spend twenty years proving that you do. Somebody says your head isn’t shaped properly so you have scientists working on the fact that it is. Somebody says you have no art, so you dredge that up. Somebody says you have no kingdoms, so you dredge that up. None of this is necessary. There will always be one more thing.
— Toni Morrison

There’s a related argument, originally about anti-Semites but also relevant in many other cases:

Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.
— Jean-Paul Sartre

I’ve certainly experienced this when trying to argue against crazy conspiracy theories.

It is all distractions, it is all in bad faith.














Why moderation at scale doesn’t work

Writing about PayPal reminded me of that time Archie McPhee (“We Make Weird”) – currently selling a plastic “Squirrel In Underpants Nodder” – was unable to sell “Tardigrade Ornaments” online. Tardigrades, as in water bears, those adorable microscopic creatures that live everywhere, are too small to see and can survive basically anything.

So, did PayPal block the sale of Tardigrade Ornaments because they believed they were real and they wanted to stop this cruel mistreatment of microorganisms?

No, they blocked it because the American government has enacted sanctions against a Balkan arms dealer working out of Cyprus, trading under the name “Tardigrade Limited”. So, PayPal blocked all transactions related to “Tardigrade”.

Now, this is all good fun, but it does demonstrate that moderation at scale is impossible. You can’t have people reviewing every transaction, so you set an algorithm to do it. And an algorithm will fail.
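PayPal has not published how its screening works, but the failure mode is easy to reproduce. A minimal sketch (the matching logic is my assumption; only the “Tardigrade” name comes from the story): a filter that blocks any transaction whose description contains a name from a sanctions list cannot tell an arms dealer from a toy ornament.

```python
# A naive sanctions filter: block any transaction whose description
# contains a blocklisted name as a substring. Illustrative sketch only;
# PayPal's real system is not public.
BLOCKLIST = {"tardigrade"}  # from the sanctioned "Tardigrade Limited"

def is_blocked(description: str) -> bool:
    d = description.lower()
    return any(name in d for name in BLOCKLIST)

print(is_blocked("Wire to Tardigrade Limited"))     # True: the intended hit
print(is_blocked("Tardigrade Ornament, qty 2"))     # True: a false positive
print(is_blocked("Squirrel In Underpants Nodder"))  # False
```

Substring matching is cheap at scale, which is exactly why it gets deployed – and exactly why it misfires.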

Like Twitter’s automatic filter that caught a Dutch admirer of Bernie Sanders but unfortunately couldn’t tell that the “die” was Dutch and not English.

The account was suspended for 12 hours for writing “Topinfluencer, die Bernie ;-)” – where “die” is simply Dutch for “that”.
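The same failure is trivial to reproduce: a keyword filter that matches words without knowing the language of the text will flag the Dutch demonstrative “die” as the English verb. A toy sketch (the keyword list and matching logic are my assumptions, not Twitter’s actual filter):

```python
# A context-blind keyword filter: it flags any post containing a word
# from an English abuse-word list, with no idea what language the post
# is written in.
KEYWORDS = {"die"}

def flag(post: str) -> bool:
    # Normalize each word by stripping common punctuation and lowercasing.
    words = {w.strip(".,;:!?-)(").lower() for w in post.split()}
    return bool(words & KEYWORDS)

print(flag("You should die"))             # True: the intended hit
print(flag("Topinfluencer, die Bernie"))  # True: false positive, Dutch "die" = "that"
```

Without language detection the filter cannot even ask the right question, let alone answer it.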

Or, as Mike Masnick put it in 2019:

I was thinking about that theory recently, in relation to the ever present discussion about content moderation. I’ve argued for years that while many people like to say that content moderation is difficult, that’s misleading. Content moderation at scale is impossible to do well. Importantly, this is not an argument that we should throw up our hands and do nothing. Nor is it an argument that companies can’t do better jobs within their own content moderation efforts. But I do think there’s a huge problem in that many people — including many politicians and journalists — seem to expect that these companies not only can, but should, strive for a level of content moderation that is simply impossible to reach.

And thus, throwing humility to the wind, I’d like to propose Masnick’s Impossibility Theorem, as a sort of play on Arrow’s Impossibility Theorem. Content moderation at scale is impossible to do well. More specifically, it will always end up frustrating very large segments of the population and will always fail to accurately represent the “proper” level of moderation of anyone.
— Mike Masnick, “Masnick’s Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well”

The problems are many:

  1. The actual scale we are talking about. Facebook had 350 million pictures uploaded every day in 2019, when Masnick wrote that article, so even a tiny error rate will either let through a large absolute number of pictures that should have been stopped, or stop a large absolute number that shouldn’t have been.
  2. When media and politicians demand moderation, they will use any post that should have been stopped but wasn’t to force greater moderation. This will lead to over-moderation.
  3. Moderation depends on context. Yes, I know that many organisations and media now claim that context no longer matters, but it does; otherwise we are back to believing in magic and the power of incantations. I know some people believe this, but, frankly, they are wrong.
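The scale point in item 1 is worth making concrete. Even a system that gets 99.9% of its decisions right produces a staggering absolute number of errors at Facebook’s 2019 volume (the 0.1% error rate is my illustrative assumption, and an optimistic one):

```python
uploads_per_day = 350_000_000  # pictures uploaded to Facebook daily in 2019
error_rate = 0.001             # assume an optimistic 0.1% of decisions are wrong

errors_per_day = uploads_per_day * error_rate
print(f"{errors_per_day:,.0f} wrong moderation decisions per day")  # 350,000
```

At that volume there is no plausible error rate that keeps the absolute number of mistakes small.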

And the failures are also many, like Twitter locking an account for disinformation when the author fact-checked the former President of the USA and the disinformation was in the quote, or YouTube blocking an American propaganda movie from WWII because it featured Hitler and Nazis, disregarding the fact that they were cast as the villains.

I’ll end with this from EFF’s “Content Moderation is Broken. Let Us Count the Ways.”:

No More Magical Thinking

We shouldn’t look to Silicon Valley, or anyone else, to be international speech police for practical as much as political reasons. Content moderation is extremely difficult to get right, and at the scale at which some companies are operating, it may be impossible. As with any system of censorship, mistakes are inevitable. As companies increasingly use artificial intelligence to flag or moderate content—another form of harm reduction, as it protects workers—we’re inevitably going to see more errors. And although the ability to appeal is an important measure of harm reduction, it’s not an adequate remedy.

Advocates, companies, policymakers, and users have a choice: try to prop up and reinforce a broken system—or remake it. If we choose the latter, which we should, here are some preliminary recommendations:

  • Censorship must be rare and well-justified, particularly by tech giants. […]
  • Consistency. Companies should align their policies with human rights norms. […]
  • Tools. Not everyone will be happy with every type of content, so users should be provided with more individualized tools to have control over what they see. […]
  • Evidence-based policymaking. Policymakers should tread carefully when operating without facts, and not fall victim to political pressure. […]

Recognizing that something needs to be done is easy. Looking to AI to help do that thing is also easy. Actually doing content moderation well is very, very difficult, and you should be suspicious of any claim to the contrary.
— EFF, “Content Moderation is Broken. Let Us Count the Ways.”











Closing your PayPal account

You really shouldn’t lose your password for your PayPal account.

I’ve spent 27 minutes on a call with a customer service agent from PayPal, trying to log in to my PayPal account so I could finally, after years and years of trying, delete my account.

Unfortunately, despite them having my email address, my complete and full legal name, my complete actual physical address and my actual phone number, which is still the same as when I registered my account, they are unable to give me a new password.

The fact that I’ve tried, unsuccessfully, to log on every couple of years since 2016, and that all I want to do is close my account, did not help.

But there is light at the end of the tunnel: when the agent finally had to admit defeat, she told me it really wasn’t that big of a problem, because PayPal had just changed their policy last quarter. If I didn’t log on to the account within the next 12 months, I’d start getting warnings every few months that my account would be closed. And then, in about another year, my account would actually be closed.

So patience is expected to be rewarded.

Also, don’t forget your password.













Can anyone spot the missing category?

Not mine, but someone else received this not-very-nice message from Zoom:

Their “Community Standards” page reads, in part,

We believe that hateful conduct is conduct that promotes violence against or directly attacks or threatens other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We will not allow access to Zoom to those who have used or intend to use it for the purpose of inciting harm towards others on the basis of these categories.

Now, what is the category that’s missing? It’s a category that most Western countries have passed laws to protect against discrimination.


So, like Twitter, Facebook, WordPress, Reddit and Anchor*, it is not “hateful conduct” to promote violence against, or directly attack or threaten other people on the basis of their (biological) sex.

And for some totally unfathomable reason, it always seems to be directed at women.

*) This popped up in my feeds just as WordPress and Anchor (part of Spotify) announced their blog-to-podcast service:

Which might have been tempting, had Anchor not, just a few days prior, tipped their hand and shown everyone that it would be a really bad idea to build your brand on their platform.

A feminist podcast had done just that, but one man – just one – with a questionable obsession with a certain type of media complained, and Anchor removed the podcast.

As a friend of mine wrote, maybe don’t build your brand on Anchor.

Likewise, maybe I was wrong to write this on WordPress, because they are not really to be trusted, either.












