Mark, Musk, and the Freedom to Lie
This is an updated and translated version of a column published in The Moscow Times on January 10, 2025.
All major global media outlets reported on Mark Zuckerberg’s announcement about closing Meta’s Third-Party Fact-Checking Program in the U.S. However, few fully analyzed what this means, what users in other countries should expect, and the arguments Zuckerberg used to justify this decision. Since this change will affect hundreds of millions of users worldwide, it's worth expanding the context and addressing several questions.
What Happened
Mr. Zuckerberg announced the closure of the Third-Party Fact-Checking Program, which had operated in the U.S. for nearly nine years, and its replacement with Community Notes. However, his statement rests on deliberate falsehoods and misrepresentations of fact. Let’s examine two of his claims about the fact-checking program.
The first claim. "A program intended to inform too often became a tool to censor."
This is a lie.
Any comparison between fact-checking and censorship is false. Censorship removes or destroys content, while fact-checking adds context, clarifies information, and archives data about reality. See the difference?
Regarding Meta’s Third-Party Fact-Checking Program: fact-checkers have never influenced account restrictions, content removal, bans, or similar actions in any Meta product. They are not Meta employees and operate independently of the company’s internal rules and decisions. Fact-checkers write debunks on their own websites and tag viral misinformation on Facebook, Instagram, or Threads. That’s it.
I am unaware of any case where content marked as misinformation by fact-checkers was deleted, or where an author’s account was banned, temporarily or permanently, because of that specific publication. I doubt Meta itself is aware of such cases, since bans and content removal have always been fully under Meta’s control. Allowing fact-checkers to influence these actions would undermine the entire purpose of the Third-Party Fact-Checking Program, which is to provide context, not to delete it. (This doesn’t mean such cases never occurred, but we know nothing about them; there is no relevant data.) Meta itself boasted about the program’s incredible success in February 2024, and in March 2021 Zuckerberg described it in Congress as an "industry-leading fact-checking program."
If you’ve ever been restricted or banned on Facebook (I’ve been banned 11 times in the past three years), it’s not because of fact-checkers but because of the platform’s internal algorithms. In recent years, Meta has delegated hate speech moderation to algorithms. What the algorithm considers hate speech remains a mystery. It might ignore direct insults or open racism while banning a Holocaust researcher for announcing a new article. Whether your appeal against a ban ever reaches a human moderator is also unclear. (It’s worth noting that fact-checkers never reviewed or labeled hate speech on Facebook; this falls outside their scope and methodology.)
The second claim. "Experts, like everyone else, have their own biases and perspectives" (from Meta's written statement) and "Fact checkers have just been too politically biased and have destroyed more trust than they’ve created" (from Zuckerberg’s video address).
One statement misrepresents the facts, and the other is an outrageously bold lie.
Of course, fact-checkers have personal biases, values, and principles in their private lives. We are ordinary people: among us are feminists and homophobes, liberals and conservatives, people with pro-Israel or pro-Palestinian views. Yes, there are likely more liberals and left-leaning individuals than conservatives, and more atheists than believers. But we adhere to professional standards. Violating them can cost a project its membership in professional guilds; for the individual fact-checker, the consequences range from a reprimand to dismissal or even a career change, depending on the severity of the violation.
These standards are set out in globally accepted, published, and transparent fact-checking rules. Among other provisions, fact-checkers may not be members of political parties, may not endorse any party or candidate in their work, may not remain anonymous, may not accept gifts from people whose statements they analyze, and must disclose conflicts of interest.
Compliance is monitored both internally and externally. Every fact-checking project in the global or European networks undergoes annual accreditation by independent assessors from the International Fact-Checking Network (IFCN) or the European Fact-Checking Standards Network (EFCSN). This consistent and reproducible standard minimizes individual political biases.
Most importantly, to participate in the Third-Party Fact-Checking Program, an editorial team must meet all of these requirements.
Yet Mark Zuckerberg's statement is phrased the way it is for a reason. In the public sphere, all the blame is being pinned on the external fact-checking program, with only a brief mention of internal problems with algorithms, the changes that will affect Meta itself, and the restrictions within its products (to say nothing of far more serious problems, such as drug and fraud advertising in Meta's products, which no one is in a hurry to address). The failings of the company's leadership, its own algorithms, and its moderators can all be blamed on the fact-checkers, allowing Meta to curry favor with the new authorities. Cheap and effective. Well done, Mark!
Leftist bias (in academic circles, far-left bias), like the right-wing bias of those who, let's not forget, won the election, is indeed very strong in the U.S., but fact-checkers have nothing to do with it. It's deeply frustrating that Meta chose to shut down the Third-Party Fact-Checking Program in such a clumsy way, reinforcing the Trumpist narrative of a nonexistent "left-wing fact-checker censorship." There was no need to grovel so deeply before Elon Musk. At the very least, they could have refrained from lying.
What to Expect and How It Affects You (and When)
American conspiracy theorists, the alt-right, and right-wing populists are celebrating, while regular, moderate conservatives who bought into the Trumpist anti-fact-checker narrative without understanding the issue express more restrained satisfaction. That joy is based on almost nothing. Facebook’s algorithms will continue to ban and restrict accounts under the platform’s policies and rules (with, perhaps, a bit more room for sexist speech). At least Mark Zuckerberg is honest about this in his statement.
For Russian speakers and others who enjoy colorful language and dislike migrants, there’s no reason to celebrate at all. Everything related to the U.S. primarily concerns the U.S. Meanwhile, here we’ve long been living in a partly controlled post-apocalyptic anarchy run by neural networks, and no one really cares about us or our complaints about algorithms and shadow bans. The algorithm has been in charge and will remain in charge for at least another year. This applies to everyone, both the reasonable and the unreasonable. I will likely end up in Facebook jail again, because I’m not going to change how I express my thoughts, use letter substitutions, or, God forbid, write “seggs” instead of “sex”.
Will the program remain outside the U.S.? For now, yes. For how long? That’s unknown. It could end in a year, or — depending on what happens in the U.S. — it might even gain support, including public backing. But this is just speculation; we’ll see in a year.
Should we expect anything good?
Actually, yes, if Meta fulfills its promise to hire more human moderators to review potentially rule-violating content, particularly hate speech. Reducing the role of algorithms should, in theory, have an impact not only in the U.S. but globally. It might restore some normal balance in how posts are shown to friends. Perhaps, realizing its mistakes, Meta will even stop throttling publishers’ reach (unlikely, but who knows). However, contrary to Mr. Zuckerberg's statement, none of this has anything to do with the presence or absence of a fact-checking program.
Will this lead to more fake news?
Most likely, no. It’s just that U.S. citizens (and later, quite possibly, everyone else) will no longer have an additional shield against false or distorted information in the form of fact-checking labels within Meta’s products. People will have to let any viral content into their minds at their own risk and either verify everything themselves, believe everything, or trust nothing at all.
What about Community Notes?
We don’t yet know how Community Notes will work in Meta’s products. But we can see how they function in X, and this is the model Zuckerberg referred to. It’s reasonable to assume Meta will implement something similar. Undoubtedly, this is better than nothing, especially given the vast and diverse community involved. However, this form of contextual expansion has significant drawbacks compared to professional fact-checking.
Fact-checkers have a method. It’s a reproducible method applied consistently across all checks, and adherence to it is strictly monitored within editorial teams and guild frameworks. Even if part of the community learns the method, it’s unclear how (or whether) adherence will be monitored. For instance, the method involves selecting content for review based on criteria like virality, significance, and verifiability through open data. What the community will check, and how contributions will be evaluated, is unknown. Moreover, thousands of popular fakes have circulated on Twitter (X) for years without any community annotation, and it’s doubtful the Instagram or Facebook community will be more active. The drawbacks: lack of activity and lack of transparency. The upside: plenty of room for self-congratulation.
Fact-checking is a profession. A paid profession. This means that your reputation and financial stability depend on how well-prepared you are and how effectively you do your job. A fact-checker cannot simply make a mistake, erase their identity, and start over. A social media account by a community member? That’s easy to reset. Less risk means less accountability.
Community Notes could complement fact-checkers’ work, engage the user community in fact-checking, and even support educational initiatives — which would be fantastic. But they cannot replace professional fact-checking. Not in any way.
P.S.
Setting aside the ethical aspects of combating misinformation and disinformation, this could be seen as a routine corporate decision. Meta is a commercial company and has every right to start or stop any program (after all, the Google Graveyard is filled with useful projects). Fact-checkers might have expressed disapproval, but nothing more. However, this decision is political, and that’s likely why Mark Zuckerberg doesn’t hesitate to lie and manipulate data, scapegoating fact-checkers.
If you’ve read this text to the end, it’s now a little harder for Mark to manipulate you personally.