- Tension: Platforms wield publisher-scale influence over public truth while legally shielding themselves from publisher-level accountability.
- Noise: Debates about AI manipulation distract from the older, simpler problem: platforms profit from viral falsehoods regardless of format.
- Direct Message: Every narrow “deepfake ban” is a press release; real accountability requires platforms to own what they amplify.
When Facebook announced in early 2020 that it would ban deepfakes, the headlines were largely positive. Here, finally, was one of the world’s most powerful platforms taking a stand against AI-manipulated video. The announcement landed at a moment of peak anxiety about synthetic media. Researchers had been warning for years that convincing fake video could destabilize elections, defame individuals, and erode the shared reality that democratic discourse depends on. Facebook’s move felt like a response to the moment.
It was not. Five years on, with AI video generation now a commodity tool available to anyone with a phone and a grudge, the policy Facebook announced in 2020 reads less like a safeguard and more like a case study in how platforms use narrow, technical rules to perform accountability without practicing it.
The lesson from that episode extends well beyond deepfakes. It goes to the heart of a question still unresolved in 2026: when a platform reaches the scale of a public utility, who is responsible for what travels across it?
The gap between claiming neutrality and exercising power
Facebook’s 2020 deepfake policy set two requirements that had to be met simultaneously before content would be removed. The video had to be deceptive in a way the average person would not detect, and it had to have been produced specifically through artificial intelligence or machine learning. Both conditions. At the same time.
The practical effect was a policy riddled with deliberate carve-outs. A slowed-down video of then-Speaker Nancy Pelosi, edited to make her appear to slur her speech, had gone viral months before the announcement. It was manipulative, it was deceptive, and it was excluded from the new rules because no neural network was involved. A Republican congressman had recently tweeted a photoshopped image falsely showing President Obama shaking hands with the Iranian president. Also excluded: not a video, not AI-generated. Written misinformation, fabricated quotes, selectively edited audio: all of it fell outside the scope of the new policy.
This is the tension that 2020 exposed and that 2026 has only deepened. Platforms like Meta, and the broader ecosystem of social media companies that followed similar patterns, are not neutral conduits. Their algorithms actively select what content spreads and what content stagnates. They employ thousands of people to make decisions about what is and is not permissible. They shape public perception at a scale no traditional publisher has ever achieved. And yet the legal and rhetorical framework they rely on is built on the claim that they are mere infrastructure — pipes, not publishers.
That contradiction was uncomfortable in 2020. Today it is untenable. Pew Research data consistently shows that a majority of American adults get at least some news from social media. The platforms are not neutral terrain on which the information ecosystem plays out. They are, in large part, the information ecosystem.
What the deepfake debate kept us from seeing
The specific drama around AI-generated video was, in retrospect, a distraction from a more durable problem. The deepfake panic of the late 2010s created an implicit framework in which manipulated content was the threat, and authentic content was the baseline. If we could just label or remove the fake stuff, the reasoning went, the real stuff would be fine.
That framework never held up. Some of the most damaging misinformation of the past decade has been entirely authentic in format. Real quotes stripped of context. Real images attached to false captions. Real events described with fabricated details. The tools for this kind of manipulation require no AI at all, only motivation and a large enough following to seed the initial spread.
A recent case in Slovakia illustrates how these risks are no longer hypothetical. In 2023, a deepfake audio clip falsely portraying journalist Monika Tódová discussing election fraud circulated just days before parliamentary elections, feeding into broader disinformation narratives. The clip spread widely despite being fabricated, and authorities initially declined to investigate, arguing that the public would not be misled. Only months later was the case reopened, and even then it was handled under defamation law, because no legislation specifically addressed deepfakes. The episode underscores a broader gap: even when synthetic media is clearly weaponized against journalists and democratic processes, both platforms and legal systems remain structurally unprepared to respond.
Facebook’s own internal research, portions of which became public through the Wall Street Journal’s Facebook Files reporting in 2021, indicated that the platform’s engineers had identified its algorithm as a driver of anger and divisiveness, and that leadership had repeatedly declined to act on those findings in ways that might reduce engagement. The deepfake policy was announced with fanfare. The algorithmic problem was managed quietly, or not at all.
In the years since, the content moderation conversation has shifted considerably. The EU’s Digital Services Act, which came into full effect in 2024, established binding obligations for large platforms around risk assessment, transparency, and the handling of illegal content. It does not draw the same narrow lines Facebook drew in 2020. It asks whether systems, taken as a whole, create foreseeable harms, and it places the burden of demonstrating otherwise on the platforms themselves. That is a meaningfully different question than whether a specific piece of content was generated by a machine learning model.
Meanwhile, the AI video problem Facebook’s 2020 policy was nominally designed to address has grown by orders of magnitude. Generative video tools that required specialized expertise and significant compute in 2020 are now embedded in consumer applications. The volume of synthetic media circulating on major platforms has increased dramatically, and the platforms’ enforcement of their own content moderation policies around it has been described by independent researchers as inconsistent at best.
What we owe the information we share
The question was never whether AI-generated video was dangerous. The question was whether a platform that profits from viral content can be trusted to police it — and the answer, consistently, has been no.
The 2020 deepfake policy revealed something important precisely because it was so narrow. Platforms do not draw narrow lines accidentally. A policy that covers only video, and only video that is both deceptive and machine-made, is a policy designed to cover as little as possible while generating the maximum amount of positive press. It is a pattern that has repeated across content moderation decisions for years: announce a rule, define it carefully enough that most violating content falls outside it, and let the headlines do the work.
What Facebook’s 2020 announcement could not acknowledge, and what platforms have resisted acknowledging ever since, is the publisher problem. Every major news organization, from the New York Times to a local television affiliate, operates under the understanding that it is responsible for what it publishes. Not perfectly, not without error, but accountably. When the Times gets something wrong, there is a corrections process. When a journalist fabricates a story, there are consequences. The institution owns what it puts in front of readers.
Platforms have spent enormous resources arguing they are categorically different. Section 230 of the Communications Decency Act, the legal provision that shields them from liability for user-generated content, has been the cornerstone of that argument in the United States. The argument has real merit in the context it was designed for — a bulletin board hosting user posts should not be treated the same as a newspaper editorial board. But the bulletin board framing does not describe what major social platforms have become. An algorithm that decides, in real time, which content to amplify to which users is making editorial decisions, even if no human editor reviewed the specific post.
This is the adjustment in perspective that still needs to happen. Labeling deepfakes, or banning them, or watermarking AI-generated content addresses one symptom of a platform ecosystem that has not resolved its fundamental accountability question. The platforms that shape what hundreds of millions of people see and believe are not infrastructure. They are, by every meaningful definition, publishers. The conversation about what they owe the public will be more productive once that is the starting assumption rather than the conclusion we keep arguing toward.