Researchers found that X’s algorithm permanently shifts political opinions rightward — and turning it off doesn’t reverse the damage

  • Tension: A new independent study published in Nature demonstrates that X’s default algorithmic feed isn’t politically neutral — it measurably shifts users’ opinions to the right, contradicting earlier research suggesting algorithms don’t change political attitudes.
  • Noise: The debate over algorithmic neutrality has been shaped by a single Meta-funded study suggesting algorithms don’t influence political beliefs. This new independent research on X challenges that conclusion directly and reveals the shifts are structurally irreversible.
  • Direct Message: X’s default feed is doing invisible political work on every user — introducing conservative accounts that permanently alter information environments even after the algorithm is turned off, making the platform’s architecture itself an editorial force with measurable ideological consequences.

To learn more about our editorial approach, explore The Direct Message methodology.

A seven-week experiment involving thousands of Americans just produced one of the most significant findings in platform accountability research to date: X’s algorithmic feed — the default setting for every user on the platform — measurably shifts political opinions to the right. And turning the algorithm off doesn’t undo the damage.

The study, published in the journal Nature, is the first to independently test the political effects of X’s recommendation algorithm without any cooperation from the platform itself. Conducted in summer 2023 with active U.S.-based X users, the research found that exposure to the algorithmic feed shifted participants’ political opinions in a conservative direction — with measurable effect sizes. Small by statistical convention. Not small at all when you multiply them by hundreds of millions of users.

Photo by Pixabay on Pexels

“Feed algorithms decide what billions of people see on social media every day,” said the research team at the Paris School of Economics. “Whether they also shape what people think is one of the most important open questions in the social sciences.”

What makes this study land differently than previous research is its independence — and its findings about irreversibility. A major prior study conducted in collaboration with Meta during the 2020 U.S. election found that turning off Facebook’s algorithm had no measurable effect on political attitudes. That result was widely cited as evidence that algorithm anxiety was overblown. This new research has now complicated that narrative considerably.

The mechanism matters here. According to the study’s findings, X’s algorithm doesn’t just surface conservative content more prominently — it actively buries posts from traditional news media outlets. The algorithmic feed promoted posts annotated as conservative while demoting journalistic sources, creating what researchers describe as a fundamentally altered information diet. Posts in the algorithmic feed received substantially more engagement than those in the chronological feed, amplifying the perception that conservative viewpoints represented mainstream consensus.
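
To make the promote-and-demote mechanic concrete, here is a deliberately simplified re-ranking sketch in Python. It is a toy model, not X’s actual system: the labels, boost weights, and scoring function are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    label: str       # hypothetical annotation: "conservative", "news_media", or "other"
    engagement: int  # a stand-in for predicted engagement

# Invented boost weights, purely for illustration -- not X's real parameters.
BOOST = {"conservative": 1.5, "news_media": 0.5, "other": 1.0}

def algorithmic_feed(posts: list[Post]) -> list[Post]:
    """Toy engagement ranker that promotes one label and demotes another."""
    return sorted(posts, key=lambda p: p.engagement * BOOST[p.label], reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Baseline: posts in arrival order (input assumed reverse-chronological)."""
    return list(posts)
```

Even with identical inputs, the two functions produce different feeds, which is the sense in which a default ranking rule is an editorial choice.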

This is worth sitting with. The study included participants across the political spectrum. The algorithm shifted political opinions rightward across this mixed population — not by showing people content they were already seeking, but by reshaping what appeared in front of them.

“The main takeaway is that social media feed algorithms are not politically neutral,” the researchers told PsyPost. “In our experiment with U.S.-based X users in summer 2023, switching on X’s algorithmic feed shifted political opinions to the right.”

The irreversibility finding is the part that should alarm platform researchers and policymakers alike. When participants who had been exposed to the algorithmic feed were switched back to the chronological feed, their political opinion shifts largely persisted. The reason, according to the study, is structural: users who spent time on the algorithmic feed began following new accounts aligned with conservative perspectives. Those follows persisted after reverting to chronological mode, meaning the algorithm had permanently altered their information environment through behavioral changes that outlasted the algorithmic exposure itself.

Think of it as a one-way door. The algorithm introduces you to accounts you wouldn’t have found organically. You follow them. When the algorithm turns off, those accounts are still in your feed. The information diet has been changed at the source level — not just the curation level. This is what researchers might call a form of path dependence — the idea that a temporary exposure to a system permanently reshapes the landscape you navigate afterward.
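
The one-way-door dynamic is simple enough to sketch as a thought experiment in code. What follows is a hypothetical toy model, not the study’s methodology; the account names are invented, and the point is only that state accumulated during exposure (the follow list) survives after the exposure ends.

```python
# Toy model of the one-way door. Hypothetical illustration, not the study's method.
followed: set[str] = {"friend_a", "friend_b"}   # accounts chosen organically
recommended = {"pundit_x", "pundit_y"}          # accounts surfaced by the algorithm

def feed_sources(algorithm_on: bool) -> set[str]:
    """The accounts a user sees: their follows, plus algorithmic suggestions when enabled."""
    return followed | recommended if algorithm_on else set(followed)

# Phase 1: algorithmic feed on. The user follows an account the algorithm surfaced.
assert "pundit_x" in feed_sources(algorithm_on=True)
followed.add("pundit_x")                        # a behavioral change made during exposure

# Phase 2: algorithm off. The feed is chronological again, but the follow persists,
# so the information diet never returns to its pre-exposure state.
assert "pundit_x" in feed_sources(algorithm_on=False)
```

Reverting the ranking rule does not revert the graph: the follows created under the algorithm keep feeding the chronological timeline.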

Photo by Pixabay on Pexels

The timing of this study — conducted in summer 2023, months after Elon Musk completed his acquisition of the platform — raises obvious questions about whether the algorithmic shift reflects deliberate editorial decisions or emergent properties of engagement-maximizing systems. The study itself doesn’t claim to answer that question directly. But the data is unambiguous about the outcome.

This research arrives in a moment when questions about algorithmic curation across social platforms are intensifying. Meta has been investing heavily in AI-driven feed recommendations. TikTok’s algorithm has faced congressional scrutiny. And yet X has received comparatively little rigorous independent study — in part because, unlike Meta’s collaboration with academics in 2020, X provided no cooperation for this research. The research team worked entirely from outside the platform.

The study also reframes a conversation that has been happening largely in the abstract. When we talk about passive scrolling habits and their psychological effects, we tend to focus on mental health outcomes — loneliness, anxiety, attention fragmentation. This work introduces something more specific and more measurable: political opinion formation happening beneath conscious awareness, driven by systems users didn’t choose and can’t fully see.

“What you see on social media is not a neutral reflection of the world or even of the accounts you choose to follow,” the researchers noted. “Algorithms actively shape your information diet, and those changes can stick — i.e., they are not easily reversible.”

There’s a psychological concept that applies here — the belief that your thoughts and opinions are genuinely your own, formed through deliberate reasoning and chosen exposure. The Nature study doesn’t just challenge this idea in the abstract. It provides experimental evidence that a platform’s default setting — one most users never think to change — is quietly rewriting the political information environment in a specific ideological direction.

The distinction between the Meta study and this one is crucial for anyone trying to understand what platforms actually do to political belief. Meta’s 2020 research suggested algorithms didn’t matter much for political attitudes. This finding on X suggests they matter enormously — but the effect may depend on which platform, which algorithm, and which moment in that platform’s evolution. The idea that “algorithms don’t change minds” was always a convenient conclusion for the industry. It’s now a contested one.

The feedback loops embedded in digital platforms are not neutral infrastructure. They are editorial systems with political consequences — systems that billions of people use as their primary window into public life. The Nature study doesn’t tell us what to do about that. But it makes the scope of the problem impossible to dismiss.

What the researchers have demonstrated is something quieter and more durable than a single piece of incendiary content going viral. They’ve shown that the default architecture of a platform — the feed you see if you change nothing, question nothing, simply open the app — is doing political work. Every day. On every user. Without asking permission and without leaving a mark visible enough for most people to notice.

The shifts are small per person. Aggregated across a platform with hundreds of millions of users during election cycles, they are something else entirely.

Feature image by Mikhail Nilov on Pexels

Rachel Summers

Rachel Summers is a behavioral psychology writer and cultural commentator based in New York. With a background in social psychology and over a decade of experience exploring why people think, act, and feel the way they do, Rachel's work sits at the intersection of science and everyday life. She writes about emotional intelligence, generational patterns, relationship dynamics, and the quiet psychology behind modern living.
