Scientists say X’s algorithm doesn’t just show you conservative content — it rewires who you follow so the shift becomes permanent

  • Tension: X’s recommendation algorithm doesn’t just surface content — it measurably shifts users’ political opinions to the right, and switching back to the chronological feed doesn’t undo the shift.
  • Noise: Platform defenders frame algorithmic feeds as neutral content curation tools, while critics treat all social media influence as equivalent. Neither accounts for the one-directional, irreversible nature of what this study found.
  • Direct Message: The algorithm doesn’t need to stay on to keep working. It changes who you follow, and those follows persist — meaning a temporary exposure to algorithmic curation permanently restructures your information environment in ways you may never recognize.

To learn more about our editorial approach, explore The Direct Message methodology.

A field experiment involving thousands of Americans has produced what may be the most unsettling finding yet about social media’s role in political life: X’s algorithmic feed doesn’t just surface content you’re likely to engage with — research suggests it measurably shifts political opinions to the right. And switching back to the chronological feed doesn’t undo the damage.

The researchers recruited active X users based in the United States and randomly assigned them to either use the platform’s default algorithmic feed or switch to the chronological feed, then measured what happened to their political attitudes over the course of the experiment. The results weren’t subtle. Exposure to the algorithmic feed appeared to shift users’ political opinions in a conservative direction. Small by statistical convention. Not small when you scale it across hundreds of millions of users.
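
To make the design concrete, here is a minimal sketch of how the effect in a randomized feed experiment like this one can be estimated. This is not the study’s actual code: the sample size, the attitude scale, and the simulated drift are all invented for illustration.

```python
import random
import statistics

# Hypothetical participant records with a pre- and post-experiment attitude
# score (say, -1 = most liberal, +1 = most conservative) and a randomly
# assigned feed condition. All values here are simulated placeholders.
random.seed(0)
participants = []
for _ in range(2000):
    feed = random.choice(["algorithmic", "chronological"])
    pre = random.gauss(0.0, 0.3)
    # Simulated outcome: a small rightward drift only in the algorithmic arm.
    drift = 0.05 if feed == "algorithmic" else 0.0
    post = pre + drift + random.gauss(0.0, 0.1)
    participants.append({"feed": feed, "pre": pre, "post": post})

def mean_shift(feed):
    """Average pre-to-post attitude change within one experimental arm."""
    shifts = [p["post"] - p["pre"] for p in participants if p["feed"] == feed]
    return statistics.mean(shifts)

# Because assignment is random, the difference in average shift between arms
# estimates the causal effect of the algorithmic feed on attitudes.
effect = mean_shift("algorithmic") - mean_shift("chronological")
print(f"Estimated attitude shift attributable to the algorithm: {effect:+.3f}")
```

The random assignment is what does the work here: any systematic difference in attitude shift between the two arms can be attributed to the feed itself, not to who chose which feed.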

Photo by Anna Shvets on Pexels

“Feed algorithms decide what billions of people see on social media every day,” researchers have noted. “Whether they also shape what people think is one of the most important open questions in the social sciences.”

What makes this research different from previous studies — and what gives it its particular sting — is what it tested. Previous research conducted in collaboration with Meta during the 2020 U.S. election found that turning off Meta’s algorithm had no measurable effect on political attitudes. This newer study flipped the question: what happens when you turn the algorithm on? The distinction matters enormously. As I covered previously, this is the first quantitative study to test algorithmic activation rather than deactivation — and the asymmetry it revealed is striking.

“The most striking finding for us was the asymmetry,” researchers have reported. “We expected the algorithm to have some effect on political attitudes, but we did not expect the effects to be so clearly one-directional: switching the algorithm on shifted opinions, but switching it off did not reverse them.”

That irreversibility is the finding that should keep platform regulators up at night. The mechanism researchers identified is deceptively simple: X’s algorithm exposes users to conservative content creators and political activists they wouldn’t otherwise encounter. Users then follow those accounts. Once followed, those voices appear in the chronological feed too — meaning the algorithm’s influence persists even after a user opts out of algorithmic curation. The algorithm doesn’t just show you things. It changes who you listen to. And that change is sticky.
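
The stickiness is easy to see in a toy model. The sketch below is purely illustrative (the account pool, the lean labels, and the follow probability are assumptions, not the study’s data): it simulates a user whose follow list grows during an algorithmic phase, then shows that a follows-only chronological feed still reflects those additions after the switch back.

```python
import random

random.seed(1)

# Toy account pool; each account gets a lean label for illustration only.
accounts = [f"acct_{i}" for i in range(200)]
lean = {a: random.choice(["conservative", "other"]) for a in accounts}

follows = set(random.sample(accounts, 20))  # the user's starting follow list

def conservative_share():
    """Fraction of the user's follows that are conservative-leaning,
    which is what a follows-only chronological feed would reflect."""
    return sum(lean[a] == "conservative" for a in follows) / len(follows)

def algorithmic_feed(k=10):
    """Recommends beyond the follow graph, tilted toward one lean."""
    tilted = [a for a in accounts if lean[a] == "conservative"] * 3 + accounts
    return random.sample(tilted, k)

before = conservative_share()

# Phase 1: the algorithmic feed surfaces new accounts; some get followed.
for _ in range(30):
    for account in algorithmic_feed():
        if random.random() < 0.1:  # occasionally follow a recommendation
            follows.add(account)

# Phase 2: the user switches back to the chronological feed. The algorithm
# is gone, but the follows it produced are not, so the feed stays shifted.
after = conservative_share()
print(f"Conservative share of follows: {before:.0%} before, {after:.0%} after")
```

The point of the toy model is that nothing needs to be biased about the chronological feed itself; the tilt arrives through the follow graph the algorithm left behind.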

Research has also found that X’s algorithm actively promotes conservative content and political activists while demoting traditional news media outlets. Posts in the algorithmic feed received substantially more likes, reposts, and comments than those in the chronological feed — a dynamic that reinforces the visibility of politically charged content over journalistic reporting. The algorithm, in other words, doesn’t just have a political lean. It has a media preference — and that preference runs away from institutional journalism and toward individual political voices.
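
Measured naively, the engagement gap the researchers describe would look something like this; the numbers and record layout below are made up for the sketch.

```python
from statistics import mean

# Hypothetical engagement logs: (feed condition, likes + reposts + comments
# received by a post seen in that condition). Numbers invented for the sketch.
posts = [
    ("algorithmic", 42), ("algorithmic", 130), ("algorithmic", 77),
    ("chronological", 12), ("chronological", 30), ("chronological", 21),
]

def avg_engagement(condition):
    """Mean engagement per post within one feed condition."""
    return mean(n for feed, n in posts if feed == condition)

uplift = avg_engagement("algorithmic") / avg_engagement("chronological")
print(f"Algorithmic-feed posts draw {uplift:.1f}x the average engagement")
```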

Photo by Edmond Dantès on Pexels

This matters for reasons that extend well beyond partisan politics. What this research describes is something I’d call algorithmic path dependence — the idea that a short exposure to a recommendation system can permanently alter someone’s information environment in ways that compound over time. You don’t need to stay on the algorithmic feed forever. You just need to be on it long enough to follow a handful of new accounts. Those accounts become part of your self-curated world. The algorithm wrote itself into your choices, then stepped back and let the choices do the work.

The research was reportedly conducted independently, without cooperation from X or access to the platform’s internal data. That independence is notable, and it raises a question about what platform-funded research might or might not choose to measure. When Meta partnered with academics to study its own algorithm, the conclusion was reassuringly benign: turning it off didn’t change people’s minds. This independent work, asking a different question of a different platform, reached a far less comfortable answer.

There’s a phrase gaining traction in digital rights circles — cognitive sovereignty, the idea that individuals have a right to form their own beliefs without invisible manipulation. The concept applies to Meta’s AI ambitions as much as it does to X’s recommendation engine. But this research gives it empirical teeth. If an algorithm can shift political opinions in a single direction, and if that shift survives the algorithm’s removal, then the question of what counts as manipulation — versus what counts as mere content curation — becomes urgent in ways the tech industry has not yet been forced to address.

Researchers have emphasized: “What you see on social media is not a neutral reflection of the world or even of the accounts you choose to follow. Algorithms actively shape your information diet, and those changes can stick — i.e., they are not easily reversible.”

The implications ripple outward. For the millions of heavy scrollers who consume social media passively — never posting, just absorbing — the algorithmic feed isn’t a tool they’re using. It’s a tool being used on them. And the rightward shift documented in this research isn’t a conspiracy or a content moderation failure. It appears to be a structural property of how X’s recommendation system works — one that the platform’s owner, Elon Musk, has shown no public interest in correcting.

What compounds the concern is the broader AI landscape in which this finding arrives. Recommendation algorithms are only becoming more sophisticated. They’re learning not just what you click on but how long you hover, what makes you pause, what emotional register keeps you scrolling. If a relatively simple feed-ranking algorithm can produce measurable ideological shifts — shifts that don’t reverse — then the next generation of recommendation systems, powered by large language models and real-time behavioral prediction, will be operating on a population that has no framework for recognizing what’s happening to it.

This research doesn’t tell us whether X’s rightward push is intentional or emergent — whether it reflects a deliberate editorial choice or an optimization function that happens to reward conservative content because it generates more engagement. Both possibilities are troubling, but they require different responses. An intentional bias is a governance problem. An emergent one is an engineering problem. Either way, the people absorbing the shift didn’t choose it, didn’t notice it, and — according to the data — can’t easily undo it.

That’s the part that lingers. Not that an algorithm has a political direction — many of us suspected as much. The part that lingers is the irreversibility. The idea that you can opt out of the algorithmic feed, return to chronological order, reclaim your attention — and still be carrying the fingerprints of a system you thought you’d left behind. The follows you made. The voices you internalized. The slow, invisible recalibration of what sounds reasonable and what sounds extreme.

An algorithm doesn’t need to change your mind permanently to change your life permanently. It just needs to change who you’re listening to. That, according to a field experiment involving thousands of Americans, is exactly what it does.

Feature image by Kerde Severin on Pexels

Rachel Summers

Rachel Summers is a behavioral psychology writer and cultural commentator based in New York. With a background in social psychology and over a decade of experience exploring why people think, act, and feel the way they do, Rachel's work sits at the intersection of science and everyday life. She writes about emotional intelligence, generational patterns, relationship dynamics, and the quiet psychology behind modern living.
