Researchers discover AI language models are getting better at predicting which headlines people will click

  • Tension: We believe we’re immune to manipulation—yet we consistently click on headlines designed to exploit our psychology.
  • Noise: The endless debate about AI replacing journalists obscures a more uncomfortable question: what does our preference for AI-crafted headlines reveal about us?
  • Direct Message: The real insight isn’t that machines can manipulate us—it’s that effective communication has always been about understanding what people actually want, not what they say they want.


Here’s an uncomfortable finding from a recent study: when given a choice between headlines written by professional journalists and those generated by ChatGPT, readers preferred the AI-generated versions over 70% of the time.

The study, published in Journalism and Media, didn’t set out to prove AI superiority. It set out to understand what makes headlines work. What it found instead was a mirror reflecting something about ourselves we might not want to see.

The researchers took 100 articles with human-written clickbait headlines from major Romanian news outlets. They then had ChatGPT generate two alternative headlines for each article—one designed to be clickbait, one designed to be purely informative. Six hundred university students evaluated the results.

The outcome challenges our assumptions about authenticity and manipulation in media. ChatGPT’s clickbait headlines came out on top (37.5%), followed by its informative headlines (33.4%), with the original human-written headlines trailing at just 29.2%. This wasn’t about AI mimicking humans well enough to fool us. It was about AI understanding and delivering what we actually respond to.

The Mechanics of Attention

What makes a headline compelling? The study’s linguistic analysis reveals some predictable patterns: dynamic verbs that suggest action (“threatens,” “reveals,” “explodes”), emotionally charged adjectives (“shocking,” “incendiary”), and structures that create what researchers call the “curiosity gap”—the space between what you know and what you want to know.

AI, it turns out, is particularly good at deploying these techniques with precision. Where human writers might hedge or bury the hook, ChatGPT goes straight for the psychological lever.

Consider the difference the researchers documented. A human journalist wrote: “When is it advisable to read the Nativity Akathist. It is considered the most powerful prayer for Christmas.” Informative, certainly. But ChatGPT’s version: “The most powerful Christmas prayer that brings you health and peace! Find out when to read the Nativity Akathist!” Same information, entirely different emotional register.

The AI version does something the human version doesn’t: it promises a benefit, creates urgency, and directly addresses the reader. It’s more manipulative, yes — but also, apparently, more effective at capturing attention.

The Contradiction We Live With

Here’s where the study’s findings become genuinely interesting. When the same participants were asked what they valued in headlines, over 54% said clarity and accurate representation of content were “very important.” Only 15.5% rated “shocking or surprising” headlines as very important. Just 13.4% highly valued “neutral and objective” framing.

And yet.

When it came to actual behavior (which headlines made them want to click), the pattern reversed. Less than 34% chose the neutral, informative option. The majority gravitated toward headlines engineered to provoke an emotional response.

This gap between stated preferences and revealed preferences is well-documented in behavioral psychology. We say we want substance; we click on spectacle. We claim to value objectivity; we respond to provocation. The study quantifies this contradiction with uncomfortable precision.

Meanwhile, the frustration is real. Nearly two-thirds of participants reported encountering misleading headlines “frequently” or “very often,” and over 80% said they felt frustrated when a headline didn’t match the article’s content. We’re annoyed by the manipulation — and we reward it anyway.

What Conventional Wisdom Gets Wrong

The AI-in-journalism conversation has been dominated by a false binary: either AI will replace human writers, or it won’t. Either AI content is authentic, or it’s manipulative.

This framing misses the point entirely.

The study’s authors note that AI-generated headlines don’t succeed because they’re deceptive in ways human headlines aren’t. They succeed because they’re more consistently, more precisely calibrated to how attention actually works. Human journalists have always known that headlines are a form of persuasion — the best headline writers in any newsroom have always been those who understand reader psychology.

AI just removes the friction.

There’s also a convenient myth that audiences want “pure information” and that only commercial pressures push media toward sensationalism. The data suggests otherwise. When given three options (human clickbait, AI clickbait, and AI informative), readers chose some form of clickbait nearly 67% of the time — 37.5% for the AI version plus 29.2% for the human one. The demand exists independently of the supply.

This doesn’t absolve media organizations of responsibility. But it does complicate the narrative that positions audiences as passive victims of algorithmic manipulation.

The Direct Message

The real insight isn’t that machines can manipulate us. It’s that effective communication has always been about understanding what people actually want, not what they say they want.

What This Means for Anyone Who Writes for Attention

The study’s implications extend far beyond newsrooms. Anyone creating content for digital audiences—marketers, founders, educators, anyone competing for attention in crowded information environments—confronts the same tension.

The lesson isn’t “use AI to write manipulative headlines.”

The lesson is subtler: understand the gap between your audience’s stated preferences and their actual behavior, then decide consciously where you want to position yourself on that spectrum.

Some practical clarity from the findings: readers consistently valued headlines that “pique interest and stimulate curiosity” (over 74% rated this highly) and that “contain important information” (75% rated highly). The winning combination isn’t pure clickbait or pure information — it’s genuine substance delivered with psychological awareness.

What AI does exceptionally well is the execution of known principles. What it doesn’t do—yet—is make the ethical and strategic judgment about when to deploy which technique, for which audience, in service of what goal.

That judgment remains human territory. So does the responsibility.

The researchers acknowledge their findings come with limitations—a single country, a young demographic, one AI model.

More research will follow. But the core finding resonates because it confirms what anyone who’s worked in digital media already suspects: we’re all playing a game of attention economics, and AI is getting better at that game faster than most of us are comfortable admitting.

The question isn’t whether to use these tools. The question is whether you understand what you’re optimizing for when you do.


Wesley Mercer

Writing from California, Wesley Mercer sits at the intersection of behavioural psychology and data-driven marketing. He holds an MBA (Marketing & Analytics) from UC Berkeley Haas and a graduate certificate in Consumer Psychology from UCLA Extension. A former growth strategist for a Fortune 500 tech brand, Wesley has presented case studies at the invite-only retreats of the Silicon Valley Growth Collective, and his thought-leadership memos are archived in the American Marketing Association members-only resource library. At DMNews he fuses evidence-based psychology with real-world marketing experience, offering professionals clear, actionable Direct Messages for thriving in a volatile digital economy. Share tips for new stories with Wesley at wesley@dmnews.com.
