Your ad dollars are funding someone else’s propaganda and the algorithm doesn’t care

  • Tension: Brands preach values-driven marketing while their ad budgets silently bankroll misinformation ecosystems they claim to oppose.
  • Noise: The industry obsesses over impressions and reach while ignoring the toxic environments where those impressions actually land.
  • Direct Message: Every ad placement is a moral endorsement, and willful ignorance of where your dollars go is a strategy that erodes your brand from within.

To learn more about our editorial approach, explore The Direct Message methodology.

This article is part of our editorial archive. Originally published in 2017 by Kim Davis, it has been reviewed and updated to ensure accuracy and relevance for today’s readers.

You ran the numbers. You optimized the funnel. You A/B tested the creative until the click-through rate hit a number your team could celebrate over lunch. And while you were refining your targeting parameters and congratulating yourself on cost-per-acquisition, your ad dollars were quietly appearing next to a fabricated health scare, a conspiracy theory about election fraud, or an article designed to radicalize someone’s uncle.

You didn’t choose that placement. You didn’t approve it. But your brand logo sat right there, lending credibility to content engineered to mislead.

The algorithm delivered exactly what it was built to deliver: volume. It found eyeballs. It served impressions. What it didn’t do, and will never do, is ask whether the context surrounding your ad is corroding the very trust your brand depends on.

During my time working with tech companies in the Bay Area, I watched this play out from the inside. Growth teams celebrated record impression counts while brand safety teams scrambled to explain why the company’s display ads had surfaced on extremist forums.

The metrics looked immaculate. The reality was a reputational time bomb. This is the contradiction at the center of modern digital advertising: the tools we’ve built to reach people at scale are structurally indifferent to the consequences of that reach.

The Uncomfortable Subsidy Hidden in Your Media Budget

Let’s be honest about what programmatic advertising actually is. At its core, it’s an automated auction where algorithms bid for ad space across millions of websites in milliseconds. The system rewards engagement, traffic volume, and audience match. What it does not reward is editorial integrity, factual accuracy, or the social impact of the content it monetizes.
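The indifference is easy to see if you sketch the auction's objective function. This is a deliberately simplified toy model, not any real demand-side platform's logic; the `Bid` fields and scoring are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    cpm: float             # price offered per thousand impressions
    audience_match: float  # 0..1 targeting relevance score

def pick_winner(bids: list[Bid]) -> Bid:
    """Toy auction: rank purely on price times audience relevance.

    Note what is absent from the ranking: there is no term for
    editorial integrity, factual accuracy, or the social impact
    of the page being monetized.
    """
    return max(bids, key=lambda b: b.cpm * b.audience_match)

bids = [
    Bid("brand_a", cpm=4.50, audience_match=0.9),   # score 4.05
    Bid("brand_b", cpm=6.00, audience_match=0.5),   # score 3.00
]
winner = pick_winner(bids)
```

Everything the system optimizes is inside that one expression; everything this article is about is outside it.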

This creates a perverse incentive structure. Misinformation generates outrage. Outrage generates clicks. Clicks generate traffic. Traffic generates ad revenue. Your brand’s media budget, allocated with the best of intentions and the sharpest of targeting criteria, becomes a financial lifeline for publishers whose entire business model depends on manufacturing fear, confusion, and tribal rage.

A study published in JAMA Network Open made this dynamic painfully concrete. Neeraj G. Patel and fellow researchers found that advertising payments from government and health organizations to news websites publishing health misinformation may inadvertently support the spread of false information, potentially diminishing trust in those very organizations.

Think about the cruel irony: a public health agency spending money to promote vaccination awareness, only to have those dollars fund a website that publishes anti-vaccine content. The message undermines itself before it ever reaches its intended audience.

I keep a journal of marketing campaigns that failed spectacularly. I call it my “anti-playbook.” The most instructive entries aren’t about bad creative or poor targeting. They’re about brands that destroyed trust through context. A luxury skincare line whose ads ran alongside conspiracy content about chemical contamination. A children’s education platform whose banners appeared on a site promoting pseudoscientific parenting advice. In each case, the performance metrics were strong. The damage was invisible until it wasn’t.

The tension here is real and unresolved. Marketers say they care about brand values. They build elaborate brand guidelines, invest in purpose-driven campaigns, and publicly commit to social responsibility. Then they hand their media budgets to automated systems that treat a Pulitzer-winning investigation and a fabricated clickbait article as functionally identical inventory, as long as the CPM is right.

Why the “We Didn’t Know” Defense No Longer Works

The industry has produced a staggering amount of noise around this problem, most of it designed to create the illusion of action while preserving the profitable status quo.

Brand safety tools, keyword blocklists, domain exclusion lists, third-party verification vendors: an entire cottage industry has emerged promising to solve the problem of ad adjacency. And yet the problem persists, because these solutions address symptoms while leaving the underlying architecture untouched.

Consider the conventional wisdom: “Just blacklist the bad sites.” This sounds reasonable until you realize that misinformation doesn’t live on a static list of known bad actors. It migrates. It rebrands. It appears on sites that were perfectly reputable last Tuesday but published something reckless this morning.

Keyword blocklists are equally blunt instruments. Block the word “shooting” and you exclude legitimate news coverage. Block “election fraud” and you miss the investigative journalism debunking the very claims you want to avoid. The tools are playing whack-a-mole against a system that generates new moles faster than any blacklist can update.
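The bluntness is mechanical: a substring match on a headline cannot tell coverage from the thing being covered. A minimal sketch, with invented headlines and an invented blocklist:

```python
# Hypothetical keyword blocklist of the kind many brand-safety tools apply
BLOCKLIST = {"shooting", "election fraud"}

def is_blocked(headline: str) -> bool:
    """Naive substring matching, as typical blocklist tooling does it."""
    h = headline.lower()
    return any(term in h for term in BLOCKLIST)

# Both of these are legitimate journalism, and both get swept up:
news = is_blocked("Community rallies after school shooting, raises $2M")
debunk = is_blocked("Fact check: the viral election fraud claims, debunked")
```

Both calls return `True`: the tool excludes the debunking alongside the disinformation it debunks.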

Kim Davis, who originally wrote this article back in 2017, captured the core dilemma with clarity: “Marketers and advertisers don’t want to waste dollars on campaigns that are invisible, or impressions that are fraudulent. And beyond that, they don’t want to spend money pushing messages into environments their customers will find inappropriate, offensive, hostile, or risky.” That statement was published years ago. The fact that it remains perfectly relevant tells you everything about how little structural progress has been made.

What I’ve found analyzing consumer behavior data is that the damage goes deeper than brand perception surveys typically capture. Consumers don’t always consciously register which website an ad appeared on. But the associative machinery of the brain is always running. Repeated exposure to a brand in low-trust environments creates a subtle, cumulative erosion of credibility that shows up later as unexplained dips in conversion rates, declining brand favorability scores, and a vague consumer sense that something about the brand feels “off.” The damage is real. It’s measurable. And it’s happening to brands that believe their verification tools have the problem handled.

The Clarity Beneath the Complexity

Every dollar you spend on advertising carries an implicit endorsement of the environment where it appears. The algorithm will never care about that endorsement. You have to. Reclaiming control of your media supply chain is the most consequential brand decision you’re avoiding.

Research from Marco Visentin, Associate Professor of Management and Marketing at the University of Bologna, published in the Journal of Interactive Marketing, confirms what intuition suggests: consumers’ perceptions of fake news can negatively affect their behavioral intentions toward adjacent brand advertisements, especially when the news source is deemed credible.

The contamination flows from content to brand, and the more legitimate the misinformation source appears, the worse the spillover effect. Your ad doesn’t exist in a vacuum. It exists in a context. And context is shaping your brand story whether you’re directing it or not.

Building a Media Strategy That Reflects What You Actually Stand For

Growing up in a small town in Oregon where the nearest mall was two hours away, I developed a particular skepticism about the distance between what companies say and what they do.

When a brand tells you it cares about community, about truth, about doing the right thing, and then funnels advertising revenue to publishers trafficking in disinformation, that gap between stated values and actual behavior becomes the real brand story. Consumers may not articulate it in those terms, but they feel it.

So what does reclaiming control actually look like in practice?

First, it means treating your media supply chain with the same rigor you apply to your physical supply chain. No serious consumer goods company would shrug and say, “We don’t really know which factories make our products.” Yet that’s essentially what brands do when they hand budgets to demand-side platforms and accept whatever inventory the algorithm selects.

Demand transparency. Audit placements continuously, not just quarterly, and spot-check them by hand. If your verification vendor can’t tell you exactly where every impression ran, find one that can.
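A continuous audit can start as something very simple: reconcile the impression log against your verified-publisher allowlist and flag everything else. The domains, log format, and function below are illustrative assumptions, not a real vendor's schema:

```python
import csv
import io

# Hypothetical allowlist of verified, manually reviewed publishers
ALLOWLIST = {"trustednews.example", "qualitymag.example"}

def audit_impressions(log_csv: str) -> list[dict]:
    """Return every logged impression whose domain is not on the allowlist."""
    flagged = []
    for row in csv.DictReader(io.StringIO(log_csv)):
        if row["domain"] not in ALLOWLIST:
            flagged.append(row)
    return flagged

# Example impression log as it might arrive from a reporting export
log = """impression_id,domain,cpm
1,trustednews.example,5.10
2,outragemill.example,1.20
3,qualitymag.example,4.80
"""
violations = audit_impressions(log)
```

Run daily against the full log, this turns "we didn't know" into a report someone has to read, which is the point.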

Second, it means accepting that brand safety has a cost. Restricting placements to verified, high-quality publishers will reduce your reach. Your CPMs will increase. Your impression counts will drop. And your actual business outcomes will likely improve, because you’ll be reaching real people in environments where they’re receptive, attentive, and predisposed to trust the messages they encounter.

Third, it means rethinking the metrics that define success. If your primary KPIs are reach and impressions, you are optimizing for the exact dynamics that make this problem worse. Shift toward engagement quality, brand lift in verified environments, and customer lifetime value attributed to specific publisher categories. The data infrastructure to do this exists. What’s often missing is the organizational will to prioritize it over vanity metrics that look impressive in a slide deck.
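The lifetime-value attribution in particular requires no exotic tooling, just joining customers to the publisher category that acquired them. A minimal sketch with invented records and categories:

```python
from collections import defaultdict

# Hypothetical customer records: acquisition source category and lifetime value
customers = [
    {"publisher_category": "verified_news", "ltv": 320.0},
    {"publisher_category": "verified_news", "ltv": 280.0},
    {"publisher_category": "long_tail_programmatic", "ltv": 40.0},
]

def avg_ltv_by_category(rows: list[dict]) -> dict[str, float]:
    """Average customer lifetime value per acquiring publisher category."""
    totals: dict[str, float] = defaultdict(float)
    counts: dict[str, int] = defaultdict(int)
    for r in rows:
        totals[r["publisher_category"]] += r["ltv"]
        counts[r["publisher_category"]] += 1
    return {cat: totals[cat] / counts[cat] for cat in totals}

avg = avg_ltv_by_category(customers)
```

When a table like this sits next to the impression counts, the vanity metrics stop winning the argument on their own.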

Finally, and this may be the hardest part, it means accepting responsibility. The “we didn’t know” era is over. The research is public. The mechanisms are well understood. Every brand that continues to pour budget into opaque programmatic channels without rigorous supply-chain oversight is making a choice. It may be a passive choice. It may be an uninformed choice. But it is a choice, and consumers are increasingly unwilling to grant the benefit of the doubt to brands that profit from ignorance they could have corrected.

The algorithm will continue to optimize for what it was designed to optimize for. It will never develop a conscience. That responsibility belongs to the people who write the checks. The question for every marketer is simple and uncomfortable: does your media strategy reflect your values, or does it fund their opposite?

Wesley Mercer

Writing from California, Wesley Mercer sits at the intersection of behavioural psychology and data-driven marketing. He holds an MBA (Marketing & Analytics) from UC Berkeley Haas and a graduate certificate in Consumer Psychology from UCLA Extension. A former growth strategist for a Fortune 500 tech brand, Wesley has presented case studies at the invite-only retreats of the Silicon Valley Growth Collective and his thought-leadership memos are archived in the American Marketing Association members-only resource library. At DMNews he fuses evidence-based psychology with real-world marketing experience, offering professionals clear, actionable Direct Messages for thriving in a volatile digital economy. Share tips for new stories with Wesley at [email protected].