A new spam wave is hitting brands—what to do before it spreads

  • Tension: Brands are chasing authenticity by going “unfiltered” on social media while their own comment sections decay into scam-ridden wastelands.
  • Noise: The “block and report” reflex reduces a systemic brand trust crisis to an individual moderation chore anyone can handle.
  • Direct message: Spam is a revenue problem disguised as a nuisance problem, and brands that treat it otherwise are funding their own erosion.

To learn more about our editorial approach, explore The Direct Message methodology.

Here is an irony worth sitting with. The same year brands started opening “spam accounts” to seem more human and relatable, actual spam became the single greatest threat to their social media revenue. Netflix launched @Netflix2, its finsta-style account, to create a more intimate, two-way channel with younger audiences. Dunkin’ and Bloom also started brand spam accounts in 2025. The logic was sound: audiences crave authenticity, so give them something raw and casual. Show them the brand without the polish.

Meanwhile, under the polished posts on their main accounts, something far less charming was happening. Comment sections were filling with phishing links, fake giveaways, and bot-driven scam promotions. Samsung’s X account was hijacked to push a fraudulent cryptocurrency. Disney’s Instagram was compromised in a similar scheme where hackers promoted a fake cryptocurrency called “Disney Solana,” and reports suggest hundreds of fans were tricked into buying it. Elmo’s X account was hijacked and began posting hateful content that had nothing to do with Sesame Street.

During my time working with tech companies on growth strategy, I learned a hard lesson that keeps resurfacing in new forms: the gap between how a brand sees itself and how its audience actually experiences it is where trust goes to die. And right now, that gap is widening faster than most marketing teams realize.

The authenticity arms race and the rot underneath

There is a deep contradiction running through social media marketing in 2026. Brands have heard the message loud and clear: be real, be human, be unfiltered. The finsta movement among Gen Z, where users create secondary accounts to share unpolished content away from the pressure of their curated feeds, became a blueprint for corporate social strategy. Social media strategist Pretty Little Marketer highlighted that brands like Netflix, Dunkin’, and Bloom started brand spam accounts in 2025, and predicted that more would follow in 2026, provided they can secure the right team members and identify what their niche audiences want from that space.

The intent behind brand spam accounts is genuine. Build intimacy with your most loyal followers. Loosen the corporate voice. Create a space that feels like a group chat instead of a billboard. These are healthy instincts for brands trying to survive in a landscape where audiences increasingly expect direct, personal engagement from the brands they follow.

But here is the friction no one wants to talk about. While marketing teams pour creative energy into appearing less corporate, their existing channels are being colonized by forces that make the brand experience actively hostile. According to the 2025 Social Media Comment Insights Report from Respondology, which analyzed 118.4 million comments across more than 450 brands, nearly 30% of Meta Ads comments were hidden due to spam or toxicity. On TikTok Ads, that figure reached 44%. And 57.5% of all hidden comments in 2024 were classified as spam.

The identity friction here is sharp. Brands want to be seen as approachable and transparent. Yet their owned social spaces are becoming environments where followers encounter counterfeit product links, phishing schemes, and engagement manipulation before they ever see a genuine customer conversation. The brand’s self-image as “authentic and human” is clashing with the lived experience of scrolling past three crypto scam comments to find a real reply.

What I’ve found analyzing consumer behavior data is that this kind of disconnect compounds quickly. People don’t differentiate between “the brand was hacked” and “the brand doesn’t care.” The emotional register is the same: this space feels unsafe, and I’m leaving.

Why “block and report” is a dangerous oversimplification

Ask most marketers how they handle spam, and you’ll hear some version of the same playbook: monitor comments, block bad actors, report to the platform, repeat. It sounds responsible. It also misses the scale and sophistication of what’s actually happening.

AI-generated spam is getting harder to spot, and it’s scaling fast. According to KnowBe4’s Phishing Threat Trends Report, 82.6% of all phishing emails analyzed exhibited some use of AI, with polymorphic phishing tactics now present in 76.4% of all phishing campaigns. These are no longer the poorly worded “congratulations, you won” messages that any human could spot. Modern social spam mirrors the visual language and tone of legitimate brand communication so closely that many people interact with it before realizing what it is. The era of easily identifiable junk is over.

The deeper problem with the “block and report” mentality is that it frames spam as a content moderation issue when it’s actually a business infrastructure problem. When your paid campaign drives traffic to a comment section full of phishing links, you’re effectively paying to expose your audience to fraud. Conventional wisdom also tells brands that spam is a platform problem, meaning it’s Meta’s job, or TikTok’s job, or X’s job to fix. This framing lets marketing teams off the hook and positions spam as an external force they can’t control.

The reality is different. Brands that have implemented proactive comment moderation on their ad campaigns have seen measurable improvements in return on ad spend and cost per click. These are performance numbers, tied directly to revenue, that shift when brands take ownership of their comment environments. The 2025 Spam Statistics Report from Orbit Media, which surveyed over 1,000 consumers, found that X (Twitter) has overtaken Facebook as the spammiest social network, reinforcing how quickly the landscape shifts and why a passive approach to spam leaves brands exposed across every platform.

The trend cycle also creates noise here. Every quarter brings a new distraction: a platform update, an algorithm shift, a viral content format. Marketing teams chase these signals because they feel urgent. Meanwhile, the slow rot of spam in their owned channels continues unaddressed, and it compounds. A comment section that feels unsafe today trains followers to skip comments tomorrow, which means fewer organic conversations, weaker social proof, and a feedback loop that degrades the brand’s most public-facing real estate.

The cost hiding in plain sight

Spam is a revenue problem wearing the disguise of a nuisance. Every unmoderated comment section is an open invitation for scammers to stand between your brand and your audience, eroding trust one interaction at a time. The brands that win in 2026 will be the ones that stop treating their comment sections as afterthoughts and start defending them as the conversion assets they are.

Building the infrastructure that matches the aspiration

The path forward requires aligning a brand’s internal operations with its external identity. If you want to be the approachable, authentic brand that launches a finsta and builds real relationships with your community, then the infrastructure protecting that community needs to match the ambition. Here’s what that looks like in practice.

First, treat comment moderation as a performance marketing function, not a community management afterthought. The data is clear: moderated channels produce measurably better ad performance. Assign the same rigor to comment quality that you apply to creative testing or audience segmentation. Build moderation into your campaign launch checklist with the same priority as your media buy.

Second, invest in always-on monitoring that covers off-hours. Comment volume and toxicity spike during evenings and weekends, precisely when most marketing teams log off. If your moderation strategy operates on a 9-to-5 schedule, you’re leaving your brand unprotected during its most vulnerable windows.
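
To make that concrete, here is a minimal sketch of what an always-on moderation loop might look like. Every function name here is hypothetical: fetch_new_comments and hide_comment stand in for whatever platform API or third-party moderation tool a brand actually uses. The point is structural: the loop keeps running after the team logs off, and it polls harder during the evening and weekend windows where spam spikes.

```python
import time
from datetime import datetime

# Hypothetical hooks -- swap in your platform API or moderation tool.
def fetch_new_comments() -> list[dict]:
    """Pull comments posted since the last poll (stubbed for illustration)."""
    return []

def hide_comment(comment_id: str) -> None:
    """Hide a flagged comment (stubbed for illustration)."""
    print(f"hid comment {comment_id}")

def looks_like_spam(text: str) -> bool:
    """Placeholder check; see the triage sketch further below for richer rules."""
    return "check my profile" in text.lower()

def moderation_pass() -> None:
    for comment in fetch_new_comments():
        if looks_like_spam(comment["text"]):
            hide_comment(comment["id"])

if __name__ == "__main__":
    # The loop runs around the clock, so off-hours get the same coverage
    # as business hours instead of waiting for someone to log on.
    while True:
        moderation_pass()
        hour = datetime.now().hour
        # Poll more aggressively in the evening window where volume spikes.
        interval = 60 if (hour >= 18 or hour < 6) else 300
        time.sleep(interval)
```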

Third, stop outsourcing your trust to the platforms. Yes, Meta and TikTok have moderation tools. They are blunt instruments. Platform-native filters rely on broad categories, often missing the nuance of brand-specific threats. A comment asking “Does this come in XL?” is a sales opportunity. A comment saying “Check my profile for free products” is spam. The difference requires context that only brand-side tools can provide.
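
For illustration only, here is a toy version of that brand-side distinction: a rule-based triage that routes the sizing question to sales and the profile-bait comment to moderation. The keyword patterns are invented for this example; a real tool would tune them per brand and combine them with account-level signals like posting history and link reputation.

```python
import re

# Invented, brand-specific patterns -- a real deployment would tune these
# per brand and pair them with account-level signals.
SPAM_PATTERNS = [
    r"check (out )?my (profile|page|bio)",
    r"free (products?|gifts?|giveaway)",
    r"(dm|message) me to (claim|win)",
    r"\bcrypto\b.*\b(airdrop|presale)\b",
]

SALES_PATTERNS = [
    r"\b(do(es)? (this|it) come in|available in)\b",  # "Does this come in XL?"
    r"\b(restock|back in stock|ship to)\b",
    r"\bwhat size\b",
]

def triage(comment: str) -> str:
    """Route a comment: 'spam' -> hide, 'sales' -> reply, 'review' -> human."""
    text = comment.lower()
    if any(re.search(p, text) for p in SPAM_PATTERNS):
        return "spam"
    if any(re.search(p, text) for p in SALES_PATTERNS):
        return "sales"
    return "review"

print(triage("Does this come in XL?"))               # sales
print(triage("Check my profile for free products"))  # spam
print(triage("Love this campaign!"))                 # review
```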

Fourth, audit your entire social ecosystem with the same seriousness you’d bring to a security review. Most brands have at least a handful of unmonitored accounts that create exposure. Every dormant account, every legacy page, every unmanaged comment section is an entry point for brand impersonation or scam propagation.
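
That audit can start as something genuinely simple: an inventory of every handle the brand controls, flagged by how long each has sat idle. The account list and the 90-day threshold in this sketch are made up for illustration; the exercise of building the inventory is the point.

```python
from datetime import date

# Invented inventory -- in practice, pull this from wherever your team
# tracks owned handles (and expect to find accounts nobody tracks).
ACCOUNTS = [
    {"handle": "@brand_main",    "platform": "Instagram", "last_post": date(2026, 1, 20)},
    {"handle": "@brand_support", "platform": "X",         "last_post": date(2025, 3, 2)},
    {"handle": "@brand_eu",      "platform": "Facebook",  "last_post": date(2023, 11, 14)},
]

DORMANCY_THRESHOLD_DAYS = 90  # arbitrary cutoff for the example

def audit(accounts: list[dict], today: date) -> list[dict]:
    """Flag accounts idle past the threshold -- each one is an impersonation risk."""
    flagged = []
    for acct in accounts:
        idle_days = (today - acct["last_post"]).days
        if idle_days > DORMANCY_THRESHOLD_DAYS:
            flagged.append({**acct, "idle_days": idle_days})
    return flagged

for acct in audit(ACCOUNTS, today=date(2026, 2, 1)):
    print(f"{acct['handle']} ({acct['platform']}): idle {acct['idle_days']} days")
```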

Finally, recognize that the authenticity your audience craves includes safety. The finsta movement among consumers arose because people wanted spaces that felt protected and intimate. When brands adopt that aesthetic without building the protective layer, they’re borrowing the form of trust without delivering the substance. Authenticity without security is a costume, and audiences figure out the difference faster than most brands expect.

The spam wave hitting brands right now is accelerating because AI makes it cheaper, faster, and harder to detect. Waiting for platforms to solve this is a bet against your own revenue. The brands that are thriving, the ones whose audiences feel comfortable engaging, buying, and advocating in their social spaces, are the ones that understood this simple equation early: every dollar spent on protecting your community is a dollar invested in keeping the trust that makes everything else possible.


Wesley Mercer

Writing from California, Wesley Mercer sits at the intersection of behavioural psychology and data-driven marketing. He holds an MBA (Marketing & Analytics) from UC Berkeley Haas and a graduate certificate in Consumer Psychology from UCLA Extension. A former growth strategist for a Fortune 500 tech brand, Wesley has presented case studies at the invite-only retreats of the Silicon Valley Growth Collective and his thought-leadership memos are archived in the American Marketing Association members-only resource library. At DMNews he fuses evidence-based psychology with real-world marketing experience, offering professionals clear, actionable Direct Messages for thriving in a volatile digital economy. Share tips for new stories with Wesley at wesley@dmnews.com.
