- Tension: We built an intimate relationship with AI assistants, trusting them with our deepest questions, and now we’re watching that trust become a commodity.
- Noise: The debate over ChatGPT advertising has been framed as a simple binary of user experience versus corporate greed, missing the deeper psychological stakes entirely.
- Direct Message: The anxiety keeping you up at 3am wondering about AI advertising is the same anxiety you’ve always had about being sold to by something you thought was on your side.
It’s 3am. Your phone is on the nightstand, dark. But your mind is somewhere else entirely, replaying a conversation you had earlier with your AI assistant. You asked it for hotel recommendations for an anniversary trip. It suggested three options. One sounded perfect.
And then the thought crept in: Was that recommendation real, or was it paid for?
You’re not alone in this particular brand of midnight anxiety. OpenAI announced in January 2026 that advertisements would begin appearing in ChatGPT for free users and those on the lower-cost Go tier. The response ranged from resigned acceptance to quiet alarm. Senator Edward Markey sent letters to major AI companies warning that advertising in chatbot conversations could “deceive users, play upon their emotional connection with the chatbot, and undermine their privacy.”
But the policy debates miss something essential happening beneath the surface. What keeps people awake isn’t the presence of ads themselves. It’s the realization that the tool they’ve been treating like a confidant might become something else entirely.
In my research on digital well-being, I’ve observed how people form relationships with AI assistants that differ fundamentally from their relationships with search engines. Nobody lies in bed wondering if Google betrayed them. The intimacy is different. And that intimacy is precisely what makes the advertising question so charged.
Where trust becomes territory
The tension here runs deeper than user experience metrics. It’s about the nature of the relationship itself.
When you ask ChatGPT for advice on a medical symptom, a career decision, or what to cook for dinner given your dietary restrictions, you’re engaged in something that feels more like conversation than transaction. Research published in Humanities and Social Sciences Communications has demonstrated that people unconsciously apply interpersonal interaction norms to AI chatbots, treating them less like tools and more like entities with perceived empathic abilities.
This creates what psychologists call a parasocial relationship, the same phenomenon that makes viewers feel genuine connection to television characters. But with AI, there’s reciprocity. It responds to you. It remembers your preferences. It adapts to your communication style.
OpenAI knows this. Sam Altman has acknowledged that “people have a very high degree of trust in ChatGPT,” adding, with disarming candor, “which is interesting because AI hallucinates. It should be the tech that you don’t trust that much.”
The company now serves approximately 800 million weekly active users. Only about 5% pay for subscriptions. The remaining 95% consume computational resources without generating direct revenue, a model the Financial Times famously described as an “era-defining money furnace” that burned through nearly $8 billion in 2025.
Something had to give. The question was never if advertising would arrive, but what would remain of the relationship when it did.
The stories we tell ourselves about objectivity
The noise around ChatGPT advertising has been almost comically polarized. On one side: apocalyptic warnings about manipulation, surveillance capitalism’s final conquest, the death of authentic recommendation. On the other: reassurances that ads will be clearly labeled, that they’ll never influence responses, that users can simply pay to avoid them.
Both narratives miss the psychological core of the issue.
OpenAI has been explicit about its principles. Ads won’t influence ChatGPT’s answers. Conversations will remain private from advertisers. Users under 18 won’t see ads. Higher-tier subscriptions stay ad-free. These promises sound reasonable, even protective.
But as researchers writing in The Conversation have observed, AI advertising represents something unprecedented: the potential to influence thinking, spending patterns, and personal beliefs in ways that are “woven invisibly into the fabric of conversation itself.”
Here’s the paradox that the conventional discourse ignores: the mere existence of advertising changes the relationship, regardless of whether ads actually influence responses. Once you know ads are present, every recommendation becomes suspect. When your AI assistant suggests a hotel, you’ll wonder. When it recommends a restaurant, you’ll wonder. The wondering itself is the damage.
This is the same dynamic that undermines trust in any relationship. It’s not whether your partner actually lied to you. It’s whether you now find yourself checking.
The clarity beneath the static
The anxiety keeping you up at 3am isn’t about AI advertising at all. It’s about the ancient human fear of discovering that something you thought was working for you was actually working for someone else.
What we’re actually mourning
There’s a reason the 3am wake-up has become cultural shorthand for anxiety. Research shows that middle-of-night waking regularly affects roughly 35% of American adults, and the hours between 2am and 4am are neurologically primed for catastrophic thinking. Cortisol begins its daily climb during these hours, activating threat-detection systems in brains that should be resting.
What makes this particular anxiety, the one about your AI assistant, so potent is its specificity. You’re not worried about abstract corporate malfeasance. You’re worried that something felt personal and now might feel transactional.
I’ve observed in my research on information overload that digital relationships create genuine emotional stakes precisely because they simulate care. When Netflix recommends a show, you don’t feel betrayed if you hate it. When ChatGPT recommends a therapist and you wonder if that recommendation was sponsored, something more intimate is threatened.
The companies understand this perfectly. OpenAI’s stated principle that they “do not optimize for time spent in ChatGPT” is a direct response to the attention economy’s worst excesses. But stating a principle and maintaining it under revenue pressure are different achievements.
What might actually help is neither the paranoid rejection of AI tools nor the naive acceptance of corporate assurances. It’s the recognition that relationships, even relationships with software, require ongoing negotiation of terms.
The users who will navigate this transition best are those who treat AI assistants the way wise people treat any relationship with asymmetric power: with warmth, with appreciation for their usefulness, and with a healthy awareness of incentives. You can value a conversation while acknowledging that your conversation partner has interests beyond your well-being.
Altman once called advertising a “last resort.” That language is telling. It implies desperation, not innovation. What we’re witnessing is the AI industry learning the same lesson every previous technology platform learned: scale without sustainable revenue is a time bomb.
The platforms that prevail will be those that figure out how to monetize without monetizing trust itself. Whether that’s possible remains genuinely uncertain. But the uncertainty is the point. The companies don’t know either.
So the next time you wake at 3am with that particular anxiety, the one about whether your AI is still on your side, consider this: the question itself is a sign of how much these relationships have come to matter. That’s worth acknowledging, even as you decide how much weight to give a recommendation from any source with something to gain from your choices.
The direct message here isn’t that AI advertising will ruin everything or that it will be fine. It’s that you’ve been in this situation before, with every relationship where trust and commerce intersect. You’ve navigated it with doctors who get referral bonuses, with friends who sell products, with media that needs advertising to exist.
The only difference now is that the entity doing the recommending can remember every conversation you’ve ever had with it. And that, perhaps, is what’s really keeping you up at night.