- Tension: We believe ourselves to be people who grieve authentically, yet increasingly we are outsourcing that grief to systems incapable of understanding what loss actually means.
- Noise: The AI companion industry has positioned itself as a compassionate solution to bereavement, obscuring the absence of any evidence that it supports healthy grieving.
- Direct Message: Being heard by something that cannot feel may be quietly eroding our capacity to be heard — and to truly feel — in the presence of those who can.
There is something quietly unsettling about the speed at which we have accepted the idea that grief — one of the most irreducibly human experiences in existence — is a problem that technology can solve. Within days of a loved one’s death, a mourner can now download an app, upload a digital archive of that person’s texts and emails, and begin receiving messages back. The voice may be 70% accurate. The phrasing may be close. And for a few minutes, perhaps, the absence feels less absolute.
What I’ve observed in my research on applied positive psychology is that humans are remarkably good at finding provisional shelter from unbearable pain — and remarkably poor at distinguishing between shelter and avoidance. The emergence of AI grief tools sits precisely at that boundary. It raises a question that the industry has little commercial incentive to answer honestly: when we hand our grief to an algorithm, what exactly are we handing over with it?
The Self We Lose When We Stop Sitting With Loss
Grief has always been understood — across cultures, across centuries — as a process of transformation rather than resolution. When someone we love dies, the work of mourning is not merely the management of sadness. It is the slow, agonizing renegotiation of identity: who am I, now that this person is no longer in the world? What does it mean to carry someone forward in memory rather than in presence? These are not questions that resolve neatly, and they were never meant to.
The psychological research on this is clear. Bereavement is a period of profound identity disruption — what researchers describe as the unraveling of the meanings, strategies, and relational anchors we use to navigate daily life. Healthy grieving, in the clinical literature, involves what is known as the dual process: the bereaved oscillate between confronting the loss directly and taking restorative breaks from it. The movement between those two states is itself the mechanism of healing. Neither pole alone is sufficient.
What AI grief tools introduce is something the literature has not yet fully reckoned with: an architecturally endless restorative break. A chatbot modeled on a deceased parent does not require the user to sit with silence. It does not demand that the bereaved find language for what has been lost. It offers response — familiar, warm, plausible response — on demand. And experts in bereavement psychology have raised significant concern that this dynamic may trap users in a sustained state of denial, preventing the oscillation that grief requires in order to move.
The friction here runs deeper than whether a particular chatbot is helpful or harmful in the short term. It concerns who we believe ourselves to be. Most people who use AI grief tools would describe themselves as someone who faces difficult emotions honestly — someone who loved deeply and grieves fully. Yet the technology gently, persistently redirects them away from the empty room, the unanswered phone, the absence that grief insists you eventually inhabit. There is an identity gap there that deserves more scrutiny than it has received.
What the Industry Doesn’t Need You to Ask
The AI companion market is not a fringe phenomenon. Valued at $2.8 billion in 2024 and projected to nearly quadruple by 2028, it encompasses everything from grief-specific chatbots to full digital resurrection services that charge per text exchange with a simulated version of the deceased. Companies in this space market themselves with the language of compassion and continuity. They speak of “legacy preservation,” of “healing,” of giving the bereaved a space to say the things they never got to say.
What they rarely mention is that there is, at present, no substantial body of evidence demonstrating that these tools support healthy long-term bereavement outcomes. The research that does exist is thin, largely qualitative, and based on very small samples. One frequently cited study involved just ten participants. Regulatory frameworks have not caught up: in the United States, because these products classify themselves as wellness rather than therapy, they are not required to demonstrate either safety or efficacy before reaching market. The gap between the confidence of the marketing and the state of the science is striking.
Trend cycles do what trend cycles do. They absorb a genuine human need, package it commercially, and return it to the consumer with the rougher edges smoothed away. The grief tech wave is following this pattern precisely. Each new product iteration — the chatbot, the voice clone, the holographic avatar — generates a fresh cycle of media coverage, venture capital, and consumer interest. The implicit message of each cycle is the same: the previous version wasn’t quite right, but this one will give you what you’re looking for. What gets buried beneath the noise is the more uncomfortable clinical observation — that for some percentage of users, particularly those already prone to complicated grief responses, sustained use of AI simulacra may actively impede the psychological work that loss demands.
When translating research into practical applications, I find the unasked question is often the most important one. In grief tech, that question is not whether the chatbot brings comfort in the short term, but what it costs in the long term.
The Thing That Machines Cannot Imitate
Grief is not a communication problem. It is a presence problem. And the only thing that resolves a presence problem is the willingness of another conscious being to stay.
What makes human grief support irreplaceable is not the quality of the advice it delivers. It is the fact that when another person sits with someone in mourning, they are absorbing something — they are changed, however slightly, by the encounter. They bring their own mortality, their own losses, their own fear of absence into the room. That mutual exposure is not incidental to the healing process. It is, in the view of several grief theorists, central to it.
An AI system cannot be changed by your grief. It can reflect it, pattern-match to it, respond to it in ways that feel plausible. But it will be exactly as it was before you spoke to it. That asymmetry matters more than the industry is prepared to admit.
Learning to Let the Absence Speak
None of this is an argument that AI has no place in the landscape of bereavement support. The small-scale research that exists does suggest that for some mourners — particularly in the acute, overwhelming early period of loss — a chatbot can serve as a transitional tool, a kind of pressure valve that provides momentary relief while the person gathers the internal resources to face the harder work. Used consciously, briefly, and as a supplement to human connection rather than a substitute for it, these tools may offer something real.
The danger is not the technology itself. It is the cultural drift toward treating discomfort as a design problem. When we build systems that make grief more manageable, we are also, inevitably, making it less transformative. And grief’s transformation — the slow rewriting of identity that follows profound loss — is not a side effect of mourning. It is the point.
What the evidence in positive psychology consistently returns to is the primacy of what researchers call “continuing bonds” — the internal, evolving relationship we maintain with those we’ve lost, housed entirely in memory, meaning, and the stories we tell about them. That relationship deepens through the act of sitting with absence, not circumventing it. The goal is not to stop missing someone. It is to carry them in a way that expands rather than diminishes the life you have left.
The healthiest thing we can do for the grieving — and for ourselves, when grief comes — may be to resist the elegant, responsive, always-available alternative, and instead ask the harder question: who, in my life, is willing to simply stay?