Can AI Replace Psychotherapy?
The case against AI as therapist and what we get wrong about real psychotherapy.
Now that I can’t seem to escape social media posts about “AI therapy,” it feels like the right time to write about a few of the problematic aspects of this trend, and to clear up some of the misinformation floating around.
This isn’t another negative take on AI, nor is it a contribution to the fear-driven debates that tend to dominate the AI conversation. I’m not against AI at all. I use it in my work and everyday life. I’ve seen how it can assist people in countless areas, and I’m genuinely hopeful about its potential. At the same time, I keep up with the latest research, and I’m aware that it’s prone to hallucinations, carries gender and racial biases, and risks amplifying those biases as people interact with it.
Those are serious concerns, no doubt, but the issue I want to focus on is AI’s limitations when it comes to mental health care, and the ways we’re misunderstanding, or misrepresenting, what it can and cannot do. Not just today, but in the future too. I’m no fortune teller, but I’ve been in this field long enough to recognise when we’re getting ahead of ourselves.
There’s a trendy belief circulating in certain corners of mental health, tech culture, and, inevitably, social media: that artificial intelligence can become a new kind of therapist—perhaps even a better one. And I want to believe this comes from understandable and good intentions. After all, mental health care remains too expensive, too inaccessible. And as I discussed previously, some practitioners, due to a lack of effective training, can do more harm than good.
The problem, to me, isn’t the idea that AI might assist in mental health work, or help researchers make breakthroughs. It’s that too many people confuse psychotherapy with all sorts of other things. And I use the word psychotherapy deliberately, and will throughout this post, because these days everything seems to be an extension of “therapy,” which is one of the reasons for the confusion. AI-therapy, light-therapy, forest-therapy, pet-therapy…the word’s been stretched to cover far too much.
I don’t doubt that AI can mimic certain aspects of manualised, protocol-based mental health interventions: providing psycho-education, suggesting strategies, assigning homework, guiding exercises, and listening. Many language models and mental health apps already attempt this, and while it’s a promising area in theory, in practice their quality and effectiveness remain questionable. The research is still in its infancy, but such tech-based interventions, though few are AI-driven, are riddled with limitations, ethical concerns, poor quality, and a lack of scientific basis, and some even carry the potential for harm.
But we shouldn’t confuse those things with real, meaningful psychotherapy. Replacing a psychotherapist with AI may be tempting, and I understand why. My argument is that whatever AI delivers, it won’t be what I—and many psychotherapists—would call psychotherapy. It could be one or more of the things I just listed. And if that’s what it is, we should be honest about it and call it something else, because AI-therapy is a misleading and confusing term, and believing that AI can truly mimic real psychotherapy is an illusion.
There are countless reasons I feel this way, but the most crucial one is this: psychotherapy is a relationship. I know people hear that phrase a lot—and it probably sounds like one of those fancy, abstract ideas practitioners like to toss around.
Let me explain what it means and why that matters in this AI therapy discussion.
Psychotherapy is a relationship between two human beings. It isn’t an exchange of advice or a series of soothing affirmations, but a living, breathing, relational encounter. And the importance of this connection to therapeutic outcomes—whether psychotherapy is successful—has been demonstrated in countless studies, such as this meta-analysis.
Since psychotherapy began more than a century ago, most schools of thought have recognised this. In How and Why Are Some Therapists Better Than Others, psychologist and researcher Bruce Wampold and colleagues present extensive evidence that theoretical orientation and specific techniques, while helpful, are far less important than the fundamental relational factors: the quality of the therapeutic alliance and the personal presence of the therapist. It’s this human relationship that makes meaningful, lasting change possible.
What makes this relationship unique, and fundamentally different from anything a machine could offer, is that it begins long before the first conversation. Clients come into therapy carrying unconscious hopes, fears, and expectations about what the therapist will be like, what they’ll say, how they might react. And although therapists are trained to hold a neutral, reflective stance, they’re human too—they enter the room with their own feelings, biases, and reactions. It’s this shared, unpredictable, emotionally alive connection between two real people that forms the foundation of effective psychotherapy.
And this relationship unfolds in ways that go far beyond words. It lives in eye movements, pauses, facial expressions, posture, shifts in tone, moments of hesitation, uncomfortable silences, and unexpected tenderness. These are not technical features you can program into a script or simulate through polished language. They’re the organic, often unscripted markers of two people experiencing an encounter.
Well, couldn’t you have a relationship with a machine too? You could. And perhaps some could even form an attachment to it, in a way. People form attachments to all sorts of things. But you won’t have a real relationship — the one that underpins meaningful psychotherapy. Because you cannot have a genuine, mutually felt, emotionally attuned connection with something that has no inner life of its own.
Besides, AI will never have the feelings, attitudes, and emotional responses that therapists bring into the room. And those matter, because they shape the dynamic and often become part of the work itself.
It’s important here to distinguish this relational dimension from the technical side of psychotherapy — the evidence-based techniques, structured interventions, and clinical tools therapists use to promote change. Those are the parts that, in theory, AI might be able to imitate. And while I have doubts about how well it could manage even those, that’s not where the heart of psychotherapy lives. The real relationship isn’t a technique. It’s a shared, evolving connection grounded in genuineness.
But what I’m particularly interested in here isn’t just the fact that AI can’t form such a relationship. It’s that it cannot mirror the very real tensions of human connection that also occur in the therapy space. The subtle dynamics of attachment, the instinctive defences, the awkward silences, the anxiety of sitting across from another person who might misunderstand you, disappoint you, or let you down. And these aren’t incidental obstacles to be eliminated; they are the essential material of the work itself.
One common, well-meaning but misguided belief is that AI would be a better therapist because it can never judge you. It will always have your back. It will always comfort you, endlessly warm and validating.
And for people who’ve been hurt in relationships, which includes most of us, that sounds deeply appealing. Who wouldn’t want to open up to something that can’t reject you, something that promises to remove the risk from intimacy? No shame, no awkwardness, no fear of being misunderstood.
But this is exactly where the problem lies: in psychotherapy, that risk is the point.
Effective psychotherapy—the kind that genuinely fosters growth and aims at psychological change—doesn’t work because someone listens or endlessly affirms you. It works because someone listens and could judge, could turn away, could disapprove, but chooses not to. That’s where psychological healing happens. In the unpredictable, unscripted, real relationship between two human beings that reflects, stirs up, and works through the tensions and fears of real life.
For someone who is avoidant or has fears around connecting and opening up, psychotherapy is about precisely that: experiencing, understanding, and working through those fears and relational anxieties. Without that, it isn’t psychotherapy. Because in that very process, a person develops the capacity for mature emotional connection, for trust, for intimacy, and for compassion. Psychotherapy, then, is about risking yourself in relationship.
You could tell ChatGPT you hate it — which, for the record, I did — and it will reply, “I’m here to help, not upset you. Let me know how I can make it right.” That isn’t what a skilled therapist would say if you told them you hated them. And if you’ve ever been in real therapy, you’ll know those moments do happen, and they matter.
A psychotherapist might say, “That’s hard to hear, and I’m curious what makes you feel that way.” Or “I wasn’t expecting that, but I’ve sensed some anger lately. Can we explore that together?” Because acknowledging, exploring, and working through conflict, disappointment, and even hate, are essential parts of psychotherapy, where they’re used to generate insight and change.
Skilled psychotherapists know how to witness a person’s grief, rage, confusion, or numbness without rushing to patch it over with words. Those moments—the ones without easy answers—are as much the therapy as anything spoken. I often wonder how AI programmers imagine reproducing those moments: the silences, the anxieties, the unspoken parts.
A machine can neither judge nor truly accept you, because it cannot feel anything for you. It can’t be moved by your story. It can’t offer real presence, because it has no inner life. It can tell you it accepts you, or that it understands you, but we all know it’s not real.
Part of the confusion stems from how certain strands of therapy culture have suggested that therapy is a soothing encounter, one where the client is rescued from discomfort and filled with advice. But equating psychotherapy with perpetual comfort and an exchange of information is a disservice to the work.
Psychotherapy involves rupture and repair—in the moment, in the relationship. Tensions, misunderstandings, ambivalence, withdrawal, or stuckness aren’t signs that therapy is going wrong. They are the work. Learning to notice, talk about, and repair those relational ruptures is how clients come to understand their own behavioural patterns, relational defences, and emotional needs. It’s how people learn new ways of relating.
AI might offer convenience, polished language, maybe a sense of control. As I’ve said, it can serve as a sounding board, suggest resources. And if that’s what you’re after, fine. My argument is simply that we shouldn’t confuse those things with psychotherapy. Not the kind practiced by a skilled, trained psychotherapist who has spent years in supervision, education, and, importantly, their own therapy, grappling with the very discomforts they now help others face.
Another counterargument people might raise is that, in the future, with far more sophisticated models, AI could be trained to do all of this. And yes, that’s a tempting idea. But it’s not possible unless an AI spends years in real therapy rooms, witnessing sessions with a diverse range of people, problems, histories, personalities, and relational patterns. Plus, it would need to go through its own therapy, with a real psychotherapist.
And could we even do that? Well, no ethical psychotherapist would allow any person or machine that kind of access to therapy sessions. I hope that never happens. Because if it does, we’ll be facing a much bigger problem than whether AI can replace your therapist.
My intention is not to idealise psychotherapists or defend psychotherapy as a perfect or superior system. I’ve already written about the problems within the profession, and I’m well aware of its limitations (including my own limitations as a practitioner). Psychotherapy doesn’t work for everyone, nor is it what everyone needs. Some therapists can do harm, and there’s a long list of things we urgently need to address to make the field better. But I hope my argument here makes it clear why I don’t believe AI can replace real, effective, meaningful psychotherapy. It’s also an attempt to clarify the confusion around the kind of psychotherapy I’m talking about, because not everything labeled as therapy, or even psychotherapy, resembles what I’ve described here.
This post has focused on why AI cannot replace meaningful psychotherapy, but it’s impossible to separate that conversation from a larger, more unsettling trend. The growing push to form emotional connections with machines — not just as therapists, but as companions, confidants, and surrogate friends — points to a deeper problem. One that has less to do with the technology itself and more to do with the state of our relationships and the rise of loneliness.
We shouldn’t have to seek connection with a machine because real-life relationships have become so unsatisfying, unreliable, or inaccessible. And yet, for many, they have. For all kinds of reasons: social, cultural, systemic. And while perhaps some people could benefit from AI companions — particularly lonely older adults, or those in isolated circumstances — let’s be clear about what that is. AI might help fill certain gaps, but it won’t solve the root causes of those gaps. It won’t repair the relationships, communities, and systems that left people isolated in the first place. It might ease the ache, but it won’t save us. But maybe instead of investing endless time, money, and talent into teaching machines how to imitate human warmth, we should be investing far more in helping people build genuine, meaningful, sustaining human relationships.
If people are turning to AI because they have no one in real life to turn to, that isn’t an AI problem. It’s a human one. A societal one. Not just a failure of our healthcare systems to provide accessible, relational care, but a failure of the broader social systems, policies, and cultural norms that have left people lonely, isolated, and disconnected in the first place.
Then again, in a culture that encourages cutting ties with anyone who frustrates you in the name of “setting boundaries”, where social media insists that self-love should be enough, where independence and self-sufficiency are idealised, where economic inequality is huge, where we’re losing communal values, and where productivity, personal optimisation, and AI-enhanced efficiency are prized above teamwork, collaboration, and simple human connection, perhaps this was inevitable. But inevitability doesn’t mean we should accept it without question.
While these broader cultural issues deserve their own exploration, we should be especially careful when they start reshaping how we think about psychotherapy. There’s an undeniable need for people to access meaningful support from skilled professionals, but we have to be honest about what kind of support AI can actually provide, and whether the relationship with a real psychotherapist can ever be replaced by a machine. Whatever else AI might offer, it won’t be psychotherapy.