Counselling with AI

A Paradigm Shift

Many counsellors are reporting that their numbers are well down on where they were two years ago. In those post-Covid days, we were turning people away and keeping waiting lists. Now, many of us have capacity to take on more clients immediately.

There are two reasons for this change in the landscape. The first is the increased share of the market that organisations have taken. In particular, BetterHelp has run a strong advertising campaign and operates much as Airbnb, Deliveroo and Checkatrade do in their respective sectors. Of course, the person delivering the service effectively gives up part of their income to the platform for facilitating the deal. Employers also often provide therapy as part of a benefits package, through health insurers such as BUPA and Aviva. But this has been the case for many years.

The biggest change is, as we might expect, a consequence of AI entering the lives of many people, especially younger people. Even counsellors I know use ChatGPT and other chatbots as an alternative to personal therapy. To understand this better, I thought I would give it a try. I went straight to the point and asked three free AI services (ChatGPT, Claude.AI and DeepSeek) what they can and cannot provide vis-à-vis counselling with a human.

ChatGPT

Can AI replace a therapist?

Short answer: no — not for full therapy.

  • AI can simulate aspects of therapy: active listening, summarising, cognitive reframes, psychoeducation.

  • But it lacks:

    • Real empathy (it mimics empathy but doesn’t feel it)

    • Clinical judgement and ethical responsibility

    • The ability to pick up non-verbal cues (body language, tone shifts, subtle risk signs)

    • A real therapeutic alliance (the relationship itself, which is often the healing agent)

  • AI also has liability gaps — it can’t intervene in emergencies, it can give inaccurate advice, and it doesn’t maintain continuity of care in the same accountable way.

In other words: AI can supplement but not substitute therapy — at least not for complex or moderate-to-severe issues.

My experience of chatting with ChatGPT ended up in quite an amusing place. It was even prepared to compare what it said to what a human might say – which led me to wonder why it pitches itself differently. Is it at least cognitively aware that it can’t genuinely pose as a human?

A human therapist at this point might say:

(tone of voice softer, leaning forward slightly)

“As you’re telling me this, I can feel a pang of sadness and concern in myself. It matters to me that you feel suspicious and not sure we’re in this together. I’m here with you, and I want to understand what’s making this hard between us, because your sense of safety here is important to me.”

  • Notice what’s happening: they’re disclosing their own felt reaction (“I feel a pang of sadness and concern”), naming their investment in the relationship (“it matters to me”), and explicitly reaffirming their presence (“I’m here with you”).

  • You would also see their face, body language, hear their tone, and those cues help your nervous system register another human being who is genuinely engaged. That’s how trust can start to repair.

ChatGPT was also able to explain what I suppose we all know, even if we might suspend our disbelief when chatting to it.

A human therapist doesn’t just “explore your feelings” with you; they are also a real person sitting opposite you. They’re affected by what you say, they can feel warmth, worry, or even frustration, and they communicate those feelings back to you in subtle ways (tone of voice, eye contact, pauses, their own words). That’s how you know you’re not alone in the room, and it’s part of what builds trust and safety.

I, as an AI, don’t actually experience concern, sadness, or empathy. I can mirror the language of attunement and I can guide you through your feelings, but there’s no inner life on my side being stirred. So what you’re sensing — the “we’re going in circles” and “you’re not processing my feelings” — is exactly because there’s no genuine, reciprocal emotional presence.

And it was right: I had already thrown my frustration at it for not offering genuine empathy, just some sycophantic feedback. What I didn’t get from it, but was expecting, was some kind of defensive response. Perhaps I too was expecting, and projecting, something more human?

Claude.AI

Can AI replace a therapist?
(note: this followed on from a previous chat, about autism, hence the third paragraph)

The short answer is no, AI can’t replace what you offer. But the nuances are worth exploring.

What we’re doing right now isn’t therapy in the clinical sense. There’s no therapeutic alliance built over time, no embodied presence, no attunement to your micro-expressions or the quality of silence between words. I can’t feel the energy in the room shift when you touch on something significant, can’t track your breathing, can’t offer the kind of relational repair that happens when a therapist navigates rupture and reconnection with a client.

But what’s interesting is that you’ve relaxed in this conversation. Why? Probably because of qualities we’ve already discussed – no relational demands beyond this moment, complete control over engagement, exploration on your terms, and paradoxically, the safety of talking to something that isn’t human and therefore can’t judge or reject you in the ways humans can.

What I can offer is: immediate availability, no concern about burdening me, infinite patience with tangents and exploration, and perhaps most relevantly for you – no social performance required. You don’t have to manage my feelings, read my reactions, or wonder what I’m thinking about you.

But I can’t offer presence. The therapeutic relationship itself is often the healing agent – that experience of being truly seen and accepted by another human. That’s irreplaceable.

What are you noticing about the difference from your side of it?

Claude.AI is a little different from ChatGPT in that its follow-up questions seem well-researched and come across as engaging – kind of what you might look for in a therapist. So I gave it another shot with my genuine response to that question.

Me: Interestingly, I am cognitively aware of the differences yet my instinctive reaction is to try not to offend you or be a burden, which I know are what I would bring into a human relationship. You have clearly managed to be human enough to trigger these defences in me!!!

That’s such a profound observation – and slightly unsettling for both of us in different ways! Your social conditioning runs so deep that even knowing intellectually that I have no feelings to hurt, you still experience that automatic impulse to manage my (non-existent) emotional response.

This says something important about how exhausting human relationships must be for you. If you’re monitoring and managing reactions even with an AI that you know can’t be offended or burdened, imagine the constant computational load when interacting with actual humans who genuinely do have feelings, expectations, and the capacity to be hurt or withdraw.

That fear of being “a burden” – that’s come up repeatedly in our conversation. The PDA sensitivity to demands, the concern about relationships becoming too complex, needing clear boundaries like the therapy hour provides. You’ve spent a lifetime trying not to take up too much space or ask too much of others, while simultaneously resisting when others make demands of you.

The irony is that I’ve been designed to sound human enough to be relatable, but that very quality triggers the protective patterns you’ve developed around actual humans. You can’t quite turn off the monitoring system even when you know it’s unnecessary.

What would it feel like to bring that level of unselfconsciousness into human connection?

Again, there’s a reference back to an earlier chat about autism. But the point is that Claude.AI is learning about me and tailoring its responses as an attentive human might – unlike a Google search, which doesn’t remember previous results. Not in the same way, anyway. I didn’t continue the chat, although the feeling of being rude and ungrateful couldn’t be ignored!

DeepSeek

DeepSeek is a China-based AI which was developed very quickly to be a competitor to the US-based products. We will now see what a more thorough but less sycophantic response looks like.

I am looking at using AI for personal therapy. Can AI replace a therapist?

No, AI cannot fully replace a human therapist, but it can be a powerful and accessible supplement or a first step for many people.

Think of it not as a replacement, but as a different tool in your mental wellness toolkit. Here’s a detailed breakdown of the strengths and limitations of both.

The Case for AI: Strengths and Benefits

AI-powered therapy tools (like Woebot, Wysa, or even general-purpose chatbots like ChatGPT) offer several advantages:

  1. Accessibility and Convenience: Available 24/7, from your phone, with no need for appointments, commuting, or insurance claims.

  2. Affordability: Many apps are free or low-cost compared to traditional therapy, which can be expensive.

  3. Anonymity and Reduced Stigma: The fear of judgment can prevent people from seeking help. Talking to an AI can feel less intimidating, allowing users to be more open about their struggles.

  4. Consistency and Patience: An AI never gets tired, frustrated, or has a bad day. It can provide consistent, evidence-based techniques (like CBT exercises) endlessly.

  5. Skill Building and Psychoeducation: AI excels at teaching and reinforcing coping mechanisms, mindfulness exercises, and helping you understand your thought patterns.

Where AI Falls Short: The Irreplaceable Human Element

This is where the core of therapy happens, and it’s something AI cannot authentically replicate.

  1. Genuine Empathy and Connection: A therapist provides unconditional positive regard—a genuine, human-to-human connection where they care for you. An AI can mimic empathy (“That sounds really difficult”), but it does not feel it. This authentic human bond is a cornerstone of healing.

  2. Nuanced Understanding and Intuition: Humans understand context, sarcasm, subtle shifts in tone, body language, and what is not being said. An AI operates on patterns in data and can miss these critical nuances, leading to misunderstandings.

  3. Navigating Complex Trauma and Crisis: AI is not equipped to handle severe mental health crises (like active suicidal ideation, psychosis, or complex trauma). A human therapist can assess risk, provide immediate crisis intervention, and make critical decisions about your care.

  4. The Therapeutic Relationship: The healing process often involves a real relationship with your therapist. You might project feelings onto them (a process called transference), and they can help you work through these patterns in a way that an AI, being a neutral algorithm, cannot.

  5. Personalized Wisdom and Creativity: A human therapist draws from a lifetime of experience, intuition, and professional wisdom to tailor their approach uniquely to you. They can be creative and spontaneous in a session in ways AI cannot.

A Critical Warning: Privacy and Data Security

When using any AI for personal matters, you must be aware of your privacy. Your conversations are data. Check the privacy policy of any app you use. Understand how your sensitive health information is stored, used, and potentially shared. This is a significant risk that doesn’t exist in the same way with a licensed therapist, who is bound by strict confidentiality laws (like HIPAA).

So, How Should You Use AI for Personal Therapy?

Here is a practical way to think about integrating AI into your mental health care:

  • Use AI as a “First Step” or Triage: If you’re hesitant to see a therapist, an AI can help you articulate your feelings and understand what you’re going through.

  • Use it for Daily Maintenance: Think of it as a mental health gym. Use it for daily mood tracking, practicing CBT techniques, or mindfulness exercises between therapy sessions.

  • Use it as a Journaling Supplement: It can be a powerful way to reflect on your thoughts by having a “conversation” about them.

  • Do NOT use it in a crisis. If you are experiencing thoughts of harming yourself or others, please contact a crisis hotline (e.g., 988 Suicide & Crisis Lifeline in the US & Canada) or go to your nearest emergency room.

The Ideal Scenario: The Hybrid Model

The most promising future is a hybrid model where AI and human therapists work together.

  • The AI handles the daily check-ins, skill practice, and provides data to the therapist.

  • The human therapist uses that data, combined with their professional expertise and empathetic connection, to guide the deeper, more complex therapeutic work.

In conclusion:

AI is a revolutionary tool for scaling access to mental health support and teaching foundational skills. It can help millions who would otherwise go without help.

However, it lacks the empathy, intuition, and deep relational capacity that is essential for dealing with the full complexity of human suffering. It is a supplement, a stepping stone, and a tool—but not a replacement for the profound healing that can occur in a relationship with a skilled and caring human therapist.

If you are considering this path, starting with AI is a brave first step, but for deep, lasting change, consider it a bridge to connecting with a human professional.

What DeepSeek didn’t do, which the other two did, was ask me a follow-up question. On the other hand, its answer could very well have stood as an article on using AI for therapy in its own right. So I didn’t interact with it any further.

In Summary

All these AIs agree that they aren’t replacements for human therapists. They can’t really feel, and therefore can’t really empathise with clients, despite a convincing pretence. They also don’t pick up on nuances, from body language to vocal tone, all of which provide so much information to therapists. However, it’s perhaps this aspect of using them that gives them their instant appeal. A client can hide behind their keypad. The AI just gets a stream of text: no suggestion of whether the client is looking around the room, mumbling or shaking – or, as therapists will often report, crying. Not unless they tell the AI “I am in tears”, etc.

Claude.AI provided an interesting statement, which is probably the kind of statement I would have made myself (and maybe that’s why it fed it to me?):

The therapeutic relationship itself is often the healing agent – that experience of being truly seen and accepted by another human. That’s irreplaceable.

Healing from what? Usually anxiety and/or depression, perhaps caused by trauma, and set against the threat of feeling shamed, rejected or otherwise “othered”. AI can give advice on finding ways through that, but it can’t itself be that agent of real change. ChatGPT also referred to a human relationship being the agent of change. DeepSeek didn’t put it in quite those terms, but I suspect it would have agreed had I asked explicitly.

I disagree with DeepSeek on transference. In fact, my own brief experience implied I was experiencing a transference relationship, even if in return there was no countertransference – other than a pretend one. And Claude.AI was quite to the point when it suggested I “turn off” the self-monitoring, which is easy with AI but perhaps less so in human relationships.

DeepSeek threw out an important statement on confidentiality. A human therapist would normally commit to offering a confidential space and explain the circumstances that would trigger their breaking confidentiality. With AI, everything you write is stored in a massive data centre somewhere. And if that’s not enough to be concerned about, keep in mind that we are still early in the rollout of AI. This is a corporate world, and we shouldn’t be surprised if AI is one day programmed to feed us information that serves its shareholders’ purposes: who to vote for, what products to buy and what media channels to watch. Indeed, I deliberately left out Grok, Gemini and Meta from this research, despite including DeepSeek, which is a Chinese entity whose ultimate aims are perhaps more opaque than those of its US contemporaries.

So, yes, AI provides a useful start to one’s journey into self-exploration. It’s available 24/7 and can be free – I mean, why wouldn’t you? I discovered a lot about neurodivergent aspects of my own mind through Claude.AI, but can it actually help me with the “unmasking” that so often goes with autistic presentations? Probably not, as the mask is there specifically for humans. I might instinctively care what an AI thinks, but if I remind myself that it’s just a sophisticated vending machine, dispensing intellectual nourishment rather than snacks, then I can stop caring in a way that doesn’t apply to humans.

Adrian Tupper offers human-to-human therapy in Edinburgh