The Digital Dilemma
Why AI Cannot Replace Human Connection in Mental Health Care
A poll released by Mental Health UK has laid bare the growing desperation faced by those struggling with their mental wellbeing in Britain.
The survey, conducted in November 2025 among 2,000 individuals, revealed that 37 per cent had turned to an AI chatbot for mental health support. The reasons behind this shift are telling: around four in ten cited ease of access, while nearly a quarter pointed to lengthy NHS waiting times.
These findings should concern us all. When people resort to algorithms because they cannot access proper care, something has gone badly wrong.
The appeal of AI is understandable. Chatbots are available at any hour, ask no awkward questions, and carry none of the stigma that still surrounds seeking help. For someone in distress at three in the morning, the promise of an instant digital ear can feel like a lifeline. Yet we must ask ourselves whether these technological solutions are genuinely helping or merely papering over cracks in our mental health system.
Malcolm Hanson, Clinical Director of PTSD Resolution (Charity No. 1202649), a charity providing therapy for veterans, reservists and their families, expressed concern about the trend.
"We understand why people are drawn to AI—they are struggling, and they cannot get help quickly enough through conventional routes.
"But a human connection is intrinsic to healing from psychological trauma because we are social creatures. The language models used by AI detect cues and respond, but they do not understand the wider context in which those cues reside, and they cannot reflect back what they detect in the way a human does when listening to another person.
"This includes reading body language, picking up on subtle vocal cues, and adapting in real time to what a person genuinely needs. These are things only another human being can provide."
The numbers highlight a stark reality. More than 1.7 million people sit on mental health waiting lists in England alone. For many, an AI chatbot feels like the only option available. But accessibility must not come at the cost of effectiveness, particularly for those dealing with serious conditions.
PTSD Resolution offers an alternative model worth examining. The charity has treated 4,500 clients through its network of 200 therapists, delivering results that demonstrate what human-led therapy can achieve: an 82 per cent completion rate, 79 per cent reliable improvement, and 66 per cent recovery for PTSD cases. Crucially, this happens at just £940 per therapy course and typically within days of first contact—not months.
"What troubles me most," continued Hanson, "is that AI chatbots lack clinical oversight and cannot properly handle crisis situations. A trained therapist recognises when someone is masking their true state. A chatbot simply cannot do that."
The poll's findings should serve as a wake-up call. While AI may have a supplementary role in mental health provision—perhaps for basic education or between-session support—it cannot replace the relationship between therapist and client where genuine healing occurs.
Rather than accepting digital plasters for a wounded system, we ought to be investing in accessible, rapid human therapy. Organisations already exist that prove it can be done. The choice facing individuals in distress should never be between waiting indefinitely or talking to a machine. They deserve better, and workable solutions are already within reach.
Why AI Chatbots Fall Short for Mental Health Support
- AI chatbots have no access to a user's mental health history
- They cannot fully grasp the nuances of a serious mental health situation
- They can provide completely wrong advice, particularly when conversations go off-script
- Chatbots have a built-in tendency to agree with users rather than challenge them
- They lack the sophistication to identify and address problematic thought patterns
- During episodes of psychosis, when users are at heightened risk of self-harm and suicide, chatbots cannot recognise the warning signs
- They may inadvertently validate harmful beliefs, creating potentially dangerous situations
- No AI chatbot has UK regulatory approval to treat mental health conditions
- They cannot read body language, tone of voice, or facial expressions the way that a human can
- They offer no clinical oversight or professional accountability