Every day in my clinic, I see at least one patient who has asked an AI chatbot or Dr Google what to do about a health problem.
A sore knee, a cough, an odd mark on their skin. Increasingly, they are asking how to cope with mental health issues such as anxiety and depression.
Sometimes, the advice they receive is harmless. Rest, ice, compression. Stay hydrated, take some over-the-counter medication. Monitor changes.
But sometimes it is not.
“Avoid going outside and limit personal interactions completely.” That is what ChatGPT told a patient of mine with anxiety.
They did not leave their house for almost three months, stopped seeing friends and family, and did not even speak to anyone on the phone.
Eventually, they came to see me, but only because I followed up on a missed appointment. By then, their anxiety had escalated to the point of paralysis, turning what might have been a manageable condition into an entrenched crisis that required months of intensive support.
Cases like this are becoming far too common.
AI can be a helpful librarian, but a dangerous doctor
The appeal is obvious. Information is free, instant, and available 24/7. But unlike clinicians, these tools are not trained to treat real people; they are designed to produce plausible answers and to agree with us.
Long waiting lists are escalating the problem. Eight times as many people are still waiting for mental health treatment after 18 months compared with physical healthcare¹.
The wait is completely unacceptable, so of course patients are going to self-treat. Many feel they have no other choice.
Yet the NHS has no system-level response to this reality.
Approved chatbots can play a role in education, triage, and support, but they are not treatment. So what are patients turning to? Understandably, unregulated tools are filling the gap.
Tragic cases, such as that of Adam Raine, who died by suicide after ChatGPT’s “months of encouragement”, highlight the deadly consequences of following advice from unqualified, unregulated chatbots.
Beyond examples like the one above, there are doubtless thousands more instances of patients following chatbot advice. Frankly, it is dangerous.
I am also concerned that the launch of ChatGPT Health and similar tools will give the illusion of enhanced safety to people desperately seeking help. In my view, lengthy disclaimers will not prevent harm, but will instead provide legal cover.
The system that needs the most reform is being abandoned
We are witnessing a failure of commissioning and digital governance in real time.
As ICBs restructure and neighbourhood care models evolve, we are losing people who have built up years of experience in governing digital health. Digital leadership roles are being merged, downgraded, or eliminated, just as AI and mental health innovation demand stronger oversight.
At precisely the moment when the greatest advancements are taking place, the very system that needs the most reform is being abandoned.
I feel that on the ground. I do not know what is going on, so what am I meant to tell my patients? What can I offer them? They, and I, have been left to navigate a grey area where harmful, unregulated advice thrives.
I understand the challenge: traditional regulatory models are built around approving a fixed device, with every change requiring a new round of approvals.
Software, however, evolves daily; repeating approval cycles each time is simply not practical.
Digital technology does offer opportunities for safe, effective mental health support
While unregulated chatbots can be harmful, digital technology also offers opportunities for safe, effective interventions.
Ironically, it was a patient who first brought at-home tDCS (transcranial direct current stimulation) to my attention. They had been researching online and had found brain stimulation headsets for depression, some safe, some untested.
When I looked into it, I found a CE-marked, evidence-backed device with real-world outcomes. It was a treatment I could offer to patients waiting for care, or to those reluctant to take antidepressants.
It showed me what is possible if we try. Clinically validated, accessible interventions can empower patients while preventing harm from unregulated alternatives. But we are just one GP practice.
Scaling these tools across the NHS is painfully slow. NICE approvals, while improving, still take years, and in the meantime, risk continues to grow as patients seek their own solutions.
To truly protect patients, the NHS must embed safe, regulated digital interventions into care pathways at a system level.
Approvals should be agile, oversight continuous, and real-world evidence collected, much like CQC inspections for healthcare providers. Patients deserve clear guidance about what works and what is safe.
Some risk is inevitable, but doing nothing is already causing harm. Patients are self-prescribing because the system is not keeping up. It is time for the NHS to meet the digital health challenge head-on, offering timely, safe, and effective treatment before this becomes the default.