As generative AI has erupted into the cultural mainstream, it has inspired a huge amount of excitement, and sometimes even awe, across all business sectors and parts of society in a way that very few technologies achieve.
At the same time, however, it has given rise to real and valid anxieties among policymakers, professional bodies, and organisational leaders about what its full impact might turn out to be.
While this new generation of AI tools comes, of course, on the back of years of research, development, and innovation, the suddenness with which it became available to all makes this feel like a real cat-out-of-the-bag moment. Guidance and regulations now lag significantly behind the technologies they are meant to oversee, and there is no clear, easy path to reasserting that oversight.
For any industry, and especially those with sensitive information environments like healthcare, there are questions that clearly need careful consideration.
How reliable, available, and consistent are these tools? What does a healthy working relationship with them look like? How will they change the professional practices of people using them? Will their continuing evolution lead to further surprises down the road?
The administrative burden
In a mindset of safety and responsibility, it’s not surprising to see moves like that of the Australian Medical Association in July, which recommended new controls and transparency rules after doctors in one healthcare network were told to stop using ChatGPT to write medical records.
As these organisations play catchup to the new technological possibilities of generative AI, they will also be playing a big role in defining how to make it a safe, productive force for good in society.
At the same time, however, there is something instructive about those doctors' use of ChatGPT: as in every other part of society, the technology is already an active part of the system's working processes.
This should give organisations pause over any temptation to react simply against the use of AI: workers, where they need to, will already be turning to it.
And the need to do so is, in healthcare, particularly acute.
Research from Nuance last year, surveying medical professionals in NHS England hospital trusts, found that the hours spent working on clinical documentation had risen by over 25% since 2015, to an average of 13.5 hours per week.
Another concern revealed by the study is that much of this documentation – an average of 3.2 hours per week – is completed outside working hours.
Such alarming statistics come in the context of a sector laden with administrative processes, which too often remain manual, paper-based, time-consuming, and error-prone.
It's a workflow which not only adds stress to an already-intense job, but also has knock-on effects on the quality of patient care, waiting times, and accuracy in the delivery of services.
With ongoing waves of strike action further interrupting services as professional bodies seek a more-sustainable resolution to current conditions, any route which enables healthcare workers to do more with the capacity they have must be seized.
Embracing AI in healthcare
The way forward, then, is not to push back against the introduction of AI into medical workflows in a blanket fashion, but to put tools and frameworks in place so that, when doctors, nurses, and allied health professionals do reach for AI, the easiest route is through tools which have been designed for and integrated with the broader healthcare system.
There are already significant success stories of NHS trusts benefiting from the introduction of AI-powered solutions such as voice recognition.
Frimley Health NHS Trust, for instance, moved from a process which relied on transcription services and handwritten reports to a speech-enabled Electronic Patient Record system.
Now, rather than going through multi-stage processes of scanning and retyping to maintain clear communications between healthcare staff and patients, the trust uses a cloud-based speech recognition solution which allows staff to dictate notes and intelligently converts them into structured documents which meet the requirements of the Professional Record Standards Body.
In the future, more-advanced solutions will combine the rapid enhancements taking place across conversational, ambient, and generative AI: ambient AI to capture clinical notes in a range of healthcare settings, generative tools to automatically create clinical documentation, and natural-language interfaces which allow clinicians to easily identify pertinent information in patient records.
The result will be to give hard-working healthcare professionals vital extra hours in their day to focus on their patients and deliver life-changing outcomes.
In an environment where information accuracy is critical, anxiety around introducing new technology to manage that need is understandable.
Those Australian GPs trying to refocus their workloads back on their patients, however, are not wrong.
Healthcare just needs the right routes to use this powerful new technology.