AI with an ethical conscience

Published: 14-Dec-2021

Jonathan Abraham, chief executive and co-founder of Healum, explains how the sector can harness the wisdom of healthcare professionals to deliver intelligent care and support planning for all patients

At Healum we have always believed that access to pro-active, personalised healthcare should be a fundamental right for every person, whatever their cultural background, location, or means.

The NHS Long-Term Plan’s ambition is to make personalised care and support planning ‘business as usual’ for 2.5 million people with long-term health conditions.

NHS England’s definition of personalised care is people having choice and control over the way their care is planned and delivered, based on what matters to them and their individual strengths and needs.

Pro-active, personalised care is all about enabling people to understand the set of health choices that are available to them, and empowering them to make those choices.

This is true whether we are talking about medication, medical services, community-based services, or eating healthy food and leading a healthy lifestyle.

Addressing inequalities

In the UK there are growing health inequalities in the uptake of personalised care and support planning.

The Health Inequalities and Personalised Care Report highlighted a widening gap in this area between people of white ethnicity and other ethnic groups.

Factors such as income, housing, environment, transport, education and work impact the ability and motivation of people to make informed choices about their care and to manage their health.

It’s hard to think about eating healthily if one is suffering from mental ill health because of poor employment or housing.

Our focus is to make it easier for people living with long-term conditions to manage their health, and we do this by improving access to the daily support they need to make healthy choices and to plan their care.

This is part of a shared decision-making process with their clinicians.

And our personalised care and support planning software and connected patient-facing apps enable healthcare professionals to provide patients with more help and support at the moments that matter.

Clinicians clearly make a series of judgements about the optimal set of medication, advice, educational content, community services, goals, actions, and resources available to their patient.

And the wisdom behind their judgements is built on a career of empirical observation: treating other patients, shared learning from peers, and the evidence-based practices that they have adopted.

But the challenge is that there are many medical and self-care options available, every patient is different, and healthcare professionals don't have the time to assess all of the options relevant to the patient in front of them.

The role of AI

Therefore, there is an important role for machine learning in assisting healthcare professionals to create personalised plans of care and support in the future.

In 2018 we were inspired by the Academy of Medical Sciences’ report which called for AI-based research into the strategies needed to maximise the benefits of treatment among patients with multimorbidity.

It explored whether machine learning tools could be developed to assist healthcare professionals to deliver comprehensive, integrated care to these patients.

It also outlined the need for patient and carer priorities to be better captured and incorporated into care plans for patients.

With the support of a research and development grant from Innovate UK’s Digital Health Technology Catalyst, we set about developing a system to do just that.

It would enable healthcare professionals to determine the optimal set of medical and non-medical choices, which could then be assembled into a personalised plan of care and support, regardless of an individual's race, gender, medical history, DNA, or socioeconomic circumstances.

We wanted to make it quick and efficient for healthcare professionals to access a set of recommendations for the patient sitting in front of them during a consultation.

And, despite their complexity, machine learning models offered us the opportunity to do this.

It meant we could present the recommendations through a probabilistic system that ranks them, making care and support planning quicker, simpler, and more relevant and, more importantly, doing so in a way that is ethical and effective.
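
To make the ranking idea concrete, here is a minimal sketch of how such a probabilistic system might order candidate options: a model estimates the probability of a positive outcome for each option given the patient's features, and the options are presented best-first. The features, candidate options, and stand-in scoring model below are illustrative assumptions, not Healum's actual implementation.

```python
# Minimal sketch: rank candidate care-plan options for one patient by a
# model's estimated probability of a positive outcome (illustrative only).

from dataclasses import dataclass

@dataclass
class Recommendation:
    option: str         # e.g. a medication, service, or lifestyle action
    probability: float  # model-estimated chance of a positive outcome

def rank_recommendations(patient_features, candidates, predict_outcome):
    """Score every candidate option and return them best-first."""
    scored = [
        Recommendation(option, predict_outcome(patient_features, option))
        for option in candidates
    ]
    return sorted(scored, key=lambda r: r.probability, reverse=True)

# Placeholder model: in practice this would be a trained classifier.
def toy_model(features, option):
    return {"dietitian referral": 0.72, "walking group": 0.64,
            "metformin review": 0.58}.get(option, 0.1)

ranked = rank_recommendations(
    {"age": 54, "condition": "type 2 diabetes"},
    ["walking group", "metformin review", "dietitian referral"],
    toy_model,
)
for r in ranked:
    print(f"{r.probability:.2f}  {r.option}")
```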

Creating an effective and ethical system

We learned early on that providing clinicians with an intelligent system for creating care and support plans will only work if its design and delivery adhere to the principles of trust, consent, diversity, efficacy, and safety.

We began by asking healthcare professionals which sources of information they trusted most and who they learn from when recommending care and support options for patients with long-term conditions.

Of the 100-plus healthcare professionals we spoke to, the biggest source of trusted intelligence was the wisdom of their peers and their patients.

When we first started our R&D work, there was no effective way to crowd-source a set of second opinions from clinical peers for a given set of patient characteristics, nor any effective way to analyse, triangulate, and present a set of optimal recommendations.

For us this led to a very simple design concept.

What if we could provide healthcare professionals with a set of trusted care and support plan recommendations based on the interventions and outcomes that their clinical peers had observed when treating similar patients?

This key concept underpinned our R&D work in using crowd-sourced peer recommendations to determine the optimal set of medical and non-medical choices for patients with type 2 diabetes.

Trust in the recommendations presented is linked to trust in the wisdom of other clinical peers who are using the software – a true live learning environment.
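
As an illustration of that crowd-sourced concept, the sketch below pools the interventions that clinical peers chose for similar patients and ranks them by the success rate observed among those patients. The patient records, similarity measure, and field names are hypothetical; Healum's actual schema and models are not public.

```python
# Minimal sketch of crowd-sourced peer recommendations: find the most similar
# peer-treated patients, then rank interventions by observed success rate.

from collections import defaultdict

peer_records = [
    # (patient features, intervention chosen, outcome observed: 1 = improved)
    ({"age": 52, "hba1c": 60}, "dietitian referral", 1),
    ({"age": 58, "hba1c": 64}, "dietitian referral", 1),
    ({"age": 55, "hba1c": 62}, "walking group", 0),
    ({"age": 31, "hba1c": 48}, "walking group", 1),
]

def similarity(a, b):
    # Crude inverse distance over shared numeric features (illustrative only).
    dist = sum(abs(a[k] - b[k]) for k in a.keys() & b.keys())
    return 1.0 / (1.0 + dist)

def peer_recommendations(patient, records, k=3):
    """Aggregate outcomes from the k most similar peer-treated patients."""
    nearest = sorted(records, key=lambda r: similarity(patient, r[0]),
                     reverse=True)[:k]
    totals, successes = defaultdict(int), defaultdict(int)
    for features, intervention, outcome in nearest:
        totals[intervention] += 1
        successes[intervention] += outcome
    # Rank interventions by success rate among similar patients.
    return sorted(((successes[i] / totals[i], i) for i in totals), reverse=True)

print(peer_recommendations({"age": 56, "hba1c": 63}, peer_records))
```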

The challenges in overcoming algorithmic bias

Algorithmic bias is an issue when using machine learning techniques for anything relating to patient care.

Overcoming this algorithmic bias is paramount if we are ever going to use machine learning to present the optimal set of health choices for any patient.

Although we built our machine learning models to incorporate ethnicity and socioeconomic background data, we faced significant challenges in training those models on appropriate datasets.

Firstly, our live-learning data alone would not be large enough to train models that account for ethnicity, income, or region, so we had to find historical datasets on which to train and validate our machine learning models.
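
One way such validation might guard against bias, sketched below under assumed record formats and a stand-in model: evaluate the trained model separately within each ethnicity (or income, or region) stratum, so that a performance gap in any one group is visible rather than averaged away.

```python
# Minimal sketch of a subgroup audit: measure accuracy per ethnicity stratum
# so a gap is not hidden in an overall average. Field names and the toy
# model are assumptions for the sake of the example.

from collections import defaultdict

def subgroup_accuracy(records, predict):
    """records: (features, true_outcome) pairs; features include 'ethnicity'."""
    correct, total = defaultdict(int), defaultdict(int)
    for features, outcome in records:
        group = features.get("ethnicity", "unrecorded")
        total[group] += 1
        correct[group] += int(predict(features) == outcome)
    return {g: correct[g] / total[g] for g in total}

# Toy validation set and stand-in model.
validation = [
    ({"ethnicity": "Bangladeshi", "hba1c": 64}, 1),
    ({"ethnicity": "Bangladeshi", "hba1c": 49}, 0),
    ({"ethnicity": "White British", "hba1c": 62}, 1),
    ({"ethnicity": "White British", "hba1c": 51}, 0),
]
toy_predict = lambda f: int(f["hba1c"] >= 55)

for group, acc in subgroup_accuracy(validation, toy_predict).items():
    print(f"{group}: {acc:.0%}")
```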

Secondly, the coding of these datasets is inconsistent and limited in its scope.

For example, most historical research databases do not enable us to break down people of South Asian backgrounds into Indian, Pakistani, and Bangladeshi groups.

Thirdly, there is an issue of consent and governance around the ethical use of research datasets for AI development.

We found that some private research companies were operating in a grey area and selling anonymised extracted patient information.

That was not what patients and healthcare professionals told us they wanted, and it went against our values.

Instead, we chose to work only with research databases that have rigorous ethical standards, such as the Royal College of General Practitioners' Research and Surveillance Centre, governed by the Nuffield Department of Primary Care Health Sciences at the University of Oxford.

Their data is anonymised and can only be used under a strict protocol that adheres to the standards of their Scientific Ethics Committee.

Our hope is that the learnings that we generate from this research over the next few years will provide healthcare professionals with a set of effective recommendations to include in care and support plans that overcome issues of algorithmic bias.

The future

NHSX's recent draft AI Strategy outlined that we all need to play our part in ensuring that openness, fairness, safety, and efficacy are part of the AI technologies that we bring to market.

Our approach is to ensure that the wisdom of healthcare professionals plays a part in training any machine learning algorithm.

We believe it is immensely important to provide people from all communities with personalised care and support planning that is free from algorithmic bias.

We need to include patients in our approach to AI research in order to understand how to handle consent and communicate the benefits and risks.

And this can be achieved by rigorously following the NICE Evidence Standards Framework for digital interventions, NHSX's ethical codes of practice for the development of AI technologies, and the recently published Transparency Standards for Algorithms.

Going into 2022, Healum will be opening up its live learning network to healthcare professional stakeholders across primary, secondary, community, and social settings.

We want to incorporate the wisdom of more healthcare professionals and patients in a safe and ethical way so that we can improve the quality of, and access to, personalised care and support choices for more people with long-term conditions.
