Chatbots and co-pilots: how Generative AI is changing healthcare

Healthcare is a hot topic when discussing the impact of GenAI. What might change for doctors? And what must happen before it enters widespread use?

Written by Sarah Vloothuis

Senior Manager External Communications

The conversation surrounding Generative AI continues unabated, but now that the initial hype has somewhat worn off, attention is turning to the realistic long term: what it will mean for our day-to-day lives. So, understandably, the role GenAI could play in healthcare is being carefully explored. In Edinburgh, Dr Ken Sutherland, President of Canon Medical Research Europe Ltd (CMRE), and his team are already deeply absorbed in the capabilities of Generative AI, actively examining and assessing its potential.

‘Potential’ is the key word here, because when it comes to patient care, nothing can be left to chance. Generative AI has already created a step change in the way that many of us work. ChatGPT alone reports 100 million weekly users, and there is evidence that almost 50% of healthcare professionals intend to adopt AI technologies in the future. Meanwhile, GenAI is being deployed in any number of task-related areas, such as code generation, product development and smart manufacturing. Because of this, Dr Alison O’Neil, a Principal Scientist in AI Research at Canon, sees the role of the AI Scientist changing significantly. “Up to now, supervised learning has been the predominant paradigm,” she says. Supervised learning is where an algorithm is taught using expert-labelled examples (in the case of medicine, the experts are doctors). It learns from these examples and uses them as the basis for predictions or decisions when given new, similar inputs.
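
To make that paradigm concrete, here is a minimal, purely illustrative sketch in Python. The data is synthetic and the labels merely stand in for doctors’ annotations; it is a toy, not a medical model.

```python
# A minimal sketch of supervised learning: a classifier is fitted on
# expert-labelled examples, then asked to predict labels for new inputs.
# The numbers are synthetic -- in medical imaging, the features would come
# from scans and the labels from doctors' annotations.
from sklearn.linear_model import LogisticRegression

X_train = [[0.2, 1.1], [0.4, 0.9], [2.3, 3.1], [2.8, 2.9]]  # one row per case
y_train = [0, 0, 1, 1]  # expert labels, e.g. 0 = benign, 1 = suspicious

model = LogisticRegression()
model.fit(X_train, y_train)          # learn from the labelled examples

print(model.predict([[2.5, 3.0]]))   # predict for a new, similar input -> [1]
```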

However, Generative AI models such as ChatGPT and GPT-4 are unsupervised, so do not rely on expert-labelled data. For some AI Scientists, this means that a huge chunk of the everyday work (“building from the ground up,” as Dr O’Neil calls it) of bringing an algorithm to life is no longer necessary, as a working model already exists. Instead, much of the work lies in interrogating the models, interacting with them and answering questions of safety and ethics. Dr O’Neil and her fellow AI Scientists will, however, continue to work on supervised models, in order to retain clear control of the data they use when developing medical imaging software.


That said, deploying Generative AI into patient-facing healthcare is somewhat trickier than in other industries. The risks, of course, are simply much higher. Take the most well-known deployment of GenAI – chatbots – as an example. It’s reported that Google’s Med-PaLM 2 chatbot is already in testing at the Mayo Clinic, and Dr Sutherland sees great potential value in such technologies, but has concerns around risk and liability. “It’s potentially very useful as a means of providing healthcare where there's really very little available – in remote parts of the world or in areas of population density where there are not enough clinicians,” he says. “But even if it is as effective or more effective than a human, are we going to accept it when they make an error? And if so, who is liable?”

A far less risky place for clinicians to use GenAI solutions is in tackling their heavy, time-intensive administrative loads – what Dr Sutherland refers to as “high value admin”. In this area, GenAI has a substantial amount to offer. “When it comes to retrieving relevant data, interpreting it, writing up reports and making the right decisions for a patient, a lot of the information you need is already in digital format, often in the patient record,” explains Dr O’Neil. “And this is ideal input data and work for generative models that appear to be showing the sort of reasoning capability we didn't have before.”

In practical terms, this could be a scenario where clinicians have 100,000 patient records in their hospital and seek to invite a very specific group of those patients to a clinical trial. A well-designed prompt to a GenAI tool could eliminate the need to trawl thousands of records in search of the perfect cohort – even when the data is incomplete or imperfect. “Equally, we can start to generate output,” explains Dr Sutherland. “Report segments, for example, in very specific formats, or just more free-flowing paragraphs. This means that instead of having to sit down and dictate or type, a radiologist can just read the report and change it if it's wrong.” This, at a time when every minute counts, could save a phenomenal number of clinician hours.
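
As a rough sketch of how such a cohort search might be posed, the snippet below screens one record at a time against trial criteria. The `call_llm` stub and the record format are assumptions for illustration – this shows the general pattern, not any CMRE or vendor tool.

```python
# Hypothetical cohort screening with a generative model. `call_llm` is a
# placeholder for whichever LLM client is actually used.
import json

def call_llm(prompt: str) -> str:
    return "UNSURE: no model attached"  # swap in a real client here

def screen_record(record: dict, criteria: str) -> str:
    """Ask the model whether one patient record matches the trial criteria."""
    prompt = (
        "You are screening patients for a clinical trial.\n"
        f"Inclusion criteria: {criteria}\n"
        f"Patient record (may be incomplete): {json.dumps(record)}\n"
        "Answer ELIGIBLE, INELIGIBLE or UNSURE, with a one-line reason."
    )
    return call_llm(prompt)

record = {"age": 67, "diagnosis": "type 2 diabetes", "medication": ["metformin"]}
print(screen_record(record, "adults over 60 with type 2 diabetes, not on insulin"))
```

Framing the answer as ELIGIBLE/INELIGIBLE/UNSURE keeps a human in the loop for uncertain cases – which matters precisely when records are incomplete or imperfect.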

"In my opinion, the biggest single challenge, when it comes to all AI systems, is ensuring that the data is appropriate. The potential to embed bias is a big problem for all AI systems."

An additional benefit of using a generative model for report generation is that reports can quickly be translated into simplified terms. It can be challenging for anyone without a medical background to decipher the language used in these kinds of documents – even those in social care, who need to understand the ongoing home requirements of a patient. “Doctors are very, very good at this,” Dr Sutherland explains. “They will sit down and explain these reports and outcomes clearly to patients and caregivers. But it can take a long time to translate these reports from medical terminology into layperson terminology.” These more accessible reports could simply become part of the official medical record, he predicts, with further easy-to-digest content, such as lifestyle advice and treatment plans, added in for good measure.
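
In practice, that translation step can be as simple as a carefully worded prompt. The template below is an invented example of the kind of instruction that might be used; the wording is an assumption, not any deployed system’s prompt.

```python
# A hypothetical prompt template for producing a lay-language version of a
# clinical report. The wording is illustrative only.
LAY_SUMMARY_PROMPT = """\
Rewrite the following radiology report for the patient and their carers.
Use plain language, expand all abbreviations, and keep every clinical fact.
Do not add advice or findings that are not in the report.

Report:
{report}
"""

report = "CXR: No focal consolidation. Cardiomediastinal contours normal."
print(LAY_SUMMARY_PROMPT.format(report=report))  # sent to the generative model
```

The final instruction matters: constraining the model to the facts in the source report is one simple guard against it inventing content.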

He sees a future where every doctor enjoys the assistance of a “co-pilot” – that is, a Generative AI assistant able to support their everyday practice in a multitude of ways, not limited to report writing and data retrieval. In the UK, for example, doctors work to guidelines set by the National Institute for Health and Care Excellence. These guidelines set out best practice for the diagnosis, treatment and management of various medical conditions, and provide a framework for doctors as they make daily decisions around patient care. Like any guidelines, they are regularly updated, which requires doctors to keep on top of the latest developments. In these circumstances, a GenAI tool that can surface the latest guidance as it’s needed would be invaluable.

“In theory, your co-pilot could be proactively interpreting as you work with patient information and may be able to suggest next steps based on the latest guidelines,” he says. “These models could have access to state-of-the-art technical papers or medical research papers you might not have. You could be describing a treatment plan and be contradicted by the AI, which says, ‘Based on the most recent medical research, are you aware that this treatment plan no longer meets the latest recommendations for a patient with this condition?’ These kinds of proactive suggestions are potentially very, very powerful.” More powerful still is the ability to draw on data showing the historic effectiveness of treatments, while drilling down into age, weight, height, ethnicity and more. “Who else like me has had this condition? What's the best course of action for me based on that data? That's effectively precision medicine by another route.”
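
One plausible shape for such a co-pilot is retrieval-augmented generation: fetch the most relevant guideline passages, then ask the model whether a proposed plan conflicts with them. The toy keyword retriever and guideline text below are stand-ins; a real system would use a maintained guideline corpus and a proper retriever.

```python
# A rough retrieval-augmented-generation sketch of the co-pilot pattern.
# Guidelines and the scoring method are invented for illustration.
def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by crude keyword overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(passages,
                    key=lambda p: len(words & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

guidelines = [
    "2024 update: first-line treatment for condition X is drug B, not drug A.",
    "Patients over 75 with condition X need a renal function check first.",
    "Condition Y: no change to 2021 recommendations.",
]

plan = "Start drug A as first-line treatment for condition X."
context = "\n".join(retrieve(plan, guidelines))
prompt = (f"Latest guidance:\n{context}\n\n"
          f"Proposed plan: {plan}\n"
          "Does the plan conflict with the guidance? Answer briefly.")
print(prompt)  # this prompt would then go to the generative model
```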


There is, however, an elephant in the room when it comes to discussing Artificial Intelligence of any kind: bias. “For us to actually adopt a piece of technology, we have to be certain it's safe, effective and unbiased,” says Dr Sutherland. “We must be completely confident for ourselves, the regulators and, most importantly, users and patients.” Dr O’Neil agrees, but stresses that bias is fundamentally an issue of data – and data is an issue in and of itself. “The usual answer [to bias] is to be very careful about what data you train on,” she says. “Being aware of the potential biases of a model is probably the first step in countering them. But sourcing a balanced dataset is hard, and the reality is that patient data availability is a big challenge.” So, while using Generative AI tools to, say, translate documents between languages is relatively easy (after all, there is a world of openly available language data for them to work with), medicine presents a far trickier undertaking.
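
That “first step” of awareness can begin with something as simple as auditing a training set against known population proportions. The categories and reference shares below are invented purely for illustration.

```python
# A minimal bias audit: compare group proportions in a training set with
# reference proportions for the target population. All figures invented.
from collections import Counter

def audit_balance(records: list[dict], key: str, reference: dict) -> dict:
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {group: {"dataset": counts.get(group, 0) / total,
                    "reference": share}
            for group, share in reference.items()}

records = [{"sex": "F"}] * 30 + [{"sex": "M"}] * 70
print(audit_balance(records, "sex", {"F": 0.5, "M": 0.5}))
# {'F': {'dataset': 0.3, 'reference': 0.5}, ...} -- F is under-represented
```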

“The multimodal generative models, like GPT-4, are certainly not there yet for medical images,” she explains. “Medical images are a bigger challenge than the standard photos you might find on the internet. They are much larger and there are far fewer of them available. So, it's a work in progress just to collect suitable training datasets, especially for volumetric imaging modalities such as MRI and CT scans.” Welcome improvements in data privacy practices also mean that gaining access to the kind of clinical data required for research is becoming harder. Rather than sharing anonymised patient records externally with AI Scientists like Dr O’Neil, in the future any work on the data will need to be conducted within the environment of the data custodians. As time goes on, even this may be limited, as decentralised data puts the onus entirely back on the patient to decide how it is – and is not – used.

Smartwatches and smartphone apps also feel like a natural fit for the large-scale collection of anonymised data on everything from heart rates to menstrual cycles. But they too come with an almost in-built level of bias. “I would assume that the average person using a wearable would certainly be wealthier than the world average, which immediately creates a bias in any data gained from them,” says Dr Sutherland. “To be able to use such data reliably means you must at least understand this bias and be able to compensate for it. Although that feels like a bit of a minefield.”
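
One standard way to compensate for that kind of skew is post-stratification: reweight each record so the sample matches known population proportions. The income bands and shares below are invented purely to illustrate the arithmetic.

```python
# Post-stratification weights: up-weight under-represented groups so the
# weighted sample matches the population. All figures invented.
def post_stratify_weights(sample_counts: dict, population_share: dict) -> dict:
    n = sum(sample_counts.values())
    return {group: population_share[group] * n / sample_counts[group]
            for group in sample_counts}

# Suppose 80 of 100 wearable users are "high income", but only 30% of the
# population is. Each low-income record must then count for more.
weights = post_stratify_weights({"high": 80, "low": 20},
                                {"high": 0.3, "low": 0.7})
print(weights)  # {'high': 0.375, 'low': 3.5}
```

As Dr Sutherland notes, this only works if the bias is understood in the first place: weights can correct known skews, not unknown ones.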

Issues of specialist data aside, it's clear that Generative AI is already having a substantive impact on the world of healthcare. “Simply speaking, it’s helping doctors to do things faster, by automating existing workflows,” explains Dr O’Neil. “Fundamentally, it’s about trying to reduce the amount of time the clinician spends with EHRs (Electronic Health Records), reducing their administrative backlogs and increasing the amount of time they spend talking to their patients.” This is what the work of CMRE is all about – creating tools that “turbo-charge” the skills of clinicians and give the best possible results to their patients. And while there’s still a long way to go, for both patients and doctors, the small benefits of using Generative AI in healthcare quickly add up to major changes. “After all,” smiles Dr O’Neil, “patients are now asking ChatGPT their medical questions. So, it's already happening.”

