Can Doctor AI predict cancer, save lives?

There’s mounting evidence that it can, but ethical and logistical questions still cloud AI’s medical future.


A patient waits anxiously in the doctor’s office. The specialist walks in to inform them that they have been diagnosed with cancer. But there is good news. It has been discovered at an early stage. They have excellent prospects for a full recovery.

An artificial intelligence tool had analysed the patient’s entire medical history for red flags. Noticing several early indicators, it concluded that the patient had a high risk of developing cancer. So, the patient was sent for imaging tests.

The images were analysed by another AI program and were classified as indicative of early-stage cancer. Yet another platform screened the patient’s pre-existing conditions and associated prescriptions to help the doctor avoid medication combinations that could interact adversely. And still another AI system helped streamline administrative paperwork and improve the efficiency of appointment scheduling with specialists.

Right now, this image of AI seamlessly integrated into every aspect of healthcare is largely science fiction. But a number of researchers and companies are hoping to turn this into reality within a few years.

The emergence of generative AI platforms, such as ChatGPT, has turbocharged a global debate over the future of human-machine relations. These programmes can process and generate language-based content, and they are interactive and intuitive in ways that previous generations of AI were not. People have also turned to platforms like ChatGPT for therapy.

While generative AI has led to a plethora of headlines, many cogs in the machine of modern medicine are becoming more intelligent by embracing a different kind of AI – one that could fundamentally transform healthcare but has also thrown up a complex set of questions that could define the future of the sector.

Can AI really help doctors foretell diseases? Can it also help make treatment better? What are the rules of this game? And what are the risks?

The short answer: AI has shown promise in diagnosing, predicting and potentially even treating a range of medical conditions, say leading scientists and entrepreneurs driving the technology. But it is early days. There have been – and will be – stumbles. And key technical limitations as well as ethical concerns remain unaddressed.

An AI-based camera being used to image a child with cerebral malaria [Business Wire/AP]

Not a new journey

Healthcare AI has been around longer than most might expect. In the 1970s, researchers at Stanford University created an early AI tool named MYCIN, which aimed to aid physicians in diagnosing and treating bacterial blood infections and meningitis. It encoded the knowledge of experts in a narrow domain as if-then statements – functioning like an intelligent flowchart, where yes-or-no answers about the patient’s situation lead down a path to one of a set of predetermined responses.

Used for the limited purpose of gathering information about a patient and trying to diagnose the infection, MYCIN performed on par with bacterial disease experts. But this rules-based approach gave it little ability to learn.
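To picture how such a rules-based system works, consider the toy sketch below. It is a hypothetical Python illustration of the if-then approach, not MYCIN’s actual rule base, and the rules themselves are invented rather than clinically meaningful.

```python
# A hypothetical sketch of a rules-based expert system in the spirit of MYCIN.
# The rules below are invented for illustration only and carry no clinical meaning.

def suggest_diagnosis(findings: dict) -> str:
    """Walk a fixed set of if-then rules, like an intelligent flowchart."""
    if findings.get("gram_stain") == "negative" and findings.get("shape") == "rod":
        if findings.get("site") == "blood":
            return "suspect gram-negative bloodstream infection; order a culture"
        return "suspect enteric infection; order a culture"
    if findings.get("gram_stain") == "positive" and findings.get("shape") == "cocci":
        return "suspect streptococcal or staphylococcal infection"
    return "insufficient evidence; ask the physician for more information"

# Each answer leads down one branch of the flowchart to a predetermined response.
print(suggest_diagnosis({"gram_stain": "negative", "shape": "rod", "site": "blood"}))
```

The knowledge lives entirely in the hand-written rules: the program can only ever return one of its predetermined responses, which is why such systems could not learn from new cases.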

The form and flexibility of healthcare AI have changed dramatically since MYCIN. There are now numerous types of AI being researched for various healthcare responsibilities. In the United States, from 2018 to 2019, the use of AI among life sciences organisations and healthcare providers more than doubled.

The pandemic has only accelerated that trend. Globally, 2021 saw investment in healthcare AI double over the previous year. Last year, the international medical AI market was valued at more than $4bn and is expected to grow by nearly a quarter annually over the next decade.

Much of the progress has been driven by machine learning, in which AI aims to mimic the gradual way human minds learn. Leading the way are artificial neural networks (ANNs) – webs of nodes connected like neurons and organised into layers. Each layer analyses information and performs operations before passing the result forward to the next.

Ask a neural network to identify a tumour, for example, and the program might start by highlighting edges and gradients, helping “identify boundaries between the tumour and surrounding tissue,” says Nafiseh Ghaffar Nia, a PhD researcher at the University of Tennessee, who recently published an analysis of AI techniques in diagnosis and prediction.

As that information flows forward, subsequent layers analyse features in more depth, clocking the tumour’s irregular textures and growth patterns, until the network has assembled a picture of complex tumour characteristics – shape, size and arrangement – and eventually diagnoses the growth as benign or malignant.
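As a rough illustration of that layered flow, here is a minimal sketch of a small convolutional network, assuming PyTorch is available. The layer sizes, the 64x64 greyscale input and the two-class benign/malignant output are arbitrary choices made for illustration, not a description of any published diagnostic model.

```python
# A minimal sketch of a layered neural network for tumour-image classification.
# Assumes PyTorch; all sizes are illustrative, not from a real clinical system.
import torch
from torch import nn

model = nn.Sequential(
    # Early layers respond to low-level features such as edges and gradients,
    # which help outline boundaries between a tumour and surrounding tissue.
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # Deeper layers combine those cues into textures and growth patterns.
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    # The final layer pools everything into a benign-vs-malignant score.
    nn.Linear(16 * 16 * 16, 2),
)

scan_patch = torch.randn(1, 1, 64, 64)            # one 64x64 greyscale patch
probabilities = model(scan_patch).softmax(dim=1)  # [p(benign), p(malignant)]
print(probabilities)
```

In practice, such a network would be trained on thousands of labelled scans before its outputs meant anything; untrained, as here, it simply produces arbitrary probabilities.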

Because these ANNs can learn with less supervision, they have become a de rigueur approach for many medical applications, including cancer diagnosis, though many tools use a mishmash of AI techniques.

At the heart of it all is a clear set of medical goals that AI is being tested against, suggested Nigam Shah, the chief data scientist for Stanford Health Care. “Every AI gizmo that you look at will boil down to doing three things: classify, predict or recommend – in medical speak, diagnose, prognosticate or treat.”

A laboratory technician conducting AI-based cervical cancer screening at a test facility in Wuhan, in China’s central Hubei province, on June 15, 2023 [AFP]

The promise

The standout advantage that AI offers in diagnosis is in medical imaging – it is good at pattern recognition.

At the end of the day, said Sanjeev Agrawal, the president of the Silicon Valley healthcare predictive analytics company LeanTaaS, it can be trained on a volume of image data that is several orders of magnitude more than any one human will ever analyse.

And neural networks have had considerable practice with imagery. In 2012, a neural network swept the ImageNet Large Scale Visual Recognition Challenge – which evaluates algorithms for object detection and classification – and within a few years, programs were classifying the challenge’s images more accurately than a human observer.

Since then, AI has advanced to the point where it can tackle truly complex imaging problems. Agrawal points to Google’s AI lab DeepMind and its modelling of protein structure and folding as one of the highest accomplishments of such tools. Modelling protein behaviour, as DeepMind has done, “is an imaging problem, but a three-dimensional imaging problem that human beings could never have figured out on their own”, said Agrawal.

Aside from imagery, AI can draw on other data recorded in a patient’s electronic health record to assess how likely someone is to have a given disease.

Samira Abbasgholizadeh-Rahimi, a professor at McGill University, recently conducted a review of AI applications in primary healthcare. She told Al Jazeera that she has found AI to be particularly promising for diagnosing cardiovascular diseases, ocular conditions, diabetes, cancer, orthopaedic conditions and infectious diseases.

Predictive AIs are even more diverse in application. Researchers have found that AI could be leveraged to predict the likelihood of many conditions – such as type 2 diabetes, heart disease, Alzheimer’s and kidney disease – based on lifestyle, medical records, genetic factors and more.
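To give a flavour of what that kind of risk prediction looks like in code, the sketch below trains a simple classifier on tabular patient features, assuming scikit-learn. The feature names, the synthetic data and the labels are all invented for illustration; a real system would be built and validated on curated clinical records.

```python
# A hypothetical sketch of risk prediction from tabular patient data.
# Assumes scikit-learn and NumPy; the data below is synthetic, not clinical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Invented features: e.g. age, BMI, blood pressure, family-history score.
X = rng.normal(size=(500, 4))
# Synthetic labels: 1 = developed the condition, 0 = did not.
y = (X @ np.array([0.8, 0.5, 0.6, 1.2]) + rng.normal(size=500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

new_patient = np.array([[1.2, 0.4, 0.9, 1.0]])  # one patient's feature values
print("Predicted risk:", model.predict_proba(new_patient)[0, 1])
```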

And the past few months have seen significant breakthroughs in the use of AI to identify cancer risks. It can beat standard models in predicting breast cancer risk, research published in June showed. In January, researchers at the Massachusetts Institute of Technology unveiled an AI-based lung cancer risk-assessment tool. And in May, Harvard scientists showed that an AI tool could identify people with the highest risk of pancreatic cancer up to three years before an actual diagnosis.

That’s not all. In March, scientists at the University of British Columbia demonstrated that an AI program could predict cancer survival rates better than previous tools.

Likewise, AI can predict the potential toxicity and effects of various medications, helping streamline the process of testing them and bringing them to market.

But machine learning tools can also get it badly wrong.

Claudia da Costa Leite (L), professor of the Department of Radiology and Oncology, and Marcio Sawamura, the vice-director of the Radiology Institute of the Clinics Hospital, of the Faculty of Medicine of the University of Sao Paulo in Sao Paulo, Brazil, on July 29, 2020, using a new AI-based platform to detect and diagnose COVID-19. Most such platforms failed [Nelson Almeida/AFP]

Failing the test

AI has the potential, at least in theory, to predict the severity of infections and model the spread of outbreaks. The COVID-19 pandemic saw an explosion of AI tools that promised to do just that. But the results were damning.

Two prominent reviews of nearly 650 AI-powered programs for COVID-19 diagnosis and treatment found none of them to be fit for clinical use. Other reviews of AI platforms for forecasting the spread of COVID-19 found them broadly ineffective – likely due, primarily, to issues with data availability.

Those outcomes represent a reality check on AI in healthcare – the tools to actually integrate it into the field of medicine are still nascent.

“Over 95 percent of AI” that Abbasgholizadeh-Rahimi studied in her review “were developed, pilot-tested, then never went to the implementation stage”, she said.

Central to the challenges confronting AI in medicine are three major limitations in the data used to develop it: paucity, access restrictions and quality.

For most AI to function, it needs to be trained on data that has been annotated by experts. Many diseases simply lack enough such data, though several techniques are being researched to reduce AI’s reliance on large volumes of expert-annotated data.

Yet, even when there is data, it is not necessarily available to AI developers. Every patient, said Shah, has a medical history with numerous data points: checkups, readouts, diagnoses and prescriptions among others. However, various healthcare organisations – from hospitals to insurance and pharmaceutical companies – log different data points. Thus, medical data gets split and locked in different silos.

On an even larger scale, efforts to leverage AI to model and forecast the spread of the pandemic were hampered by opacity from countries about vital statistics such as infection rates and mortality. Organisations such as the Clinical Research Data Sharing Alliance – a consortium of universities, pharmaceutical firms, patient advocacy groups and nonprofit data-sharing platforms – are trying to push for change. But at the moment, the medical AI data landscape is one of isolated islands, adrift amid calls for openness.

Lastly, even when data is present and available, there is the lingering difficulty of extracting quality from infrastructure often ill-designed to provide it. Electronic health records, a primary source of patient data, often offer much noise with the signal, said Abbasgholizadeh-Rahimi.

Noise can take many forms. It can be imaging data annotated in a way that makes it illegible to an AI platform. It can be data formatted or recorded in incompatible ways.

Yet, there are even deeper challenges and risks that AI in healthcare must overcome to emerge as a truly trustworthy partner of the medical community, experts point out.

OpenAI CEO Sam Altman at a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence, on Capitol Hill in Washington, Tuesday, May 16, 2023 [Patrick Semansky/AP]

When the AI errs, who do you blame?

Datasets can be biased. Abbasgholizadeh-Rahimi’s analysis of primary healthcare AI research, for instance, found that sex, gender, age and ethnicity were rarely considered. Less than 35 percent of the programmes studied had sex-disaggregated data – datasets collected and tabulated separately for women and men.

Some ethnic groups can be underrepresented or incorrectly emphasised in datasets. Just two years ago, the US National Kidney Foundation and the American Society of Nephrology recommended dropping a race-based adjustment in how blood creatinine readings were interpreted – a practice that caused the severity of many Black Americans’ kidney disease to be underestimated.

AI tools trained on such biased data and guidelines will likely perpetuate these biases, though Shah argues the “same data quality also affects human decision-making”.

Given the potential for bias and the black-box nature of proprietary neural networks, the medical AI space has seen a growing push for explainable AI, or XAI. This movement emphasises making the reasoning by which an AI tool arrives at a diagnosis, prognosis or treatment recommendation more transparent.
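One simple flavour of explainability is feature attribution – reporting how much each input nudged a particular prediction. The sketch below is a deliberately tiny, hypothetical example using a linear model with invented feature names and weights; real XAI toolkits apply far more sophisticated attribution methods to complex networks.

```python
# A hypothetical sketch of feature attribution for a single prediction.
# The feature names, weights and patient values are invented for illustration.
import numpy as np

feature_names = ["age", "smoking_years", "family_history", "biomarker_level"]
weights = np.array([0.4, 0.9, 1.1, 0.7])   # learned by some hypothetical linear model
patient = np.array([0.5, 1.5, 1.0, 0.2])   # one patient's normalised inputs

# Each feature's contribution to the risk score is weight x value.
contributions = weights * patient
for name, value in sorted(zip(feature_names, contributions), key=lambda p: -p[1]):
    print(f"{name:>16}: {value:+.2f}")
print("risk score:", round(contributions.sum(), 2))
```

A clinician reading such a breakdown can at least see which factors drove the tool’s recommendation – a point that bears directly on the question of responsibility.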

Many experts see explainability as inextricably intertwined with one of the most pressing ethical questions underlying medical AI today: Doctors make mistakes, but when AI makes mistakes, who are we going to blame more – the AI or the doctor using it?

Understanding the train of thought behind an AI tool that advises a physician could inform the degree of responsibility each has.

Likewise, the medical AI space is grappling with balancing the responsibility of protecting patient data with the need for more data sharing.

The fear of data misuse is not unfounded. In the US, the first half of 2023 saw 295 healthcare data security breaches, which affected 39 million Americans. Cybersecurity breaches aside, healthcare companies have seen no shortage of scandals over sharing patient data improperly or without proper anonymisation. In 2017, London’s Royal Free Hospital was embroiled in controversy over sharing the health data and personal information of 1.6 million patients with Google’s DeepMind.

More recently, the US Federal Trade Commission fined the popular online therapy app BetterHelp $7.8m for sharing the information of 7 million consumers with third-party platforms for advertising.

There are no easy answers to privacy concerns. Shah notes that while people might not want to share their data, they are often eager to benefit from an AI trained on others’ data.

Researchers are also working to hone analytics approaches that allow AI tools to train on less of patients’ real-world data than these platforms currently need.

Amid the rapid innovation and spiking investment, this race between medical AI and the infrastructure that informs it could prove decisive in shaping the future of health systems.

At the moment, the infrastructure is playing catch-up. Only if it catches up can that potential cancer patient in the clinic count on AI truly making an intelligent, accurate and safe prediction.

Source: Al Jazeera