Research

New AI research to help predict COVID-19 resource needs from a series of X-rays

January 15, 2021

Researchers, healthcare providers, and many others around the world are still grappling with COVID-19. Even a year into the pandemic, it remains challenging for doctors to predict how a patient’s condition may change over the course of the disease. Will the patient improve in the next few days or worsen to the point where more intensive care is needed? With resources under unprecedented strain, it’s important that hospitals know whether patients are likely to need escalated treatment and plan accordingly.

As part of our ongoing collaboration with NYU Langone Health’s Predictive Analytics Unit and Department of Radiology, we have developed three machine learning (ML) models that could help doctors predict how a patient’s condition may develop, so that hospitals can ensure they have sufficient resources to care for patients: 1) a model that predicts patient deterioration based on a single X-ray, 2) a model that predicts patient deterioration based on a sequence of X-rays, and 3) a model that predicts how much supplemental oxygen (if any) a patient might need based on a single X-ray.


Our model using sequential chest X-rays can predict up to four days (96 hours) in advance whether a patient may need more intensive care, generally outperforming predictions by human experts. These predictions could help doctors avoid sending at-risk patients home too soon, and help hospitals better anticipate demand for supplemental oxygen and other limited resources.

We are open-sourcing our pretrained models and publishing our research so the broader community can benefit from and build on what we’ve done.

Leveraging self-supervised learning

Previous approaches to this problem have relied on supervised training with single-timeframe images. While progress has been made with supervised training methods, labeling data is extremely time-intensive and limits how far these methods can scale. We chose instead to pretrain our ML system on two large, public chest X-ray data sets, MIMIC-CXR-JPG and CheXpert, using a self-supervised learning technique called Momentum Contrast (MoCo). This allowed us to use large amounts of non-COVID chest X-ray data to train a neural network that could extract information from chest X-ray images. We then fine-tuned the MoCo model using an extended version of the NYU COVID-19 data set.
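To illustrate the idea behind MoCo pretraining, here is a minimal sketch of its momentum-updated key encoder, assuming a ResNet-50 backbone and the momentum coefficient from the original MoCo paper; this is not our exact training code.

```python
import copy
import torch
import torchvision

# Minimal MoCo sketch: a query encoder trained by gradients and a key encoder
# updated only as an exponential moving average of the query encoder.
# The ResNet-50 backbone and m = 0.999 are assumptions for this example.
query_encoder = torchvision.models.resnet50(num_classes=128)
key_encoder = copy.deepcopy(query_encoder)
for p in key_encoder.parameters():
    p.requires_grad = False  # the key encoder is never updated by gradients

@torch.no_grad()
def momentum_update(m: float = 0.999) -> None:
    # Slowly move the key encoder's weights toward the query encoder's weights.
    for q_param, k_param in zip(query_encoder.parameters(), key_encoder.parameters()):
        k_param.data.mul_(m).add_(q_param.data, alpha=1.0 - m)
```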

MoCo relies on unsupervised learning with a contrastive loss function that maps images into a latent space, where similar images end up as vectors that are close together and dissimilar images as vectors that are farther apart. These vectors can be used as feature representations, allowing one to train classifiers with a small number of labeled examples. Recent research shows that self-supervised learning with contrastive loss functions is effective across a variety of classification tasks.
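Concretely, the contrastive objective can be computed as a cross-entropy over the similarity between a query embedding, its positive key, and a queue of negative keys. The function below is a generic InfoNCE sketch with assumed tensor shapes and temperature, not our exact implementation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(q: torch.Tensor, k_pos: torch.Tensor, queue: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    """Contrastive loss over one positive key and a queue of negatives.

    q:      (batch, dim) query embeddings
    k_pos:  (batch, dim) positive key embeddings (augmented views of q)
    queue:  (dim, num_negatives) negative keys, assumed already L2-normalized
    """
    q = F.normalize(q, dim=1)
    k_pos = F.normalize(k_pos, dim=1)
    # Positive logits: similarity between each query and its own key.
    l_pos = torch.einsum("nc,nc->n", q, k_pos).unsqueeze(-1)   # (batch, 1)
    # Negative logits: similarity between each query and every queued key.
    l_neg = torch.einsum("nc,ck->nk", q, queue)                # (batch, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long)     # the positive is always index 0
    return F.cross_entropy(logits, labels)
```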


After pretraining the MoCo model on MIMIC-CXR-JPG and CheXpert, we used the pretrained model to build classifiers that could predict whether a COVID-19 patient’s condition is likely to deteriorate. As mentioned, we used the NYU COVID chest X-ray data set for fine-tuning, which contains 26,838 X-ray images from 4,914 patients. This smaller data set was labeled with whether the patient’s condition worsened within 24, 48, 72, or 96 hours of the scan in question.
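As a rough sketch of how such a classifier could sit on top of the pretrained encoder, the example below trains a small head on frozen MoCo features, with one output per prediction horizon (24, 48, 72, and 96 hours); the layer names and dimensions are illustrative assumptions rather than our exact architecture.

```python
import torch
import torch.nn as nn

class DeteriorationClassifier(nn.Module):
    """Illustrative classifier on top of frozen pretrained features.

    Assumes `encoder` maps an X-ray to a `feature_dim`-dimensional vector;
    the head outputs one logit per prediction horizon (24/48/72/96 hours).
    """
    def __init__(self, encoder: nn.Module, feature_dim: int = 2048, num_horizons: int = 4):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # keep the pretrained backbone frozen
        self.head = nn.Linear(feature_dim, num_horizons)

    def forward(self, xray: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            features = self.encoder(xray)
        return self.head(features)  # deterioration logits for each horizon

# Training would use a per-horizon binary cross-entropy loss, e.g.:
# loss = nn.BCEWithLogitsLoss()(model(images), labels_within_each_horizon)
```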

We built two kinds of classifiers to predict patient deterioration. The first model predicts patient deterioration based on a single X-ray in a fashion similar to a previous study. The second model predicts patient deterioration based on a sequence of X-rays by aggregating the image features via a Transformer model.
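The sketch below shows one way the per-image features could be aggregated with a Transformer encoder before classification; the layer sizes and the mean-pooling step are assumptions for illustration, not our exact sequence model.

```python
import torch
import torch.nn as nn

class SequenceAggregator(nn.Module):
    """Sketch of aggregating a sequence of per-X-ray feature vectors with a
    Transformer encoder; layer sizes here are illustrative assumptions."""
    def __init__(self, feature_dim: int = 2048, model_dim: int = 256,
                 num_layers: int = 2, num_horizons: int = 4):
        super().__init__()
        self.project = nn.Linear(feature_dim, model_dim)
        layer = nn.TransformerEncoderLayer(d_model=model_dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(model_dim, num_horizons)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_images, feature_dim), one row per X-ray in the series
        tokens = self.project(image_features)
        encoded = self.transformer(tokens)
        pooled = encoded.mean(dim=1)  # simple mean pooling over the series (an assumption)
        return self.head(pooled)      # deterioration logits per horizon
```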

Helping with resource planning

Using self-supervised learning without having to rely on labeled data sets is crucial, as few research groups have enough labeled COVID chest X-rays to train AI models. Building AI models that predict from a sequence of X-rays is particularly valuable because it mirrors how radiologists work and is more accurate for longer-term predictions. Importantly, this method also accounts for how a COVID infection evolves over time.

Based on reader studies that we conducted with radiologists at NYU Langone, our models that used sequences of X-ray images outperformed human experts at predicting ICU needs, mortality, and overall adverse events over the longer term (up to 96 hours). Being able to predict a patient’s supplemental oxygen needs would also be a first, and could help hospitals as they decide how to allocate resources in the weeks and months to come. With COVID-19 cases rising again across the world, hospitals need tools to predict and prepare for upcoming surges as they plan their resource allocations. Our models could help.

“We have been able to show that with the use of this AI algorithm, serial chest radiographs can predict the need for escalation of care in patients with COVID-19,” says William Moore, MD, a Professor of Radiology at NYU Langone Health. “As COVID-19 continues to be a major public health issue, the ability to predict a patient’s need for elevation of care — for example, ICU admission — will be essential for hospitals.”

These models are not products, but rather research solutions intended to help hospitals with resource planning in the days and months to come. While hospitals have their own unique data sets, they often don’t have the computational power necessary to train deep learning models from scratch. We are open-sourcing our pretrained models (and publishing our results) so that hospitals with limited computational resources can fine-tune the models using their own data; this fine-tuning can be done with a single GPU.
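As a hedged usage sketch, the loop below shows what single-GPU fine-tuning of a pretrained backbone might look like; the checkpoint path, synthetic data, and hyperparameters are placeholders, not values from our released code or paper.

```python
import torch
import torchvision

# Hedged sketch of single-GPU fine-tuning; the checkpoint path, data, and
# hyperparameters are placeholders rather than values from the released code.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torchvision.models.resnet50(num_classes=4).to(device)  # one output per horizon
# state = torch.load("moco_pretrained.pt", map_location=device)  # placeholder checkpoint
# model.load_state_dict(state, strict=False)

# Synthetic stand-in for a hospital's own labeled X-rays (placeholder data).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 4)).float()
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(images, labels), batch_size=4)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # fine-tune only the final layer
criterion = torch.nn.BCEWithLogitsLoss()

for batch_images, batch_labels in loader:
    batch_images, batch_labels = batch_images.to(device), batch_labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(batch_images), batch_labels)
    loss.backward()
    optimizer.step()
```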

Both NYU Langone Health and Facebook AI remain committed to the principles of open science, and hope that by releasing this research, hospitals and the community at large can build upon what we’ve done so far — and that our models help the experts make crucial decisions and better serve patients with their limited time and resources.

Read the full paper:

COVID-19 prognosis via self-supervised representation learning and multi-image prediction

Pretrained models available on GitHub

Written By

Anuroop Sriram

Research Engineer

Matthew Muckley

Research Engineer

Koustuv Sinha

Research Assistant

Nafissa Yakubova

Program Manager