5 ways AI is being used to advance cancer research

AI and its various applications are redefining the way scientists approach cancer research, according to a review published in Drug Discovery Today.

Tumors and the treatments they require are inherently complex, authors Vaishali Y. Londhe and Bhavya Bhasin, both of the Shobhaben Pratapbhai Patel School of Pharmacy & Technology Management in Mumbai, India, said in the paper, but AI is transforming the way oncologists look at cancer management.

“The uniqueness of cancers makes the mapping of their progression and early diagnosis difficult,” they wrote. “Deep learning has been applied successfully to areas that were previously difficult to understand and is setting new standards of cancer care.”

These are five areas where Londhe and Bhasin think AI is making the greatest dent in oncology:

1. Diagnosing metastases

Diagnosing skin cancer typically involves clinical screening and dermoscopic analysis followed by a biopsy and histopathological analysis, Londhe and Bhasin said, but recent advances in AI have paved the way for a less time-consuming approach. A 2017 study conducted by Esteva et al. and published in Nature used 129,450 clinical images of skin cancer to train a convolutional neural network (CNN) to identify and classify cancers, and the AI was ultimately able to detect malignancies as well as dermatologists.
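
Esteva et al. started from an ImageNet-pretrained, Inception-style CNN rather than training from scratch. The short sketch below illustrates that general transfer-learning pattern in PyTorch with a smaller ResNet backbone; the dataset folder, class labels and hyperparameters are placeholders for illustration, not the Nature study's actual setup.

```python
# Illustrative transfer-learning sketch (PyTorch/torchvision). The folder path,
# class labels and hyperparameters are placeholders, not those of Esteva et al.,
# who fine-tuned a larger Inception-style network on 129,450 images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("skin_lesions/train", transform=tfm)  # hypothetical dataset
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and swap in a lesion-class head
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:   # one pass shown; real training runs many epochs
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```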

Another group of researchers from Oregon State University used deep learning to extract information from gene expression data that helped them classify different types of breast cancer cells, revealing new biomarkers for breast cancer detection.
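
One common way to mine gene expression profiles in the manner the Oregon State group describes is to compress each profile with an autoencoder and feed the learned bottleneck to a small classifier. The sketch below illustrates that generic pattern; the gene count, layer sizes and synthetic batch are assumptions, not the group's published pipeline.

```python
# Generic autoencoder-plus-classifier sketch for gene expression data (PyTorch).
# Input dimensionality, layer sizes and training data are illustrative assumptions.
import torch
import torch.nn as nn

N_GENES, N_HIDDEN, N_CLASSES = 20000, 128, 2   # e.g. tumor vs. normal

class ExpressionAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_GENES, N_HIDDEN), nn.ReLU())
        self.decoder = nn.Linear(N_HIDDEN, N_GENES)

    def forward(self, x):
        z = self.encoder(x)          # compressed representation of the profile
        return self.decoder(z), z

ae = ExpressionAutoencoder()
clf = nn.Linear(N_HIDDEN, N_CLASSES)  # small classifier on top of the bottleneck

opt = torch.optim.Adam(list(ae.parameters()) + list(clf.parameters()), lr=1e-3)
recon_loss, cls_loss = nn.MSELoss(), nn.CrossEntropyLoss()

x = torch.rand(64, N_GENES)              # stand-in for a batch of expression profiles
y = torch.randint(0, N_CLASSES, (64,))   # stand-in labels

recon, z = ae(x)
loss = recon_loss(recon, x) + cls_loss(clf(z), y)
opt.zero_grad()
loss.backward()
opt.step()
```

Genes that contribute most strongly to the learned bottleneck are natural candidates for the kind of new biomarkers the authors mention.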

2. Segmenting tumors

Analyzing tumor volume is the step in cancer treatment that immediately follows diagnosis, but the traditional method radiologists use, the Response Evaluation Criteria in Solid Tumors (RECIST), is slow and can be off by almost 50%.

Scientists have used CNNs to segment brain tumors, liver tumors and optic pathway gliomas with greater accuracy than traditional methods. In the liver cancer study, a team used CNNs for the segmentation of liver tumors in follow-up CTs, inputting a baseline CT scan, its delineation and the follow-up scan into the CNN to achieve automated segmentation.

“A major advantage of CNNs over semiautomatic methods is that the need to customize handcrafted features is obviated because of their ability to automatically identify features,” Londhe and Bhasin wrote.
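
A minimal version of the follow-up segmentation setup described above can be sketched as a small fully convolutional network that stacks the baseline scan, its delineation and the follow-up scan as input channels. The architecture, slice size and loss below are illustrative assumptions, not the published model.

```python
# Minimal fully convolutional sketch for follow-up tumor segmentation (PyTorch).
# The three inputs are stacked as channels; layer sizes are illustrative only.
import torch
import torch.nn as nn

class FollowUpSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),   # per-pixel tumor logit
        )

    def forward(self, baseline_ct, baseline_mask, followup_ct):
        x = torch.cat([baseline_ct, baseline_mask, followup_ct], dim=1)
        return self.net(x)                      # (batch, 1, H, W) segmentation logits

model = FollowUpSegNet()
H = W = 256                                     # illustrative slice size
baseline = torch.rand(1, 1, H, W)               # baseline CT slice
mask = torch.rand(1, 1, H, W).round()           # expert delineation on the baseline
followup = torch.rand(1, 1, H, W)               # follow-up CT slice

logits = model(baseline, mask, followup)
target = torch.zeros_like(logits)               # stand-in for the ground-truth mask
loss = nn.BCEWithLogitsLoss()(logits, target)
```

Because the filters are learned end to end from the three input channels, no handcrafted texture or shape features have to be specified, which is the advantage the authors highlight.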

3. Applying precision histology

The authors said histomorphology has been “revolutionized” by precision histology, a deep learning-driven approach. While pathology and diagnostics have relied for years on the accurate interpretation of H&E-stained slides, an interpretation that can be slow and unreliable, deep neural networks (DNNs) can speed up the process. DNNs have already been used to analyze skin lesions with accuracy similar to that of practicing dermatologists, deconstructing images into pixels and aggregating them to form reproducible characteristics that yield a diagnostic pattern.

“It is likely that DNNs will soon be capable of more accurate analyses based on H&E slides owing to the developments in high-throughput whole-slide scanning technologies,” Londhe and Bhasin said. “This will also lead to the development of a new biological data pool, which will further aid precision oncology.”
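
In practice, the pixel-to-pattern aggregation the authors describe is often implemented by tiling a scanned slide into patches, scoring each patch with a CNN and pooling the scores. The sketch below illustrates that pattern with a placeholder patch classifier; the patch size, tiling and averaging rule are assumptions rather than any specific published pipeline.

```python
# Patch-level sketch for whole-slide H&E analysis (PyTorch). The tiling, patch
# size and pooling rule are illustrative assumptions, not the review's method.
import torch
import torch.nn as nn

PATCH = 224                                     # pixels per square patch

patch_classifier = nn.Sequential(               # placeholder CNN; any backbone works
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1),                            # malignancy logit per patch
)

def slide_score(slide: torch.Tensor) -> torch.Tensor:
    """Tile a slide image (3, H, W) into patches and average the patch scores."""
    patches = slide.unfold(1, PATCH, PATCH).unfold(2, PATCH, PATCH)  # (3, nH, nW, P, P)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, 3, PATCH, PATCH)
    with torch.no_grad():
        logits = patch_classifier(patches)      # one score per patch
    return torch.sigmoid(logits).mean()         # simple aggregation to a slide-level score

demo_slide = torch.rand(3, 224 * 4, 224 * 4)    # stand-in for a scanned slide region
print(f"slide-level malignancy score: {slide_score(demo_slide).item():.3f}")
```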

4. Tracking tumor development

Deep learning has also been applied to tracking tumor development. Researchers at the Fraunhofer Institute for Medical Image Computing in Germany developed a deep learning model that updates itself and becomes more accurate as it reads more CTs and MRIs, and the software also allows for easy image comparison to track tumor development between a patient’s clinic visits.

Londhe and Bhasin wrote that the approach will be most helpful in detecting cancers of the bone, ribs and spine, since such tumors are often overlooked because of time constraints.

5. Assessing stages of cancer

Analyzing a patient’s cancer stage is crucial for prognosis, but the authors said the conventional process for assessment “is associated with various limitations.” As an alternative, researchers developed a deep learning-based model to predict the survival of patients who had undergone a gastrectomy.

“The deep learning-based prognosis detection had a superior prediction ability compared with predictions based on the regular Cox regression,” Londhe and Bhasin wrote. “It showed that deep learning can provide a more individualized and precise risk-based stratification.”
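
The comparison the authors cite sets a neural survival model against a standard Cox proportional hazards fit. The sketch below shows the core of such a setup: a Cox model fit with the lifelines library alongside a small network trained on the Cox partial likelihood (a DeepSurv-style loss). The synthetic data, network size and training schedule are assumptions, not those of the gastrectomy study.

```python
# Illustrative survival-model comparison: classical Cox regression (lifelines)
# versus a small DeepSurv-style network (PyTorch). All data here are synthetic.
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.normal(size=(n, p))                        # stand-in clinical covariates
T = rng.exponential(scale=np.exp(-X[:, 0]))        # synthetic survival times
E = rng.integers(0, 2, size=n)                     # 1 = event observed, 0 = censored

# Classical Cox proportional hazards fit
df = pd.DataFrame(X, columns=[f"x{i}" for i in range(p)])
df["T"], df["E"] = T, E
cox = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print("Cox concordance index:", round(cox.concordance_index_, 3))

# DeepSurv-style network: a nonlinear risk score trained on the Cox partial likelihood
net = nn.Sequential(nn.Linear(p, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

order = np.argsort(-T)                             # sort by descending survival time
x_t = torch.tensor(X[order], dtype=torch.float32)
e_t = torch.tensor(E[order], dtype=torch.float32)

for _ in range(200):
    risk = net(x_t).squeeze(-1)
    log_risk_set = torch.logcumsumexp(risk, dim=0)  # risk-set sums via the sort order
    neg_log_lik = -((risk - log_risk_set) * e_t).sum() / e_t.sum()
    opt.zero_grad()
    neg_log_lik.backward()
    opt.step()
```

Scoring the network's risk predictions with a concordance index (for example, via lifelines.utils.concordance_index) then gives the like-for-like comparison with Cox regression that the quote refers to.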

""

After graduating from Indiana University-Bloomington with a bachelor’s in journalism, Anicka joined TriMed’s Chicago team in 2017 covering cardiology. Close to her heart is long-form journalism, Pilot G-2 pens, dark chocolate and her dog Harper Lee.
