November 16, 2017

AI Excels at Detecting Pneumonia in X-Rays

You can add pneumonia detection to the list of things that artificial intelligence is now better at than humans.

A group of researchers at Stanford University’s Machine Learning Group this week published a paper demonstrating how they used a deep learning technique to train a computer to automatically detect instances of pneumonia from X-ray images with a higher degree of accuracy than highly trained human radiologists.

Dubbed CheXNet, the deep learning setup is based on a 121-layer convolutional neural network that was trained on a sample of the 112,000 chest X-rays that make up ChestX-ray14, the largest such collection of frontal X-rays in the world. Andrew Ng, the renowned Stanford professor and machine learning expert, was among the group of researchers involved in the experiment.

After training the computer vision system to detect 14 different kinds of diseases on the data set, the researchers then asked CheXNet to identify whether or not pneumonia was present in a sample of 420 images taken from the dataset.

The researchers then compared the performance of the CheXNet machine against human experts trained to spot pneumonia and other diseases in chest X-rays. The human performance was obtained by analyzing the annotations that four practicing academic radiologists made on the ChestX-ray14 dataset.

The red Xs mark actual human radiologist performance, the green X is the average, and the blue line is the CheXNet neural network

The results were close, but a careful analysis showed that CheXNet outperformed the average human radiologist on both axes: sensitivity, which measures the proportion of actual positive cases that are correctly identified as such, and specificity, which measures the proportion of actual negative cases that are correctly identified as such.
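For readers unfamiliar with those two metrics, here is a minimal, illustrative Python sketch, not taken from the paper, that computes both from raw confusion-matrix counts (the example numbers are hypothetical, not CheXNet's results):

    def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
        """Compute sensitivity (true positive rate) and specificity
        (true negative rate) from confusion-matrix counts."""
        sensitivity = true_pos / (true_pos + false_neg)  # share of actual pneumonia cases that get flagged
        specificity = true_neg / (true_neg + false_pos)  # share of pneumonia-free cases correctly cleared
        return sensitivity, specificity

    # Hypothetical counts, for illustration only.
    sens, spec = sensitivity_specificity(true_pos=40, false_neg=10, true_neg=320, false_pos=50)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.80, specificity=0.86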

The academic exercise could have real-world ramifications for two main reasons: pneumonia can be hard to detect in X-rays, and about two-thirds of the world’s population lacks access to radiologists sufficiently trained to detect it.

“The appearance of pneumonia in X-ray images is often vague, can overlap with other diagnoses, and can mimic many other benign abnormalities. These discrepancies cause considerable variability among radiologists in the diagnosis of pneumonia,” the authors of the report write. “With automation at the level of experts, we hope that this technology can improve healthcare delivery and increase access to medical imaging expertise in parts of the world where access to skilled radiologists is limited.”

Andrew Ng, one of 12 authors of the report, took to Twitter to spread the news of its publication (the paper can be viewed via the Cornell University Library). “Should radiologists be worried about their jobs?” he writes. “We can now diagnose pneumonia from chest X-rays better than radiologists.”

Another interesting aspect of the experiment was how the Stanford ML Group put the machine learning system together. They opted to build CheXNet on a densely connected convolutional network, or DenseNet.

So-called DenseNets “improve flow of information and gradients through the network, making the optimization of very deep networks tractable,” the researchers write. “We replace the final fully connected layer with one that has a single output, after which we apply a sigmoid nonlinearity, outputting the probability that the image contains pneumonia.”
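The report does not spell out an implementation, but as a rough sketch of what that head swap looks like in practice, here is how one might modify torchvision’s DenseNet-121 in PyTorch. The framework, the pretrained ImageNet starting weights, and the 224x224 input size are assumptions made for illustration, not details taken from the paper:

    import torch
    import torch.nn as nn
    from torchvision import models

    # A 121-layer DenseNet; pretrained ImageNet weights are assumed here only
    # as a common transfer-learning starting point.
    model = models.densenet121(pretrained=True)

    # Replace the final fully connected layer with one that has a single output.
    model.classifier = nn.Linear(model.classifier.in_features, 1)
    model.eval()

    # Apply a sigmoid nonlinearity to turn the single logit into a pneumonia probability.
    def pneumonia_probability(images):
        # images: a batch of chest X-rays shaped (N, 3, 224, 224)
        with torch.no_grad():
            logits = model(images)              # shape (N, 1)
        return torch.sigmoid(logits).squeeze(1)

    probs = pneumonia_probability(torch.randn(2, 3, 224, 224))  # random batch, shape check only
    print(probs)  # two values between 0 and 1

The sketch covers only the architectural change the quote describes; training details such as the loss function and optimizer are not shown.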

While the subject of the paper was pneumonia detection, the CheXNet algorithm showed good results across all 14 types of diseases that are labeled in the ChestX-ray14 dataset. The results show that CheXNet outperformed two other machine learning experiments in the detection of all 14 pathology types: atelectasis, cardiomegaly, consolidation, edema, effusion, emphysema, fibrosis, hernia, infiltration, mass, nodule, pleural thickening, pneumonia, and pneumothorax.

From “CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning”

“We find that CheXNet achieves state of the art results on all 14 pathology classes,” the researchers write.
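As a hedged follow-on to the earlier sketch, and again assuming PyTorch and torchvision rather than anything specified in the report, covering all 14 labels amounts to giving the classifier 14 outputs and applying a per-class sigmoid, since a single X-ray can carry several pathology labels at once:

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_PATHOLOGIES = 14  # atelectasis, cardiomegaly, ..., pneumothorax

    model = models.densenet121(pretrained=True)
    model.classifier = nn.Linear(model.classifier.in_features, NUM_PATHOLOGIES)

    # Multi-label training commonly uses per-class binary cross-entropy;
    # BCEWithLogitsLoss applies the sigmoid internally for numerical stability.
    criterion = nn.BCEWithLogitsLoss()

    images = torch.randn(2, 3, 224, 224)                        # placeholder batch
    labels = torch.randint(0, 2, (2, NUM_PATHOLOGIES)).float()  # placeholder multi-hot labels
    loss = criterion(model(images), labels)
    print(loss.item())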

To accurately detect pneumonia, the CheXNet neural network needed to learn how to correctly identify features in the images and correlate them with actual health conditions, such as the presence of air spaces in the lung, the presence of fluid, large masses, small nodules, and enlarged hearts. The existence of the ChestX-ray14 dataset, which was published earlier this year, plays an important role here, because it is an order of magnitude bigger than previous collections of X-rays.

The impact that the CheXNet algorithm could have on the identification of pneumonia and other diseases is potentially large. The researchers note that more than 2 billion chest X-rays are taken every year, and that more than 1 million adults are hospitalized with pneumonia annually.

Considering that about 50,000 people die from pneumonia every year, finding a way to automate accurate diagnoses using computers could help many people get treatment before it’s too late.

You can read the full report, which is titled “CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning,” here.

Related Items:

Tracking the Opioid-Fueled HIV Outbreak with Big Data

How Spark and Hadoop Are Advancing Cancer Research

Fighting Sepsis with Real-Time Analytics
