Medical imaging is an important part of modern healthcare, improving the accuracy and reliability of diagnosis and guiding the development of treatment for various diseases. Artificial intelligence (AI) is also widely used to further improve the process.
However, conventional medical image diagnosis using AI algorithms requires large amounts of annotations as supervisory signals for model training. To acquire accurate labels for AI algorithms, radiologists prepare radiology reports for each of their patients as part of clinical routine; annotation staff then extract and confirm structured labels from these reports using human-defined rules and existing natural language processing (NLP) tools. The ultimate accuracy of the extracted labels thus depends on the quality of the human work and of the various NLP tools. The method comes at a high price, being both laborious and time-consuming.
A team of engineers from the University of Hong Kong (HKU) has developed a new approach called "REFERS" (Reviewing Free-text Reports for Supervision), which can reduce the human cost by 90% by automatically acquiring supervision signals from hundreds of thousands of radiology reports at the same time. It achieves high prediction accuracy, surpassing its counterpart in conventional medical image diagnosis using AI algorithms.
The innovative approach marks an important step towards realizing widespread medical artificial intelligence. The breakthrough was published in Nature Machine Intelligence in the article entitled "Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports".
"AI-enabled medical imaging diagnosis has the potential to help medical specialists reduce their workload and improve diagnostic efficiency and accuracy, including but not limited to reducing diagnostic time and detecting subtle disease patterns," said Professor YU Yizhou, leader of the team, from HKU's Department of Computer Science under the Faculty of Engineering.
"We believe that the abstract and complex logical reasoning sentences in radiology reports provide enough information to learn easily transferable visual features. With proper training, REFERS directly learns radiographic representations from free-text reports without requiring manual labeling," Professor Yu remarked.
For REFERS training, the research team used a public database with 370,000 X-ray images and associated reports covering 14 common lung diseases, including atelectasis, cardiomegaly, pleural effusion, pneumonia and pneumothorax. The researchers succeeded in building a radiograph recognition model using only 100 radiographs and achieved an accuracy of 83% in its predictions. When the number was increased to 1,000, their model achieved an accuracy of 88.2%, outperforming a counterpart trained with 10,000 radiologist annotations (87.6% accuracy). When 10,000 X-rays were used, the accuracy reached 90.1%. In general, prediction accuracy above 85% is useful in real-world clinical applications.
REFERS achieves this by accomplishing two report-related tasks: report generation and X-ray-to-report matching. In the first task, REFERS translates X-rays into textual reports by first encoding the X-rays into an intermediate representation, which is then used to predict the textual reports through a decoder network. A cost function measures the similarity between the predicted and actual report texts, and gradient-based optimization uses it to train the neural network and update its weights.
As for the second task, REFERS first encodes the X-rays and free-text reports into the same semantic space, where the representations of each report and its associated X-rays are aligned via contrastive learning.
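A minimal sketch of this contrastive alignment, assuming a standard symmetric InfoNCE-style loss (a common choice for image-text matching; the article does not specify the exact formulation): each report embedding should be more similar to its own X-ray's embedding than to any other X-ray in the batch, and vice versa.

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.1):
    """Symmetric contrastive loss over a batch of (X-ray, report) pairs.
    Row i of each matrix is assumed to be a matched pair."""
    # L2-normalise so dot products are cosine similarities
    # in the shared semantic space.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sim = img @ txt.T / temperature          # (N, N) similarity matrix

    def xent(logits):
        # Cross-entropy with the diagonal (matched pairs) as the target.
        logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    return 0.5 * (xent(sim) + xent(sim.T))   # image-to-text + text-to-image

# Toy batch: report embeddings near their X-ray embeddings score a
# far lower loss than unrelated pairings.
rng = np.random.default_rng(1)
img = rng.normal(size=(8, 16))
aligned = img + 0.01 * rng.normal(size=(8, 16))
unrelated = rng.normal(size=(8, 16))
loss_aligned = info_nce(img, aligned)
loss_unrelated = info_nce(img, unrelated)
```

Minimising this loss pulls each report's representation toward its associated X-rays and pushes it away from the others, which is the alignment effect the paragraph describes.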
"Compared to conventional methods that rely heavily on human annotations, REFERS has the ability to gain insight from every word in the radiology reports. We can reduce the amount of data annotation by 90%, and with it the cost of building medical artificial intelligence. This marks an important step towards realizing widespread medical AI," said the paper's first author, Dr. ZHOU Hong-Yu.