A team of Google researchers has developed a Machine Learning (ML) and Augmented Reality (AR)-powered microscope that can help detect cancer in real time and could save millions of lives.
At the annual meeting of the American Association for Cancer Research (AACR) in Chicago, Illinois on Monday, Google described a prototype Augmented Reality Microscope (ARM) platform that can help accelerate and democratise the adoption of deep learning tools for pathologists around the world.
The platform consists of a modified light microscope that enables real-time image analysis and presentation of the results of ML algorithms directly into the field of view.
The ARM can be retrofitted into existing light microscopes around the world, using low-cost, readily available components, and without the need for whole slide digital versions of the tissue being analysed.
“In principle, the ARM can provide a wide variety of visual feedback, including text, arrows, contours, heatmaps or animations, and is capable of running many types of machine learning algorithms aimed at solving different problems such as object detection, quantification or classification,” Martin Stumpe, Technical Lead, and Craig Mermel, Product Manager, of the Google Brain Team, wrote in a blog post.
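The feedback loop described above, running a classifier over the field of view and projecting contours or a heatmap back into the eyepiece, can be sketched roughly as follows. This is a minimal illustration, not Google's implementation: the function names (`tumor_heatmap`, `overlay_contours`) are hypothetical, and a stubbed-out random "model" stands in for the real trained classifier.

```python
import numpy as np

def tumor_heatmap(field_of_view: np.ndarray, tile: int = 32) -> np.ndarray:
    """Stub classifier: returns a per-tile 'tumour probability' map.
    An ARM-style system would run a trained CNN here instead."""
    h, w = field_of_view.shape[:2]
    rng = np.random.default_rng(0)  # deterministic for the sketch
    return rng.random((h // tile, w // tile))

def overlay_contours(field_of_view: np.ndarray, heatmap: np.ndarray,
                     threshold: float = 0.5) -> np.ndarray:
    """Blend a binary 'suspicious region' mask into the image,
    mimicking the highlight projected into the microscope eyepiece."""
    h, w = field_of_view.shape[:2]
    # Upsample the coarse tile map back to pixel resolution.
    mask = np.kron(heatmap > threshold,
                   np.ones((h // heatmap.shape[0], w // heatmap.shape[1])))
    out = field_of_view.astype(float).copy()
    out[..., 1] = np.where(mask, 255, out[..., 1])  # mark green channel
    return out.astype(np.uint8)

frame = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in camera frame
hm = tumor_heatmap(frame)
annotated = overlay_contours(frame, hm)
```

In a real-time system this loop would repeat for every captured frame, so the overlay tracks the slide as the pathologist moves it.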
Applications of deep learning to medical disciplines including ophthalmology, dermatology, radiology, and pathology have shown great promise.
“At Google, we have also published results showing that a convolutional neural network is able to detect breast cancer metastases in lymph nodes at a level of accuracy comparable to a trained pathologist,” the post said.
However, a critical barrier to the widespread adoption of deep learning in pathology is its dependence on a digital representation of the tissue, because direct visualisation through a compound light microscope, not a digital image, remains the predominant means by which pathologists diagnose illness.
Modern computational components and deep learning models, such as those built on the open-source TensorFlow library, allow a wide range of pre-trained models to run on this platform.
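The capture-infer-display loop such a platform implies can be sketched as below. This is a hedged illustration under assumptions: in practice a pre-trained TensorFlow model would be loaded (e.g. via `tf.saved_model.load`); here a stub "model" keeps the sketch self-contained, and the model path is purely hypothetical.

```python
import numpy as np

def load_model(path: str):
    """In a real deployment this would load a pre-trained TensorFlow
    classifier from `path`; a stub mean-brightness 'model' keeps the
    sketch dependency-free."""
    def model(batch: np.ndarray) -> np.ndarray:
        # Pretend probability: normalised mean brightness per image.
        return batch.mean(axis=(1, 2, 3)) / 255.0
    return model

def run_frame(model, frame: np.ndarray) -> float:
    """One iteration of the real-time loop: preprocess the camera
    frame, run inference, return a score for the AR display to render."""
    batch = frame[np.newaxis].astype(np.float32)  # add batch dimension
    return float(model(batch)[0])

model = load_model("metastasis_detector/")  # hypothetical model path
frame = np.full((128, 128, 3), 128, dtype=np.uint8)  # uniform grey frame
score = run_frame(model, frame)
```

Because the model is just a function of an image batch, swapping in a different pre-trained network (say, the prostate model instead of the lymph node model) changes only the `load_model` argument, which is the flexibility the platform is designed around.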
The Google team configured ARM to run two different cancer detection algorithms — one that detects breast cancer metastases in lymph node specimens and another that detects prostate cancer in prostatectomy specimens.
While both cancer models were originally trained on images from a whole slide scanner with a significantly different optical configuration, the models performed remarkably well on the ARM with no additional re-training, the Google Brain Team noted.
“We believe that the ARM has potential for a large impact on global health, particularly for the diagnosis of infectious diseases, including tuberculosis and malaria, in developing countries,” Google noted.