PRATIK SHAH

Generative Deep Learning for Medical Images

Research studies led by Dr. Shah in his laboratory have created new paradigms for using low-cost images, captured with simple optical principles, for point-of-care clinical diagnosis, reducing dependence on specialized medical imaging devices and on biological and chemical processes. Recent peer-reviewed publications have communicated interpretable systems and methods for clinical translation of generative, predictive, and classification algorithms that extract medical diagnostic information from cells, tissues, and organs. For example:

  • The generalizability of deep learning models for segmenting complex patterns from images is not well understood and often rests on the anecdotal assumption that increasing training data improves performance. Research findings led by Dr. Shah, published in a Cell Reports Methods paper, reported a novel end-to-end toolkit for improving the generalizability and transparency of clinical-grade DL architectures. Researchers and clinicians can use this toolkit to identify hidden patterns embedded in images and to overcome underspecification of key non-disease and clinical labels, helping decrease false-positive and false-negative outcomes in high-dimensional learning systems. The key findings from this study focused on the evaluation of medical images, but the methods and approach should generalize to other RGB and grayscale natural-world image segmentation tasks. Methods for benchmarking, visualization, and validation of deep learning models and images communicated in this study have wide applications in biomedical research and in uncertainty estimation for regulatory science purposes. (Project and publication link)
  • In a collaboration led by Dr. Shah with Brigham and Women’s Hospital in Boston, MA, a novel “Computational staining” system that digitally stains photographs of unstained tissue biopsies with Haematoxylin and Eosin (H&E) dyes to diagnose cancer was published. This research also described an automated “Computational destaining” algorithm that can remove dyes and stains from photographs of previously stained tissues, allowing reuse of patient samples. This method used neural networks to provide physicians with timely information about the anatomy and structure of the organ while saving time and precious biopsy samples. (Project and publication link)
  • In a collaboration led by Dr. Shah with Stanford University School of Medicine and Harvard Medical School, several novel mechanistic insights and methods were reported to facilitate benchmarking and clinical and regulatory evaluations of generative neural networks and computationally H&E stained images. Specifically, high-fidelity, explainable, and automated computational staining and destaining algorithms were trained to learn mappings between pixels of non-stained cellular organelles and their stained counterparts. A novel and robust loss function was devised for the deep learning algorithms to preserve tissue structure (an illustrative sketch of such an objective appears after this list). This research communicated that virtual staining neural network models developed in Dr. Shah’s research lab generalized to accurately stain previously unseen images acquired from patients and tumor grades not part of the training data. Neural activation maps generated in response to various tumors and tissue types provided the first instance of explainability and of the mechanisms used by deep learning models for virtual H&E staining and destaining. Image-processing analytics and statistical testing were used to benchmark the quality of the generated images. Finally, the computationally stained images were successfully evaluated with multiple pathologists for prostate tumor diagnoses and clinical decision-making. (Project and publication link)
  • In a research study led by Dr. Shah, a complementary end-to-end deep learning framework for automatic classification and localization of prostate tumors from non-stained and virtual H&E stained core biopsy images was developed. A computationally H&E stained patch was first generated from a non-stained input image using the generative models described above and was then fed into a ResNet-18 classifier to be labeled as tumor or non-tumor. A deep weakly supervised gradient backpropagation (GBP) algorithm was used to localize class-specific (tumor) regions on images output by the ResNet-18 classifier. If an input image patch was classified as tumor, the GBP localization module generated a saliency map locating the tumor regions on the computationally stained images (see the pipeline sketch after this list). The core contributions were to extend the utility and performance of generative virtual H&E staining deep learning methods, models, and computationally H&E stained images to tumor localization and classification. (Publication link)
  • In a collaboration led by Dr. Shah with Beth Israel Deaconess Medical Center in Boston, MA, the use of dark-field imaging of the capillary bed under the tongue of consenting patients in emergency rooms for diagnosing sepsis (a bloodborne bacterial infection) was investigated. A neural network capable of distinguishing between images from non-septic and septic patients with more than 90% accuracy was reported for the first time. This approach can rapidly stratify patients, support rational use of antibiotics, reduce disease burden in hospital emergency rooms, and combat antimicrobial resistance. (Project and publication link)
  • Dr. Shah led research studies that showed that signatures associated with fluorescent porphyrin biomarkers (linked with tumors and periodontal diseases) were successfully predicted from standard white-light photographs of the mouth, thus reducing the need for fluorescent imaging at the point-of-care. (Project and publication link)
  • Research studies led by Dr. Shah reported automated segmentation of oral diseases by neural networks from standard white-light photographs and correlations of disease pixels with systemic health conditions such as optic nerve abnormalities in patients for personalized risk scores. (Project and publication link, Project and publication link)
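
The loss function and pipeline mentioned in the items above can be summarized informally. The first sketch below is a minimal, illustrative PyTorch objective for a conditional-GAN-style virtual staining generator: the adversarial and pixel-wise terms follow the common pix2pix recipe, and a gradient-difference term stands in for the structure-preserving loss, whose exact form is not given here. All function names and weighting factors are hypothetical, not the published formulation.

```python
# Illustrative sketch only: a structure-preserving generator objective for
# virtual H&E staining. Term weights (lambda_rec, lambda_struct) are assumptions.
import torch
import torch.nn.functional as F

def gradient_difference(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Penalize mismatched spatial gradients so tissue edges and structure
    in the generated image track the real stained image."""
    dx = F.l1_loss(x[..., :, 1:] - x[..., :, :-1], y[..., :, 1:] - y[..., :, :-1])
    dy = F.l1_loss(x[..., 1:, :] - x[..., :-1, :], y[..., 1:, :] - y[..., :-1, :])
    return dx + dy

def generator_loss(fake_logits, generated, target, lambda_rec=100.0, lambda_struct=10.0):
    """fake_logits: discriminator scores for (input, generated) pairs."""
    adv = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    rec = F.l1_loss(generated, target)               # pixel-wise fidelity
    struct = gradient_difference(generated, target)  # structure preservation
    return adv + lambda_rec * rec + lambda_struct * struct
```

The second sketch outlines the staining-then-classification pipeline: a trained virtual-staining generator produces an H&E-like patch, a ResNet-18 with a two-class head labels it as tumor or non-tumor, and a guided variant of gradient backpropagation produces a saliency map for patches predicted as tumor. The generator placeholder, two-class head, and guided-ReLU hook are illustrative assumptions; the published implementation may differ.

```python
# Illustrative sketch only: virtual staining -> ResNet-18 classification -> GBP saliency.
import torch
import torch.nn as nn
from torchvision import models

# 1) Virtually stain a non-stained patch. nn.Identity() stands in for a trained
#    virtual-staining generator, which would normally be loaded here.
staining_generator = nn.Identity()
unstained_patch = torch.rand(1, 3, 224, 224)          # stand-in for a real RGB patch
with torch.no_grad():
    stained_patch = staining_generator(unstained_patch)

# 2) Classify the computationally stained patch as tumor vs. non-tumor.
classifier = models.resnet18(weights=None)
classifier.fc = nn.Linear(classifier.fc.in_features, 2)  # classes: 0 = non-tumor, 1 = tumor
classifier.eval()

# 3) Guided backpropagation: keep only positive gradients at every ReLU so the
#    saliency map highlights pixels supporting the predicted "tumor" class.
def guided_relu_hook(module, grad_in, grad_out):
    return (torch.clamp(grad_in[0], min=0.0),)

for m in classifier.modules():
    if isinstance(m, nn.ReLU):
        m.inplace = False                              # full backward hooks need out-of-place ReLU
        m.register_full_backward_hook(guided_relu_hook)

stained_patch.requires_grad_(True)
logits = classifier(stained_patch)
if logits.argmax(dim=1).item() == 1:                   # patch predicted as tumor
    logits[0, 1].backward()
    saliency = stained_patch.grad.abs().max(dim=1).values  # (1, H, W) tumor saliency map
```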

Examples described in this research area highlight contributions from Dr. Shah and his lab toward designing the next generation of computational medicine algorithms and biomedical processes that can assist physicians and patients at the point of care.

2021

A deep-learning toolkit for visualization and interpretation of segmented medical images

Sambuddha Ghosal and Shah P*

( *Senior author supervising research)

Cell Reports Methods 1, 100107, 2021

2021

Uncertainty quantified deep learning for predicting Dice coefficient of digital histopathology image segmentation

Sambuddha Ghosal, Xie A, Shah P*

( *Senior author supervising research)

arXiv:2011.05791 [stat.ML]

2018

Computational histological staining and destaining of prostate core biopsy RGB images with generative adversarial neural networks

Aman Rana, Yauney G, Lowe A, Shah P*

(*Senior author supervising research)

17th IEEE International Conference on Machine Learning and Applications (ICMLA). DOI: 10.1109/ICMLA.2018.00133

2020

Use of deep learning to develop and analyze computational hematoxylin and eosin staining of prostate core biopsy images for tumor diagnosis

Aman Rana, Lowe A, Lithgow M, Horback K, Janovitz T, Da Silva A, Tsai H, Shanmugam V, Bayat A, Shah P*

(*Senior author supervising research)

JAMA Network Open. DOI: 10.1001/jamanetworkopen.2020.5111

2021

Automated end-to-end deep learning framework for classification and tumor localization from native non-stained pathology images

Akram Bayat, Anderson C, Shah P*

(*Senior author supervising research, Selected for Deep-dive spotlight session)

SPIE Proceedings. DOI: 10.1117/12.2582303

2018

Machine learning algorithms for classification of microcirculation images from septic and non-septic patients

Perikumar Javia, Rana A, Shapiro NI, Shah P*

(*Senior author supervising research)

17th IEEE International Conference on Machine Learning and Applications (ICMLA). DOI: 10.1109/ICMLA.2018.00097

2017

Convolutional neural network for combined classification of fluorescent biomarkers and expert annotations using white light images

Gregory Yauney, Angelino K, Edlund D, Shah P*

(*Senior author supervising research, Selected for oral presentation)

17th IEEE International Conference on Bioinformatics and Bioengineering (BIBE). DOI: 10.1109/BIBE.2017.00-37

2017

Automated segmentation of gingival diseases from oral images

Aman Rana, Yauney G, Wong L, Muftu A, Shah P*

(*Senior author supervising research)

IEEE-NIH 2017 Special Topics Conference on Healthcare Innovations and Point-of-Care Technologies. DOI: 10.1109/HIC.2017.8227605

2019

Automated process incorporating machine learning segmentation and correlation of oral diseases with systemic health

Gregory Yauney, Rana A, Javia P, Wong L, Muftu A, Shah P*

(*Senior author supervising research)

41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). DOI: 10.1109/EMBC.2019.8857965


Event
Pratik Shah invited to speak at TEDGlobal 2017

August 27, 2017 - August 30, 2017
Arusha, Tanzania

Event
Pratik Shah @ TED2020

May 18, 2020 - July 10, 2020
Vancouver, Canada

Event
Pratik Shah @ TED2019

April 15, 2019 - April 19, 2019
Vancouver, Canada

Event
Two publications from Dr. Pratik Shah's lab at IEEE ICMLA

December 18, 2018 - December 19, 2018
Orlando, FL

Event
Pratik Shah invited to speak at View Conference 2016

October 24, 2016 - October 28, 2016
Turin, Italy