
3 Questions with Keith Paulsen

Keith Paulsen 

BioPhotonics spoke with Keith Paulsen, the Robert A. Pritzker Professor of Biomedical Engineering at Dartmouth College’s Thayer School of Engineering. Paulsen and his colleagues recently published a paper (www.doi.org/10.1364/optica.446576) about the use of a deep learning algorithm to accelerate the reconstruction of tissue images by combining scans from magnetic resonance imaging (MRI) with diffuse optical signals from near-infrared spectral tomography (NIRST). The resulting images have the potential to reveal whether breast cancer tumors are malignant or benign.

Is it correct that it is currently possible to combine MRI and NIRST data, but only through an extensive process involving contrast agents or light propagation models that has proved impractical in the normal medical workflow?

Yes, combining MRI and NIRST data is possible and can be achieved in several ways. We first used long, heavy fiber optic cables that were attached to the breast through a special MRI-compatible holder and connected to photomultiplier tube detectors located outside the scanner room. Other approaches are also possible. And while they do not necessarily involve contrast agents, they rely on complicated light propagation models that involve lengthy computations and require complex geometrical inputs to define the breast geometry, source-detector locations, and so on. In addition, some of these methods need manual segmentation of the breast MRI images to identify regions of interest and optimize NIRST image reconstruction. Manual image segmentation is time-consuming and requires expertise to define breast tissue types, including potential tumors. Accordingly, prior hardware and software systems both introduced considerable challenges to the normal medical workflow associated with breast MRI.
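To make that conventional pipeline concrete, the following is a minimal Python sketch, illustrative only and not the group's code, of a model-based, MRI-guided NIRST reconstruction: boundary measurements are fit by an iterative, regularized Gauss-Newton update, with an MRI-derived segmentation acting as a soft prior that groups image nodes into tissue regions. The linear forward operator here stands in for the finite-element light propagation solve that makes the real computation lengthy; all names, sizes, and values are assumptions.

# Hypothetical sketch of conventional model-based, MRI-guided NIRST
# reconstruction. The linear operator A stands in for a light propagation
# (diffusion) solver on a breast mesh; "regions" is an MRI-derived
# segmentation used as a soft prior. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_nodes, n_regions = 64, 500, 3

A = rng.normal(size=(n_meas, n_nodes))            # stand-in forward operator
regions = np.zeros((n_nodes, n_regions))          # node-to-region indicator matrix
regions[np.arange(n_nodes), rng.integers(0, n_regions, n_nodes)] = 1.0

def forward(mu):
    # In practice this is a finite-element diffusion solve; repeating it at
    # every iteration is what makes the conventional approach slow.
    return A @ mu

mu_true = regions @ np.array([0.01, 0.02, 0.015])   # piecewise-constant optical properties
measured = forward(mu_true) + 0.001 * rng.normal(size=n_meas)

mu = regions @ np.full(n_regions, 0.01)              # homogeneous initial guess
for _ in range(10):
    J = A @ regions                                   # region-averaged sensitivity (soft prior)
    residual = measured - forward(mu)
    H = J.T @ J + 1e-3 * np.eye(n_regions)            # Tikhonov-regularized normal equations
    mu += regions @ np.linalg.solve(H, J.T @ residual)

print(np.round((regions.T @ mu) / regions.sum(axis=0), 4))  # recovered per-region values

The point of the sketch is the structure, not the numbers: each pass requires a fresh forward solve and a segmentation-dependent update, which is why the conventional workflow is slow and expertise-dependent.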


What types of light sources and phantoms were used in your experiments, and was open-source software used during the creation of the algorithm?

We used diode lasers as light sources. Although we have many different types of phantoms (such as agarose, liquid, or silicone phantoms with different shapes) that are used for imaging system calibration and validation, in this paper all phantoms were simulated circular domains with randomly assigned tissue optical properties and diameters. We used our open-source software, NIRFAST, to create the simulation data sets that were used to train our neural network. The software was developed for simulating light propagation in tissue and performing image reconstruction for NIRST. The deep learning-based reconstruction reported in the paper represents a conceptual breakthrough in multimodality image reconstruction, certainly in the MRI plus NIRST setting, and it is not yet available within our NIRFAST software platform. In this deep learning-based algorithm (Z-Net) for MRI-guided NIRST image reconstruction, diffuse optical signals and MRI images were both used as the input to the neural network, which simultaneously recovered the concentrations of oxyhemoglobin, deoxyhemoglobin, and water via end-to-end training. This new approach can not only be used for MRI-guided NIRST image reconstruction but can also be adapted to other real-time multimodality image reconstructions.
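For readers who want a sense of what such a dual-input, end-to-end network can look like, here is a hypothetical PyTorch sketch in the spirit of the approach described above: an MRI image and a vector of diffuse optical measurements are fused and decoded into three chromophore maps (oxyhemoglobin, deoxyhemoglobin, and water) in a single forward pass. The layer choices, sizes, and the name ZNetSketch are assumptions made for illustration; they are not the published Z-Net architecture.

# Hypothetical sketch of a dual-input network: an MRI image plus the
# diffuse optical measurement vector go in, and three chromophore maps
# come out in one forward pass. Architecture details are assumptions.
import torch
import torch.nn as nn

class ZNetSketch(nn.Module):
    def __init__(self, n_meas=256, img_size=64):
        super().__init__()
        self.img_size = img_size
        # Convolutional branch for the MRI image (structural prior).
        self.mri_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        # Fully connected branch lifting the optical measurements onto the
        # image grid so the two inputs can be concatenated channel-wise.
        self.optical_branch = nn.Sequential(
            nn.Linear(n_meas, img_size * img_size), nn.ReLU(),
        )
        # Decoder producing three chromophore maps: HbO2, Hb, and water.
        self.decoder = nn.Sequential(
            nn.Conv2d(17, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 1),
        )

    def forward(self, mri, optical):
        m = self.mri_branch(mri)                          # (B, 16, H, W)
        o = self.optical_branch(optical)                  # (B, H*W)
        o = o.view(-1, 1, self.img_size, self.img_size)   # (B, 1, H, W)
        return self.decoder(torch.cat([m, o], dim=1))     # (B, 3, H, W)

# End-to-end training would pair simulated measurements and MRI slices
# (e.g., generated with NIRFAST) against known chromophore maps, e.g.:
# loss = nn.functional.mse_loss(model(mri, optical), true_chromophores)
model = ZNetSketch()
mri = torch.randn(4, 1, 64, 64)     # simulated MRI slices
optical = torch.randn(4, 256)       # simulated boundary measurements
maps = model(mri, optical)          # (4, 3, 64, 64) chromophore images

The design point is the one Paulsen highlights: once trained, reconstruction is a single network inference rather than an iterative, model-based fit, which is what makes the approach attractive for real-time multimodality imaging.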

Do you see uses for this system outside breast exams, and what are the prospects for commercialization?

Besides the breast, the imaging system and approach might be used in the brain, as well as for head and neck cancers, thyroid cancer, and soft tissue sarcoma, for pathology detection and characterization and for therapy monitoring, among other indications. The deep learning-based image reconstruction described in the paper is applicable to image formation problems in which biophysical mathematical models are used as the framework for generating images of tissue property parameters from sensor data acquired by the imaging instrumentation. Thus, before commercialization of the entire MRI-compatible NIRST imaging system, the new image reconstruction approach might be commercialized first by adapting it to existing multimodality imaging systems.

Published: April 2022
Tags: 3 Questions, deep learning, MRI, NIRST, diode lasers, phantoms, hemoglobin, breast cancer, Keith Paulsen
