
Histology Breaks a Speed Barrier

An integrated, automated microtome and microscopy platform sections and images tissue samples thousands of times faster than a human.

CODY DANIEL, 3SCAN

High-bandwidth sensing has brought previously unimagined data volumes into everyday tools. Tightly coupled with this are powerful modern computational approaches, including machine learning and scalable computation, that enable intelligent data interpretation with virtually no human in the loop. Today’s optical instruments, with their integrated software and compute layers, are outpacing human bandwidth and forcing their designers and end users to operate more like orchestrators of complex, interlocked components.

These methods are being applied to bio-instrumentation as traditional approaches fail to explain the complex mechanisms behind emergent biological states. Coupling high-bandwidth computational systems directly to biological substrates, outside the direct control and bias of human designers, is yielding novel outcomes and discoveries.

Modern biophotonic instruments already generate more data than the genomics work that kicked off the era of big data. Light-sheet, fMRI, µCT, OCT and even conventional microscopy platforms produce datasets that regularly surpass 10 to 100 GB, with some of the latest tools reaching the terabyte domain. Pure photonics instruments couple ever-faster silicon imaging to ever-faster silicon processors, resulting in massive data rates.

There is immediate value in gathering large datasets and presenting them to researchers and clinicians. Pathology presents an apt industry case study. Microscopic tissue analysis is straightforward and robust. However, the time-intensive workflow of manual sectioning and microscopic examination limits the volume of tissue that can be examined. It also reduces the inherently 3D context of tissue to a narrow 2D perspective. A biopsy measuring a few millimeters across is expected to be well characterized by a handful of sections, each just a few microns thick.
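To make that sampling gap concrete, a quick back-of-envelope calculation (with illustrative numbers, not figures from any specific protocol) shows how little of a millimeter-thick sample conventional sectioning actually examines:

```python
# Back-of-envelope: fraction of a needle-core biopsy examined by
# conventional histology. Numbers are illustrative assumptions.
core_thickness_um = 1_000    # ~1-mm-thick needle core (per the text)
section_thickness_um = 5     # typical paraffin section
sections_examined = 6        # a handful of levels per block

sampled_um = sections_examined * section_thickness_um
fraction = sampled_um / core_thickness_um
print(f"Examined {sampled_um} of {core_thickness_um} um ({fraction:.1%})")
# -> Examined 30 of 1000 um (3.0%)
```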

There are very real consequences to this limited sampling. Take the example of prostate cancer biopsies. Present methodology dictates using a needle-core biopsy to extract a small, millimeter-thick sample of the prostate. After several sections are stained and mounted on glass for light microscopy, a pathologist compares the sample against his or her own intuition and experience to assign a Gleason grade, a rubric for stratifying prostate cancer severity. With a Gleason grade of 3+3, the patient receives “conservative treatment,” which typically entails frequent observation and sampling. Any grade above 3+3 means the patient will likely receive more aggressive therapy, such as prostatectomy (i.e., surgical removal of prostate tissue) and/or chemoradiation.

Researchers from the University of Washington recently published an article highlighting exactly these problems. To address the sampling issue, Jonathan Liu and colleagues examined an entire prostate needle-core biopsy using light sheet fluorescence microscopy (LSFM), a technique that combines tissue clearing, targeted stains and 3D reconstruction to produce a volumetric representation of the entire sample.

A group of pathologists was then asked to evaluate these samples, using both traditional glass slides and the LSFM-derived 3D datasets. Gleason grades varied between 3+3 and 3+4 depending on the 2D section, and pathologists agreed on a given section only 50 percent of the time. When presented with the LSFM-derived 3D datasets, however, the pathologists were unanimous in grading the sample 3+3; features that had appeared to indicate more severe disease were resolved as erroneous perceptions arising from the limited 2D viewpoint.

Traditional sampling bias not only limits the ability to see everywhere in a specimen, it also strips away the important spatial context of what is fundamentally a 3D biological environment by forcing a narrow 2D picture of the tissue under examination. This can needlessly result in permanent, life-altering changes for patients.

Quality assessment in the clinical setting, via concordance-rate studies, also routinely demonstrates interpretation disagreement as high as 20 percent among pathologists viewing the same slides. Routine reassessments of case slides in pathology labs show similar results. In addition, clinics face growing demand for ever-lower error rates and faster turnaround times.

Altogether, there is a significant need for a method offering comprehensive, quantitative imaging across large volumes of tissue. Further, modern data analysis is opening opportunities for large-scale image processing and statistical modeling of complex problems.

3Scan develops instruments to further the ability to answer complex research questions. Its first instrument is the knife-edge scanning microscope (KESM). At its core, a KESM is an integrated, automated microtome and microscopy platform able to section and image tissue samples thousands of times faster than a human.

To accomplish this, several steps of the traditional histopathology workflow are rearranged. First, a sample is fixed and then immediately whole-mount stained, providing contrast throughout the entire specimen. It is then embedded in paraffin or resin and loaded onto the KESM, where a diamond-knife-equipped microtome performs a serial-sectioning operation. A high-precision, 3-axis Aerotech ANT130L linear motor stage moves the sample against the knife, removing sections as thin as 1 µm in resin and 5 µm in paraffin.

The posterior aspect of the knife is coupled to an LED light source, which transmits light through the bevel of the blade and illuminates each section as it is cut. Pointed at the knife edge is a custom microscope optics train: a contract-manufactured Leica 10×, 0.5-NA, diffraction-limited assembly with a 5-mm field of view, oriented perpendicular to the knife bevel. Behind this sits a Teledyne Dalsa Piranha XL color line-scan camera, with a 70-kHz line rate, 16k × 12 resolution and 5-µm pixels, read out over a high-speed CLHS interface.
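Taking those camera specifications at face value, a rough estimate (assuming 8-bit RGB readout, which the article does not specify) shows why synchronization and storage dominate the system design:

```python
# Rough peak data-rate estimate from the stated camera specs.
# Assumes 3 color channels at 8 bits each; the actual readout bit
# depth is not stated in the article.
line_rate_hz = 70_000        # Piranha XL line rate
pixels_per_line = 16_384     # 16k-pixel line sensor
bytes_per_pixel = 3          # RGB, 8 bits per channel (assumption)

rate = line_rate_hz * pixels_per_line * bytes_per_pixel
print(f"Peak data rate: {rate / 1e9:.1f} GB/s")  # -> ~3.4 GB/s
```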

Integrating a high-speed camera with a high-precision stage enables the generation of sequential images that are coregistered to submicron levels, greatly easing data handling by providing an absolute reference frame. Tight synchronization and high-throughput data storage are the core technical challenges of the system, as submicron coregistration of teravoxels across tens of thousands of sections is necessary for the generated imagery to be accurate.
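Because the stage supplies an absolute reference frame, placing each scanned line in a global voxel grid can reduce to scaling encoder positions. A minimal sketch, with a hypothetical pixel pitch and function (not the instrument's actual calibration):

```python
# Sketch of stage-based coregistration: each scanned line is placed
# in an absolute voxel frame from the stage encoder readout.
# Calibration values are illustrative, not the instrument's.
PIXEL_UM = 0.5        # lateral sampling pitch (hypothetical)

def line_to_voxel(stage_x_um: float, stage_y_um: float, section_index: int):
    """Map a stage position and section number to integer voxel indices."""
    ix = round(stage_x_um / PIXEL_UM)
    iy = round(stage_y_um / PIXEL_UM)
    iz = section_index    # one section per z-slice
    return ix, iy, iz
```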


Once an entire sample is sectioned and imaged, generating as many as 50,000 sections and as much as 10 TB of imagery in a few hours, the data is ready for processing. Images are stitched into coherent faces, artifacts are removed, and the entire stack is sent to various data pipelines. Sometimes the data is rendered into movies, providing a rapid fly-through of the volume. Other times it is rendered into meshes, which are 3D models of tissue features. Still other pipelines generate a skeleton of a tissue-specific network, allowing quantitative metrics to be derived about a feature. In the future, more statistical models and interpretations will be available to extract quantitative metrics from a given dataset.
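3Scan's pipeline internals are not described here, but the skeletonization step can be sketched with off-the-shelf tools: threshold a grayscale vascular volume, then thin the foreground to a one-voxel-wide network. This illustrates the technique, not the production implementation:

```python
# Illustrative skeletonization stage for a vascular volume using
# scikit-image; a sketch of the technique, not 3Scan's pipeline.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def skeletonize_vessels(volume: np.ndarray) -> np.ndarray:
    """Binarize a grayscale stack and thin vessels to a 1-voxel skeleton."""
    binary = volume > threshold_otsu(volume)
    return skeletonize(binary)  # handles 2D and 3D arrays in recent releases
```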

Building these software tools is challenging. The scale of the data is unprecedented in medical imaging. A common stress test in that world is an MRI dataset measuring 50 GB; these new techniques deal with datasets measuring 5 TB, 100× larger. Challenges in manipulating, analyzing and even storing the data are ever-present. All storage and computation are executed using Amazon’s cloud computation and data-handling services.
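The article says only that Amazon's cloud is used. One common pattern for teravoxel imagery (an assumption here, not 3Scan's stated stack) is to store the volume as a chunked array in S3, so analysis jobs can read sub-volumes without pulling the whole dataset:

```python
# One plausible storage layout (an assumption, not 3Scan's published
# stack): a chunked zarr array backed by S3, readable in sub-volumes.
import s3fs
import zarr

s3 = s3fs.S3FileSystem()
store = s3fs.S3Map(root="my-bucket/kesm-volume.zarr", s3=s3)  # hypothetical bucket
volume = zarr.open(
    store, mode="w",
    shape=(50_000, 16_384, 16_384),  # sections x rows x columns
    chunks=(64, 1_024, 1_024),       # chunks sized for ranged reads
    dtype="uint8",
)
```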

Beyond the challenges of handling large datasets is the need to present the data as meaningful information. As researchers come to require ever more massive datasets to draw conclusions, it becomes impossible for any one person to examine all of the data and derive insights.

A 3D microscopic dataset may contain 5000 layers, with each layer requiring a dozen high-resolution monitors to display. Drawing insights from such a massive set of images and correlating it with other image sets is an enormous undertaking. 3Scan is therefore also applying machine learning techniques to such image sets, finding particular regions of interest and highlighting them to investigators based on metrics such as vascular network characteristics, cell density or sparse features.
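The article names the metrics but not the method. As one sketch of metric-driven triage, image tiles can be scored by nucleus density and the densest tiles surfaced for human review; the tile size and blob parameters below are illustrative assumptions:

```python
# Sketch of metric-driven triage: score tiles by nucleus density and
# surface the densest ones to an investigator. Parameters are
# illustrative; the article does not specify a method.
import numpy as np
from skimage.feature import blob_log

def rank_tiles(image: np.ndarray, tile: int = 512, top_k: int = 10):
    """Return (row, col) origins of the top_k most nucleus-dense tiles."""
    scores = []
    for r in range(0, image.shape[0] - tile + 1, tile):
        for c in range(0, image.shape[1] - tile + 1, tile):
            blobs = blob_log(image[r:r + tile, c:c + tile],
                             min_sigma=2, max_sigma=8, threshold=0.05)
            scores.append((len(blobs), (r, c)))
    return [pos for _, pos in sorted(scores, reverse=True)[:top_k]]
```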

Altogether, the integrated workflow provides an avenue for interrogating biological substrates from macro-scale organs down to micron-scale cells. For example, 3Scan is currently developing algorithms that highlight the network of blood vessels within a specimen. This workflow uses the KESM and automated algorithmic image processing tools to describe the graph features of the vasculature within a sample. Using these, researchers investigating angiogenesis inhibitors can compare network topologies through key metrics such as branch frequency, vessel size, terminations and more (Figure 1).

Figure 1.
Maximum intensity projection of the vasculature within a whole murine brain. Courtesy of 3Scan.
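
Those graph metrics map directly onto standard network analysis. A minimal sketch with networkx, assuming the vessel skeleton has already been converted to a graph whose edges carry a radius attribute (the attribute name and graph construction are hypothetical):

```python
# Sketch of the vascular metrics named above, computed on a skeleton
# graph. Assumes an undirected graph whose edges carry a 'radius_um'
# attribute; building that graph from the skeleton is out of scope.
import networkx as nx

def vascular_metrics(g: nx.Graph) -> dict:
    degrees = dict(g.degree())
    radii = [a["radius_um"] for _, _, a in g.edges(data=True)]
    return {
        "branch_points": sum(1 for d in degrees.values() if d >= 3),
        "terminations": sum(1 for d in degrees.values() if d == 1),
        "mean_vessel_radius_um": sum(radii) / len(radii),
    }
```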

Another application uses traditional hematoxylin and eosin (H&E) staining to examine cell density and shape. These metrics can identify potentially cancerous cells or detect anomalous tissue structures. As the understanding of diseases such as cancer evolves, the subtle impacts of varied factors become apparent. Current research seeks to understand the tumor microenvironment, acknowledging that every tumor is a heterogeneous ensemble of mutations and cell types. Finding which subtype has breached a blood vessel can help determine which specific cancer type has metastasized and inform precision medicine approaches to treating that cancer (Figure 2).

Figure 2.
Focus of lymphovascular invasion within a patient-derived xenograft model of adenocarcinoma. This feature was identified 500 sections deep within the tissue specimen, suggesting it would otherwise have gone undetected by traditional histopathology methods. Courtesy of 3Scan.
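
For the H&E case, one standard recipe (an illustration of the idea, not 3Scan's published pipeline) is to unmix the hematoxylin channel, segment nuclei and summarize their density and shape:

```python
# Illustrative H&E analysis: unmix the nuclear (hematoxylin) stain,
# segment nuclei, and report density and shape statistics.
# Thresholds and pixel size are generic assumptions.
import numpy as np
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def nucleus_stats(rgb: np.ndarray, um_per_px: float = 0.5) -> dict:
    hematoxylin = rgb2hed(rgb)[..., 0]          # nuclear stain channel
    nuclei = label(hematoxylin > threshold_otsu(hematoxylin))
    props = regionprops(nuclei)
    area_mm2 = rgb.shape[0] * rgb.shape[1] * (um_per_px / 1_000) ** 2
    return {
        "cells_per_mm2": len(props) / area_mm2,
        "mean_eccentricity": float(np.mean([p.eccentricity for p in props])),
    }
```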

Beyond these cases lie even broader questions of anatomy and tissue atlases. Regenerative medicine is intent on growing or manufacturing organs viable for transplant. However, researchers are rapidly running into the limits of our gross understanding of tissue and organ structures. Almost all atlases are cartoon illustrations derived from a few microscopic sections and someone’s intuition. When it comes to creating a controlled specification of organ shape, size, tissue structure and cell-type distribution, there are simply few (if any) models available. Without a model, manufacturing is limited, to say nothing of quality control and diagnostics.

Right now, the first complete mesh models of human lungs are being generated. These models will serve as the basis for 3D prints of organs intended for recellularization and eventual transplantation. The meshes are generated through a mixture of methods and modalities, highlighting the combined instrumentation approaches necessary for tackling problems at this scale. Microscopic KESM data of the cartilage rings around the trachea, for example, are combined with bronchial image data from the Visible Human Project to create a unified model of the entire organ (Figure 3).

Figure 3.
Mesh model of human bronchial tree and pulmonary vasculature. Courtesy of 3Scan.

Central to solving these problems is the use of high-bandwidth methods capable of meeting the data challenge. Simply put, achieving cellular-level resolution across organ-scale structures requires massive sensing on reasonable timescales, necessitating high-bandwidth imaging, storage, interpretation and display.

Meet the author

Cody Daniel is the director of research and co-founder of 3Scan. He works at the cross section of motion and matter, focused primarily on precision machine and instrument design; email: [email protected].

Published: January 2018
