Heart imaging software differs in performance
Coronary artery disease is the leading cause of death in the US. Early and accurate detection and monitoring are important to help reduce risk. Cardiac single-photon emission computed tomography (SPECT) is one of the most commonly used detection methods because it can image the regional blood flow and function of the heart muscle at rest and under stress. Because visual interpretation of SPECT images is subjective, time-consuming and dependent on the observer’s expertise, automated software packages have been created to provide more consistent, efficient and accurate results.
Dr. Mathews B. Fish of Sacred Heart Medical Center in Eugene, Ore., and colleagues at Cedars-Sinai Medical Center in Los Angeles and at the University of Oregon in Eugene found that these software packages are not consistent with one another and can yield very different results, which makes it difficult to compare images over time in the same patient.
As reported in the January/February 2008 issue of the Journal of Nuclear Cardiology, the researchers tested 328 patients who had been referred for rest/stress cardiac SPECT imaging for coronary artery disease. Of these, 188 patients with a high risk for heart disease underwent coronary angiography after SPECT imaging. The other 140 patients were considered “low likelihood” for heart disease (less than a 5 percent chance) and did not undergo angiography.
Patients were imaged with SPECT dual-detector scintillation cameras from Philips Medical Systems of Bothell, Wash., both at rest and during treadmill testing or low-level exercise. Quantitative analysis was performed with three popular cardiac software tools: 4D-MSPECT from the University of Michigan Medical Center in Ann Arbor, Emory Cardiac Toolbox from Emory University Medical Center in Atlanta and AutoQuant from Cedars-Sinai Medical Center.
The researchers found significant differences in normalcy, sensitivity, specificity and accuracy among the three software packages. Normalcy was higher for 4D-MSPECT (94 percent) and the Cedars software (91 percent) than for the Emory software (77 percent). Sensitivity was higher for the Cedars software (87 percent) than for 4D-MSPECT (80 percent), and specificity was higher for the Cedars software (71 percent) than for the Emory software (49 percent). Overall accuracy was higher for the Cedars software than for both the 4D-MSPECT and Emory tools. Correlation between the Emory software and 4D-MSPECT was 0.68; between the Cedars software and 4D-MSPECT, it was 0.84.
Based on these differences, the researchers believe that clinicians should consider using the same software for the quantification of coronary artery problems when comparing images over time in the same patient.