SEATTLE, Oct. 19, 2015 — A consumer-grade multispectral camera could help users find the best avocado at the grocery store or allow video games to distinguish between players by the features of their hands.
Under development by the University of Washington and Microsoft Research, the HyperCam uses both visible and near-infrared light to peer beneath the surface and reveal unseen details.
Multispectral and hyperspectral cameras used in industrial applications typically cost several thousand to tens of thousands of dollars, but HyperCam costs about $800 to build. As an add-on to a smartphone camera, the cost could drop as low as $50, the developers said.
HyperCam is a low-cost multispectral camera developed by the University of Washington and Microsoft Research that reveals details that are difficult or impossible to see with the naked eye. Courtesy of the University of Washington.
HyperCam illuminates a scene with 17 wavelengths. Software analyzes the resulting images to present the user with the most useful information.
"It mines all the different possible images, and compares it to what a normal camera or the human eye will see and tries to figure out what scenes look most different," said Mayank Goel, a University of Washington doctoral student and Microsoft Research graduate fellow.
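The approach Goel describes can be sketched as a ranking problem: score each spectral band by how much it differs from the baseline visible-light image, then surface the most "novel" bands to the user. The sketch below is a simplified illustration of that idea using a mean-absolute-difference score; the function name, the scoring metric, and the toy data are assumptions for illustration, not the actual HyperCam software.

```python
import numpy as np

def rank_bands_by_novelty(band_images, baseline):
    """Score each spectral band by how different it looks from the
    baseline (visible-light) image, and return band indices sorted
    from most to least different. A simplified sketch of the idea
    described in the article, not HyperCam's actual algorithm."""
    scores = []
    base = (baseline - baseline.mean()) / (baseline.std() + 1e-8)
    for band in band_images:
        # Normalize each band so the comparison reflects structure,
        # not overall brightness differences between wavelengths.
        b = (band - band.mean()) / (band.std() + 1e-8)
        scores.append(np.mean(np.abs(b - base)))
    return np.argsort(scores)[::-1]  # most "novel" band first

# Toy demo: three synthetic 4x4 "bands", one identical to the baseline.
baseline = np.linspace(0.0, 1.0, 16).reshape(4, 4)
bands = np.array([
    baseline,                          # band 0: same as baseline
    baseline + np.random.rand(4, 4),   # band 1: noisy variation
    baseline[::-1],                    # band 2: structurally flipped
])
order = rank_bands_by_novelty(bands, baseline)
print(order)  # band 0 ranks last, since it matches the baseline exactly
```

A real system would use a perceptual or structural difference measure rather than raw pixel differences, but the ranking step is the same in spirit: the camera captures many candidate views and the software picks the few worth showing.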
Compared to an image taken with a normal camera (top), a HyperCam image (bottom) reveals detailed vein and skin texture patterns unique to each individual. Courtesy of the University of Washington.
When HyperCam captured images of a person's hand, for instance, it revealed detailed vein and skin texture patterns unique to that individual. This could aid in everything from gesture recognition to biometrics to distinguishing between two different people playing the same video game.
To test HyperCam's usefulness as a biometric tool, the developers imaged the hands of 25 different users, and the system was able to differentiate between them with 99 percent accuracy.
In another test, the team took multispectral images of 10 different fruits, from strawberries to mangoes to avocados, over the course of a week. The HyperCam images predicted the relative ripeness of the fruits with 94 percent accuracy, compared with only 62 percent for a typical camera.
"It's not there yet, but the way this hardware was built you can probably imagine putting it in a mobile phone," said University of Washington professor Shwetak Patel. "With this kind of camera, you could go to the grocery store and know what produce to pick by looking underneath the skin and seeing if there's anything wrong inside. It's like having a food safety app in your pocket."
One challenge is that the technology doesn't work particularly well in bright light, Goel said. Next research steps will include addressing that problem and making the camera small enough to be incorporated into mobile devices, he said.
The team described its work at the UbiComp 2015 conference last month in Osaka, Japan.