
Mass-Market Imaging Systems Cut Time, Cost, Size

Marie Freebody, Contributing Editor, [email protected]

“Smaller,” “cheaper” and “faster” seem to be the current buzzwords in the imaging industry – and some of the imaging components on the horizon may allow us to enjoy industrial capabilities in our everyday lives.

Liquid lenses, 3-D mapping and shrinking camera size are the three key trends in imaging this year. All three areas – at their various stages of maturity – are intrinsically linked, each serving the others in terms of performance, size and cost. And, as with many fields in photonics, as research progresses in one sector, the others can also benefit.

Liquid lenses shrink imaging systems

A liquid lens is typically composed of one or more liquids, giving it remarkable tunability and flexibility. Scientists have long taken inspiration from nature for new design ideas, but John Rogers at the University of Illinois at Urbana-Champaign and researchers at Northwestern University in Evanston, Ill., have gone beyond the capabilities of nature with their “eyeball” camera. The curvilinear camera combines the advantages of the human eye with those of an expensive single-lens reflex camera and zoom lens.


The “eyeball” camera, built by John Rogers at the University of Illinois at Urbana-Champaign and researchers at Northwestern University, goes beyond nature. Courtesy of John Rogers.


Whereas earlier eyeball cameras had rigid detectors, this one places its simple lens and photodetector on flexible substrates and uses a hydraulic system to change the substrates’ shape, enabling variable zoom. “Our work suggests that the ‘flatland’ world of existing digital imagers and CCD chips may no longer represent a design constraint,” Rogers said.

The camera, which is described in the Proceedings of the National Academy of Sciences (doi:10.1073/pnas.1015440108), uses a tunable liquid lens to provide zoom magnification. The lens has an ultrasimple planoconvex design with a radius of curvature that is adjusted hydraulically. The curved detector is critically important to enabling high-performance imaging with such a simple tunable lens.
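
For a rough sense of how that hydraulic adjustment translates into zoom, the thin-lens approximation for a planoconvex element gives f = R/(n − 1). The sketch below uses assumed values for the lens fluid’s refractive index and for the radii of curvature; none are taken from the paper.

```python
# Sketch: how adjusting the radius of curvature of a planoconvex
# liquid lens tunes its focal length (thin-lens approximation).
# The refractive index and radii below are illustrative assumptions,
# not values from the eyeball-camera paper.

def planoconvex_focal_length(radius_mm: float, n_lens: float = 1.5) -> float:
    """Thin-lens focal length of a planoconvex lens: f = R / (n - 1)."""
    return radius_mm / (n_lens - 1.0)

# Hydraulically flattening the surface (larger R) lengthens the focal
# distance, which is the basis of the variable zoom.
for r in (5.0, 10.0, 20.0):  # radii of curvature in mm (assumed)
    print(f"R = {r:5.1f} mm  ->  f = {planoconvex_focal_length(r):5.1f} mm")
```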


A 10-mm-aperture liquid lens with pinned contact lines focuses a laser beam. Liquid lenses can change their shape very quickly, enabling focal-length scans in excess of 30 Hz. Here the 1-methylnaphthalene lens and the surrounding water have fluorescent dyes for visualization. Courtesy of B.A. Malouin and A.H. Hirsa.


“The idea is that curvature in the photodetector array opens up a new engineering design space for digital cameras,” Rogers said. “The result can be a dramatic reduction in the cost, size, weight and complexity of imaging lenses – which often dominate the size, cost and weight of a high-end camera.”

The group has launched a startup company, mc10 in Cambridge, Mass., through which it is pursuing commercialization of stretchable optoelectronics, with hemispherical cameras as one product area. 

“We see the most promising, immediate applications in night vision, where the lenses are particularly difficult, and endoscopes, where size is critically important,” Rogers said. “The hope is to establish the technology in these areas first, and then to move it into broader sectors of commercial use.”

The goal is to provide “studio quality” imaging in small, low-cost devices that could, for example, be incorporated into a cell phone or an inexpensive digital camera.

Due to their inherent flexibility, liquid lenses offer capabilities that have yet to be exploited. Exploring some of these capabilities with the ultimate hope of commercializing them is the goal of a group at Rensselaer Polytechnic Institute (RPI) in Troy, N.Y.

Last year, the team developed liquid pistons: oscillating droplets of ferrofluid that precisely displace a surrounding liquid (typically water) containing an embedded lens liquid (in this case, 1-methylnaphthalene). The work was covered on Photonics.com (https://www.photonics.com/wa45462).

The lens is composed of a pair of droplets surrounded by another liquid and is driven by a set of nanoparticle-infused ferrofluid droplets that vibrate at high frequency, shifting the focal distance of the lens when an electromagnetic field is applied. Such lenses may provide a lighter-weight alternative to conventional camera lenses and drivers, and perhaps could serve as replacement eye lenses that can be fine-tuned using magnets.

An important impact that liquid lenses will have on industry is the elimination of lens surface manufacturing, according to Amir H. Hirsa, a professor in RPI’s Department of Mechanical, Aerospace and Nuclear Engineering.


A sighted wheelchair incorporates 3-D mapping to allow a visually impaired wheelchair driver to “feel” and effectively “see” obstacles and navigate past them. Courtesy of Kalevi Hyyppä, Luleå University of Technology.


“Ultimately, we hope that our approach to liquid lenses and similar ones will provide adaptability in a cost-effective, lightweight package,” he said. “For example, we envision in situ assembly of lenses (self-assembly) for integrated devices that utilize lens arrays.”

Some researchers are combining liquid lenses with other relatively new technologies. Take, for instance, Jannick Rolland, the Brian J. Thompson (endowed) Professor of Optical Engineering at the University of Rochester in New York and invited professor at the Institute of Optics in Paris.

Rolland has produced some never-before-seen images by incorporating a liquid lens into optical coherence tomography (OCT) technology. The resulting handheld device can penetrate 1 mm into the skin to provide 3-D images of suspicious moles with the ultimate goal of eliminating the need for magnetic resonance imaging or biopsy.


Left: This handheld device incorporates a liquid lens with OCT technology to penetrate 1 mm deep into the skin and provides 3-D images of suspicious moles in real time. Right: A fingertip can be viewed 1 mm under the surface of the skin at axial and lateral resolution of 2 µm. Courtesy of Jannick Rolland, University of Rochester.


“Fifteen percent of visits to primary care doctors are for the purpose of evaluating skin problems,” Rolland said. “Assessment can be inaccurate, and microscopic evaluation in real time has the potential to significantly improve outcomes.”

The idea was to place a lens based on immiscible fluids, produced by Varioptic SA of Lyon, France, inside an otherwise conventional microscope. The custom microscope was then manufactured by General Optics Asia in Pondicherry, India.

“This was the key to finally identifying and implementing a high-impact application for liquid lenses: having the ability to design a custom microscope that could accommodate this new technology internally, not externally,” Rolland said. “This revolutionary step took a technology with very low intrinsic resolution and suddenly placed it as a key component in solving long-standing challenges in high-resolution imaging in both 2-D and 3-D within the medical and material industries.”

So far, the group has demonstrated in vivo imaging in skin at micron-scale resolution in a potential 8-mm³ volume, and it has recently adapted its technology to imaging the cornea.

The primary challenge – a common one – is bringing down the cost of the entire OCT system; in this case, that means lowering the cost of the broadband laser. Rolland is in discussions with several companies – including Exalos, Genia Photonics, Micron Optics, NKT Photonics, Superlum, Toptica Photonics and Thorlabs – that are now working to meet the price-point target and bring the dramatic gains Rolland has seen in the laboratory to the clinic.
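
The broadband source dominates the cost for a physical reason: for a Gaussian spectrum, OCT axial resolution scales inversely with source bandwidth, so micron-scale sectioning demands a very wide spectrum. The sketch below illustrates the standard relationship using assumed wavelengths; the figures are not Rolland’s actual system parameters.

```python
import math

# Sketch: why OCT resolution hinges on a broadband source. For a
# Gaussian spectrum, axial resolution is dz = (2*ln2/pi) * lam0**2 / dlam.
# The center wavelength and bandwidths below are illustrative
# assumptions, not parameters of Rolland's handheld device.

def oct_axial_resolution_um(center_nm: float, bandwidth_nm: float) -> float:
    dz_nm = (2.0 * math.log(2.0) / math.pi) * center_nm**2 / bandwidth_nm
    return dz_nm / 1000.0

# Micron-scale axial resolution demands well over 100 nm of bandwidth,
# which is what makes the broadband laser the costly component.
for bw in (20.0, 70.0, 140.0):
    print(f"bandwidth {bw:5.1f} nm -> axial resolution "
          f"{oct_axial_resolution_um(800.0, bw):.1f} um")
```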

Apart from examining the skin, the device also can be applied to optical inspection of materials. Here, liquid lenses can help to reduce inspection time, thanks to their quick operation.

Rolland also has ventured into the use of liquid lenses in future 3-D optical head wear for virtual and augmented reality.

“A progression of prototypes developed over the past decade and a half is all converging to what could become our head wear of the future,” she said. “The liquid lens may play a key role in our future head wear. Here, due to weight constraints, it will need to operate stand-alone, and so limiting diameter and speed both are current impediments to adoption. If the size becomes larger than 3 mm in diameter and it remains high speed, it will enable placing 3-D information anywhere in space, as opposed to currently in one location ahead of a user.”

3-D mapping on the way

3-D mapping is already used by the military for scouting terrain and in numerous aerial mapping programs, and similar 3-D imaging technology has been adopted by the automotive industry to improve driver safety.

Industrial plants employ 3-D imagers on inspection lines, and 3-D vision and mapping are used even in the surgical suite by robots such as the da Vinci systems developed by Intuitive Surgical Inc. of Sunnyvale, Calif. Such systems enable surgeons to perform major procedures extremely precisely and less invasively.


The trend is to build smaller, less costly systems that will bring 3-D mapping into our everyday lives via products such as service robots that can interact with us in play, facilitate routine tasks in hospitals, or even aid the elderly in their homes.

One example is Care-O-bot 3, the robotic home assistant developed by scientists at the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart, Germany.


The robotic home assistant Care-O-bot 3 from Fraunhofer IPA can help in a domestic environment. Images courtesy of photographer Bernd Müller.


“We would like to see the robot assisting older people in their households to increase their independence while allowing them to stay in their homes up to a higher age,” said Jan Fischer of Fraunhofer IPA. “Besides accomplishing daily tasks like preparing and cleaning the table, the robot should also be able to learn new tasks from instruction of its user or some other person.”

Three-dimensional modeling of the environment is crucial for mobile robots if they are to navigate and to interact safely with objects and humans. Recently developed 2.5-D cameras such as the Microsoft Kinect provide point-cloud data of the robot’s vicinity, which can be used to build point maps of the environment while the robot is moving.
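
To make “point-cloud data” concrete: each depth pixel can be back-projected through a pinhole camera model to a 3-D point. A minimal sketch, with placeholder intrinsics rather than a calibrated Kinect’s values:

```python
import numpy as np

# Sketch: turning a 2.5-D depth image (e.g., from a Kinect) into the
# point cloud a robot uses for mapping. The pinhole intrinsics below
# (fx, fy, cx, cy) are illustrative placeholders, not calibrated values.

def depth_to_point_cloud(depth_m: np.ndarray,
                         fx=525.0, fy=525.0, cx=319.5, cy=239.5) -> np.ndarray:
    """Back-project each pixel (u, v, depth) to a 3-D point (x, y, z)."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading

# Example with a synthetic 480 x 640 depth frame (2 m everywhere):
cloud = depth_to_point_cloud(np.full((480, 640), 2.0))
print(cloud.shape)  # (307200, 3)
```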

“Typical stereovision algorithms running in reasonable time are not able to create 3-D information when facing objects without texture,” Fischer said. “By integrating 2.5-D camera devices, these 3-D gaps are filled with meaningful 3-D data and further enhance the robustness of object detection.”

A major challenge on the hardware side is the limited accuracy of depth sensors, along with their high price.

“The introduction of Microsoft’s Kinect sensor presents a significant step toward low-cost 3-D perception,” Fischer said. “However, in terms of precision, it is still not able to compete with standard stereo camera systems.

“Imaging is and will remain a vital component within the area of robotics. The trend definitely moves toward low-cost RGB-D sensors that provide both 2.5-D range data and color information for each pixel.

“With the development of feasible algorithms, their application is quite manifold; e.g., object detection, localization, 3-D mapping, gesture recognition, human motion recognition and many more.”

3-D mapping enables the blind to “see”

An electric wheelchair that uses a laser scanner to create a 3-D map of its surroundings and transfers the information to a haptic robot could help blind wheelchair users navigate the world more easily.

Developed at Luleå University of Technology (LTU) in Sweden (https://www.photonics.com/wa47162), the wheelchair enables a visually impaired driver to maneuver around obstacles, and its developers believe that it can be manufactured for consumers in about five years.

“The wheelchair has a joystick for steering and a haptic robot that acts as a virtual white cane,” said professor Kalevi Hyyppä of LTU. “With the help of a laser scanner, a simplified 3-D map is created of the wheelchair surroundings.”

The laser scanner uses a time-of-flight technique to produce a 3-D map that is then transferred to the haptic robot so that the user can “feel” and effectively “see” obstacles such as open doors or oncoming people, and navigate past them.
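
The time-of-flight principle itself is simple: range follows from half the round-trip time of a light pulse. A minimal sketch, with made-up round-trip times:

```python
# Sketch: the time-of-flight principle behind the wheelchair's laser
# scanner. Each range sample is d = c * t / 2, since the pulse travels
# to the obstacle and back. The round-trip times are made-up examples.

C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2.0

# A pulse returning after ~13.3 ns corresponds to an obstacle ~2 m away.
for t in (6.7e-9, 13.3e-9, 33.4e-9):
    print(f"round trip {t * 1e9:5.1f} ns -> {tof_distance_m(t):.2f} m")
```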

The group hopes to miniaturize the sensor to enable haptic interfaces to be worn. But this is no easy task, Hyyppä admitted.

“The laser beam that sweeps in front of the wheelchair hits only objects which are a certain height. It does not have the capacity to see things that are higher or lower than that height,” he said. “Present 3-D cameras do not have enough performance concerning signal-to-noise ratio, range and field of view.”

In similar research, engineers at the University of Southern California (USC), Los Angeles, have developed software that can help the visually impaired navigate complex environments. The user wears a head-mounted camera that is connected to a PC, which uses simultaneous localization and mapping software to build maps of the environment and to pick out a safe path through it.

As reported in the August 2011 issue of Photonics Spectra (pp. 25-26), the route is conveyed to the user through a vibrating guide vest. Research has since moved on. Now, instead of a camera mounted on the head, a pair of glasses can be used.


The visually impaired could benefit from this image processing system developed by engineers at the University of Southern California. A PC connected to two cameras mounted on a pair of glasses determines the best route and transmits this information to the user via a vibrating guide vest. Courtesy of the University of Southern California.

“The system is composed of a set of two cameras attached to a person’s head, either mounted on a helmet or as a pair of glasses,” said USC’s professor Gerard Medioni. “The computer system fuses these two image streams to produce a 3-D view of the world. As the person moves, the system registers the partial 3-D views into a unified 3-D map.”

This step has been dubbed SLAM (simultaneous localization and mapping). Based on this map, the system classifies areas as safe for traversal, or as obstacles.
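
The article doesn’t detail how the USC system labels the map, but a simple height-threshold classifier over a gridded point cloud conveys the idea. The sketch below is an illustrative stand-in, not the group’s actual algorithm; the grid size and step threshold are assumed.

```python
import numpy as np

# Sketch: one simple way a registered 3-D map can be labeled safe vs.
# obstacle, by thresholding point height within grid cells. This is an
# illustrative stand-in, not the USC group's actual classifier.

def classify_cells(points: np.ndarray, cell_m=0.25, max_step_m=0.15):
    """Bin points into a 2-D grid; a cell is an obstacle if points in
    it span more than max_step_m in height (y, in this assumed frame)."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell_m), int(z // cell_m))
        lo, hi = cells.get(key, (y, y))
        cells[key] = (min(lo, y), max(hi, y))
    return {key: ("obstacle" if hi - lo > max_step_m else "safe")
            for key, (lo, hi) in cells.items()}

pts = np.array([[0.0, 0.0, 1.0], [0.05, 0.9, 1.0], [1.0, 0.02, 2.0]])
print(classify_cells(pts))  # the first cell trips the step threshold
```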

The ultimate goal for Medioni’s group is to provide the system at a low cost for all blind users. So far, he has demonstrated a first prototype and conducted initial tests with blind patients at the Braille Institute in Los Angeles.

As with Hyyppä’s wheelchair, significant work remains to be done before widespread production and use can become a reality. But Medioni believes that the imaging field is exploding, with the barrier to entry becoming lower every day.

“Cameras are ubiquitous, computing power keeps growing, and algorithms are being made available (e.g., OpenCV),” he said. “This trend leads to robust solutions, which means more uses of the technology in more domains.”
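
OpenCV does ship the kind of building blocks Medioni alludes to. As one hedged example, a rectified stereo pair – like the streams from the two glasses-mounted cameras – can be converted to depth with block matching; the filenames and calibration values here are placeholders, not the USC setup.

```python
import cv2
import numpy as np

# Sketch: recovering depth from a rectified stereo pair with OpenCV
# block matching, in the spirit of the two-camera glasses. The image
# files and camera parameters are placeholders, not the USC system's.

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Depth follows from similar triangles: Z = f * B / d, where f is the
# focal length in pixels and B the baseline between the two cameras.
f_px, baseline_m = 700.0, 0.06  # assumed calibration values
with np.errstate(divide="ignore"):
    depth_m = np.where(disparity > 0, f_px * baseline_m / disparity, 0.0)
print(depth_m.shape, depth_m.max())
```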

Cameras out of the salt shaker

Cameras are getting smaller – some now as small as a grain of salt. One such camera was built for endoscopy applications by a German collaboration between the Fraunhofer Institute for Reliability and Microintegration IZM and Awaiba GmbH (https://www.photonics.com/wa46485).


The two main potentials of wafer-level cameras are size and cost, according to Martin Wilke of Fraunhofer IZM. This low-cost CMOS camera, measuring only 1 mm³, makes disposable endoscopes feasible. Courtesy of Awaiba GmbH.


The prototypes were reportedly so inexpensive that they could be disposed of after one use, avoiding otherwise necessary cleaning. The camera is fabricated using through-silicon via technology to enable complete wafer-scale integration of both the sensor and the imaging optics. The result is a low-cost CMOS camera only 1 cubic millimeter in size.

“The two main potentials of wafer-level cameras [WLCs] are size and cost. At the moment, size is the more important reason why WLC is interesting, especially in the medical sector,” said Martin Wilke of Fraunhofer IZM. “When all hurdles for WLC packaging are overcome, the production can be much less expensive than the conventional way of packaging. This can then bring micro cameras into applications with higher volume.”

Striking visual reminders of the trend toward smaller cameras are the pigeons fitted with tiny head-mounted cameras that help Harvard University researchers figure out how birds navigate through difficult environments. The data could serve as a model for autopilot technology (https://www.photonics.com/wa47591).


A custom-made telemetry backpack collects flight data of a pigeon maneuvering through an obstacle course. The data, including videos from the head-camera, is used to develop vision-based autopilot technologies. Courtesy of Huai-Ti Lin, Harvard University.


“Our research should inform the industry that there is a growing market for compact cameras in experimental biology,” said Dr. Huai-Ti Lin at Harvard. “Adding a device on any flying animal could degrade the flight performance due to the added weight and drag. The size, weight and robustness of a device are all extremely important considerations for animal studies, especially out in the field.”

Small wireless cameras give biologists the opportunity for unprecedented observation of animal behavior; in this case, Lin said, the pigeon head-camera videos offer the closest experience to embodying a bird, from which there is much to learn.

“Birds have exceptional ability to stabilize vision in flight. This is done primarily by stabilizing the head using both the inertial sensory input and visual input,” he said. “In many modern photographic applications, vibration is a big issue. I believe there is a lot we can learn from birds about visual-inertial sensory integration that can help with photographic technologies.”

Published: January 2012
