Hank Hogan, firstname.lastname@example.org
CAMBRIDGE, Mass. – Put away those phones, and forget the laptop. You won’t need them in the future if an MIT project succeeds. Using a camera, a projector and a mobile computing device, researchers there have turned the whole world into a screen – and fingers waving in the air into an input device.
A map projected on a nearby wall allows interaction with both the digital and the real worlds.
Graduate student Pranav Mistry is working on the project, dubbed “sixthsense.” It links people’s interaction with digital information to their interaction with the physical world, thereby blurring the distinction between the two and turning ordinary reality into something more. “It is possible to augment the environment around us with visual information,” Mistry said.
This is already being done to some extent when people whip out a phone to verify a fact or to figure out where they are. The MIT group has taken that concept to a new level. The prototype consists of a pocket projector, a mirror and a camera, all coupled together in a pendant worn by the user. The projector and camera are connected to a mobile computing device that provides the brains of the system. Working in concert, these components bridge the gap between the digital and the real worlds. For instance, when the user needs a keypad to dial a phone number, the projector displays one on a nearby object, such as the palm of the hand. The user dials by touching the projected numbers, with the camera picking up the visual cues and the pocket computer interpreting the incoming stream of images. This visual tracking is made easier because users of the prototype wear colored markers on their fingers.
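The marker-tracking step can be sketched roughly as follows. This is a minimal illustration, not the MIT team's actual code: the synthetic image, marker color and matching tolerance are all invented for the example. The idea is simply to find the pixels close to a known marker color in each camera frame and report their centroid as the fingertip position.

```python
# Minimal sketch of color-marker fingertip tracking (illustrative only;
# the image, marker color and tolerance below are invented assumptions).

def find_marker(image, marker_rgb, tol=30):
    """Return the centroid (row, col) of pixels within tol of marker_rgb,
    or None if the marker is not visible in the frame."""
    mr, mg, mb = marker_rgb
    hits = []
    for r, row in enumerate(image):
        for c, (red, green, blue) in enumerate(row):
            if (abs(red - mr) <= tol and abs(green - mg) <= tol
                    and abs(blue - mb) <= tol):
                hits.append((r, c))
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

# A tiny synthetic 4x4 "camera frame": black background with a red
# marker blob in the upper-left corner.
BLACK, RED = (0, 0, 0), (255, 0, 0)
frame = [
    [RED,   RED,   BLACK, BLACK],
    [RED,   RED,   BLACK, BLACK],
    [BLACK, BLACK, BLACK, BLACK],
    [BLACK, BLACK, BLACK, BLACK],
]

print(find_marker(frame, RED))  # centroid of the red blob: (0.5, 0.5)
```

A real implementation would run this per video frame on camera images (typically after converting to a color space such as HSV, which is more robust to lighting), which is why the distinctive colored markers make the problem so much easier than tracking bare fingers.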
Augmented reality used to read a newspaper: the system projects a video feed onto a nearby surface, in this case the newspaper itself.
Other applications could involve projecting information or a video feed onto a newspaper, thus providing more information about a specific item of interest. On command, the system also could supply sales data about a book. What is more, the projected data could change with the object. Thus, as the wearer points the projector toward a newspaper, book or even coffee cup, what is projected would align with the surface of the object.
A key difference between sixthsense and other augmented reality systems, Mistry explained, is the use of hand gestures. Framing a scene with the thumb and forefinger, for example, causes the system to snap an image. Unlike other attempts at augmented reality, sixthsense does not require the use of special glasses. Because it projects information onto available surfaces, it enables collaboration between people, with each seeing and interacting with the overlaid information.
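The framing gesture described above reduces to a simple geometric test once the fingertip markers have been located. The sketch below is a guess at how such a check might look, not the project's actual logic: if the four tracked marker positions sit roughly at the corners of a rectangle of meaningful size, treat it as a command to snap a photo. The coordinates and tolerance are invented for illustration.

```python
# Illustrative check for the photo-framing gesture: four fingertip
# markers forming the approximate corners of a rectangle. The geometry
# test and threshold are assumptions, not the SixthSense implementation.

def is_framing_gesture(points, tol=0.15):
    """points: four (x, y) marker positions, normalized to [0, 1].
    True when they approximate the corners of an axis-aligned rectangle
    that is at least tol wide and tall."""
    if len(points) != 4:
        return False
    xs = sorted(p[0] for p in points)
    ys = sorted(p[1] for p in points)
    # The two leftmost x values should nearly match, as should the two
    # rightmost; likewise for y. The frame must also have some size.
    return (abs(xs[0] - xs[1]) < tol and abs(xs[2] - xs[3]) < tol
            and abs(ys[0] - ys[1]) < tol and abs(ys[2] - ys[3]) < tol
            and xs[2] - xs[1] > tol and ys[2] - ys[1] > tol)

# Thumbs and forefingers held at four corners: gesture recognized.
print(is_framing_gesture([(0.2, 0.3), (0.8, 0.3), (0.2, 0.7), (0.8, 0.7)]))  # True
# Fingers in a rough line: no frame, so no photo is taken.
print(is_framing_gesture([(0.1, 0.5), (0.3, 0.5), (0.6, 0.5), (0.9, 0.5)]))  # False
```

Each recognized gesture would then map to a system action (capture an image, project a keypad, and so on), which is how hand movements alone can replace a conventional input device.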
Another key difference is the cost of the prototype, estimated at approximately $350. The low cost is due, in part, to the use of off-the-shelf components. Turning the prototype into a commercial device would require making it more robust and compact. It also might be necessary to improve the image processing so that the colored finger markers need not be worn and so that more gestures can be interpreted and acted upon.
Augmented reality is made possible thanks to an off-the-shelf camera, a projector, a mirror and a smart phone (not visible). The location of the fingers tells the system what to do. All images courtesy of Pranav Mistry, MIT Media Lab.
Such improvements are being investigated. At the same time, there has been a lot of interest in the concept and prototype, with plans already in the works to make it into a product. However, don’t look for it to appear in time for this year’s Christmas shopping.
“I think it will take a year or two to launch sixthsense as a consumer product,” Mistry said.