Vision helps robots help you

Caren B. Les, [email protected]

Roomba may keep your floors spiffy, but it’s got nothing on Rosie, the household robot from The Jetsons – she took care of a wide range of domestic tasks, leaving the family free to explore the future.


After anticipating that a person needs to place food in the refrigerator, an assistive robot opens the door.


But today’s robots are starting to catch up. A new system at Cornell University in Ithaca, N.Y., can help a robot predict when to assist with mundane human tasks such as filling an empty beer glass and opening a refrigerator door.

Perception is the key. “We noticed that robotic perception is one of the important missing pieces for a robot working in unstructured human environments such as homes, offices and warehouses,” said Ashutosh Saxena, assistant professor of computer science. “We often take our human vision for granted, but even for very simple tasks such as grasping or observing people, robots needed significant advances in this area.”

To come to the aid of a person in the kitchen, Cornell’s robot “sees” a situation with a Microsoft Kinect 3-D camera and “refers” to its database of 3-D videos to help it predict when it can appropriately execute an assistive task.

For their work, Saxena and PhD student Hema Koppula showed their robot about 120 videos of four people performing daily tasks. The robot extracts information from the videos on human motion and poses, and how these relate to objects and activities. Based on that information, it computes the probability for future human action in similar scenarios.

Basically, the robot computes a “belief” about what may be happening in a scene, Saxena said. As more data comes in over time, the robot updates its belief about the person’s future actions. In the researchers’ demonstration video, these beliefs are visualized as heat maps, along with predicted motion trajectories. The robot executes an assistive motion only when it “feels” secure enough in its belief.
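In spirit, that watch-update-act loop resembles a recursive Bayesian filter. The sketch below is a minimal toy illustration of the idea only: the candidate activities, the per-frame likelihood table and the 0.9 confidence threshold are invented for this example, and the Cornell system learns its beliefs from 3-D video features rather than looking them up in a hand-written table.

```python
# Toy sketch of the anticipate-then-act loop described above.
# All names, numbers and features here are illustrative assumptions,
# not the Cornell system's actual model.

CANDIDATES = ["reach_for_glass", "open_fridge", "stir_pot"]

# P(observed feature | activity): hypothetical per-frame likelihoods.
LIKELIHOOD = {
    "hand_near_glass":     {"reach_for_glass": 0.7, "open_fridge": 0.2, "stir_pot": 0.1},
    "torso_toward_fridge": {"reach_for_glass": 0.1, "open_fridge": 0.8, "stir_pot": 0.1},
}

CONFIDENCE_THRESHOLD = 0.9  # act only when the belief is "secure enough"


def update_belief(belief, observation):
    """One Bayesian step: weight each hypothesis by how well it explains
    the new observation, then renormalize so the belief sums to 1."""
    weighted = {a: belief[a] * LIKELIHOOD[observation][a] for a in belief}
    total = sum(weighted.values())
    return {a: w / total for a, w in weighted.items()}


def anticipate(observations):
    """Start from a uniform belief; assist once one hypothesis dominates."""
    belief = {a: 1.0 / len(CANDIDATES) for a in CANDIDATES}
    for obs in observations:
        belief = update_belief(belief, obs)
        best, p = max(belief.items(), key=lambda kv: kv[1])
        if p >= CONFIDENCE_THRESHOLD:
            return f"assist: {best}"  # e.g., open the refrigerator door
    return "keep watching"


# Two frames of a person turning toward the fridge push the belief
# past the threshold, so the robot commits to helping.
print(anticipate(["torso_toward_fridge", "torso_toward_fridge"]))
```

The key design point the example captures is the trade-off in the threshold: acting early on a weak belief risks the wrong assist, while waiting for near-certainty makes the robot less useful, which is why prediction accuracy matters most at short time horizons.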

In tests, the robot made correct predictions 82 percent of the time when looking one second into the future, 71 percent for three seconds, and 57 percent for 10 seconds.

The robot is limited by the resolution of the Kinect sensor, which doesn’t let it extract subtleties of motion, Saxena said. The new Kinect for the Xbox One is high definition and will give far richer data, he added.

The research was presented at the International Conference on Machine Learning in Atlanta and the Robotics: Science and Systems conference in Berlin.

A full-fledged robot like Rosie may take more time to develop, Saxena said.

Published: August 2013