

New Robot Comes to Its Senses

JULIA GERMAINE, julia.germaine@photonics.com

Robot design is often anthropocentric: artificial agents are given limbs, features and form factors analogous to our own. The human body is certainly a well-engineered system, but researchers are now improving robot sensitivity by moving beyond the limits of human morphology.

A team from Carnegie Mellon University (CMU) in Pittsburgh has added a vision system to a robot’s hand, allowing it to rapidly model its environment in 3D, as well as locate its hand within that environment. As Professor Siddhartha Srinivasa noted, robots “usually have heads that consist of a stick with a camera on it,” and they can’t bend over like a person to get a better view of a space. Rather than requiring major structural changes, the hand camera expands the robot’s “senses” with a modest amount of hardware and software.

Robotics Ph.D. student Matthew Klingensmith described the goal of the research to Photonics Media.


A team from Carnegie Mellon University has added a vision system to a robot’s hand, allowing it to rapidly model its environment in 3D, as well as locate its hand within that environment. Bottom right: a Kinova MICO2 robotic manipulator. Courtesy of CMU Personal Robotics Lab.

“Accurate robot arms tend to be very rigid and heavy,” Klingensmith said. “This makes them dangerous to operate around humans and limits their use to controlled environments. Safer robots designed to work around people — like the Kinova MICO we used in this work — tend to be less accurate and more ‘floppy.’ To make up for these inaccuracies, we use machine vision to correct for the robot’s joint-angle error.”

The researchers used a technique popular in mobile robotics called simultaneous localization and mapping (SLAM), in which the robot fuses input from sensors such as cameras, laser radar and wheel odometry to build a 3D map of a new environment while simultaneously determining its own location within that map.
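For readers who want to see the principle in code, the following is a minimal, hypothetical sketch of SLAM’s core idea (plain Python with NumPy, not the CMU software): a Kalman filter jointly estimates a robot’s position and a landmark’s position by fusing noisy odometry in a prediction step with noisy range measurements in a correction step.

```python
# Hypothetical 1D SLAM sketch (illustration only, not the CMU code):
# jointly estimate the robot's position and one landmark position,
# fusing noisy odometry (prediction) with range measurements (correction).
import numpy as np

x = np.array([0.0, 5.0])      # state: [robot position, landmark position]
P = np.diag([0.1, 4.0])       # uncertainty: landmark is poorly known at first

Q = np.diag([0.05, 0.0])      # process noise (only the robot moves)
R = np.array([[0.01]])        # range-measurement noise
F = np.eye(2)                 # state transition
H = np.array([[-1.0, 1.0]])   # measurement model: range = landmark - robot

rng = np.random.default_rng(0)
true_robot, true_landmark = 0.0, 6.0

for _ in range(20):
    u = 0.3                                   # commanded forward motion
    true_robot += u
    # Predict: apply odometry to the robot's part of the state.
    x = x + np.array([u, 0.0])
    P = F @ P @ F.T + Q
    # Correct: fuse a noisy range measurement to the landmark.
    z = (true_landmark - true_robot) + rng.normal(0, 0.1)
    y = z - H @ x                             # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated [robot, landmark]:", x, "| true:", true_robot, true_landmark)
```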

The researchers demonstrated their articulated robot motion for SLAM (ARM-SLAM) using a small depth camera, the Structure Sensor from Occipital, along with a uEye XS RGB sensor from IDS, attached to a lightweight manipulator arm from the Kinova MICO series. The team made a 3D model of a bookshelf and found that ARM-SLAM produced reconstructions equivalent to or better than other mapping techniques.
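What distinguishes ARM-SLAM from standard SLAM is suggested by the name: rather than estimating an unconstrained six-degree-of-freedom camera pose, the camera’s motion is tied to the arm’s kinematics, so the estimation works over joint angles. The toy example below (a hypothetical sketch, not the released code) shows that idea for a planar two-link arm: a wrist-mounted camera observes a few known map points, and the encoder error is recovered by finding the joint-angle corrections that best explain what the camera sees.

```python
# Hypothetical toy illustrating the ARM-SLAM idea (not the CMU implementation):
# estimate corrections to reported joint angles so that the hand camera's
# observations line up with known map points.
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 0.5, 0.4                        # planar two-link arm, link lengths (m)

def camera_position(q):
    """Forward kinematics: position of a camera mounted at the wrist."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

# Known map points (say, corners of a bookshelf) in the world frame.
map_points = np.array([[0.6, 0.3], [0.7, 0.5], [0.5, 0.6]])

q_true = np.array([0.40, 0.90])                   # actual joint angles
q_reported = q_true + np.array([0.05, -0.08])     # biased encoder readings

# The wrist camera measures each map point relative to itself
# (camera orientation is ignored here to keep the sketch short).
measurements = map_points - camera_position(q_true)

def residuals(dq):
    """Mismatch between what the camera saw and what the corrected
    joint angles predict it should have seen."""
    cam = camera_position(q_reported + dq)
    return (map_points - cam - measurements).ravel()

sol = least_squares(residuals, x0=np.zeros(2))
print("encoder error (true):     ", q_true - q_reported)
print("encoder error (recovered):", sol.x)
```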

Klingensmith told Photonics Media that primary applications might include automatic calibration and tracking of a robot arm without fiducial markers, higher-quality 3D scans using “noisy” robot arms, and real-time visual servoing against the 3D map. His current work involves extending the system to 2D RGB sensors and calibrating additional parameters of the robot, such as its camera extrinsics, joint-angle offsets and the motion of its base.
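One of those applications, visual servoing against the reconstructed map, can be pictured as a simple feedback loop. The sketch below is a generic, hypothetical example of position-based servoing (not the CMU system): once the hand camera has localized the end effector in the map, each control cycle commands a small, capped Cartesian step toward a target point expressed in the same map frame.

```python
# Hypothetical position-based visual servoing sketch (not the CMU system):
# drive the end effector toward a target point expressed in the map frame.
import numpy as np

def servo_step(ee_position, target, gain=0.5, max_step=0.05):
    """Return a small Cartesian step toward the target, capped for safety."""
    step = gain * (target - ee_position)
    norm = np.linalg.norm(step)
    if norm > max_step:
        step *= max_step / norm
    return step

ee = np.array([0.20, 0.10, 0.30])        # current end-effector position (m)
target = np.array([0.45, 0.25, 0.40])    # grasp point located in the 3D map

for i in range(100):
    ee = ee + servo_step(ee, target)
    if np.linalg.norm(target - ee) < 1e-3:
        break

print(f"converged after {i + 1} steps, final error "
      f"{np.linalg.norm(target - ee):.4f} m")
```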

Combining robotics, a vision system and complex algorithms, the collaborative effort to produce ARM-SLAM involved the Personal Robotics Lab and Field Robotics Center at CMU, and the Dyson Robotics Lab at Imperial College London. An open-source reference version of the system will be available soon.
