
3D Imaging and AI Advance Robotic Bin Picking and Material Handling

Automated robots have been a fixture in manufacturing since the 1960s, serving the same purpose throughout: sparing human workers repetitive tasks on the production line.

Over time, these robots have advanced in scale and complexity, performing dangerous or monotonous tasks at impressive rates. AI and machine vision have only enhanced these capabilities.



A robotic arm uses IDS’ uEye+ XCP cameras and Cambrian Robotics’ AI-based solution to recognize pick points in the designated area. Courtesy of IDS Imaging Development Systems.

Global companies must compete in the face of a changing environment while managing supply chain issues. Relocating production back to the domestic market is an increasingly viable option. But such a move requires not only resilience but also compliance with strict environmental regulations and cost-effective strategies to make domestic manufacturing competitive. Moreover, anyone who wants to ensure the competitiveness of domestic production must overcome personnel bottlenecks.

The answer has been automation through robotics. And AI is increasingly integrated into robotics to map production processes and shorten training.

This is where the British startup Cambrian Robotics Limited comes in, offering an AI-based solution for fast bin picking and pick-and-place, precise part feeding for machines, and various material handling tasks.

System setup

The system consists of a module for robotic arms, a computing unit with preinstalled intelligent software, and a camera module with two uEye+ XCP cameras from IDS Imaging Development Systems as its eyes.

The cameras are attached to a robotic arm so that both point at the target object from slightly different angles.



A close-up of the camera mount placed on top of the robotic arm. Courtesy of IDS Imaging Development Systems.

“Using the stereovision principle, the two IDS cameras provide images of the object scene from different viewing angles,” said Miika Satori, founder and CEO of Cambrian Robotics. “The challenge is to determine the position of the part to be gripped as accurately as possible from these images. This, in turn, is the task of AI.”
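The stereovision principle Satori describes can be sketched with standard linear (DLT) triangulation: a feature seen in both calibrated views is intersected back into a 3D position. The intrinsics, baseline, and pick-point coordinates below are illustrative assumptions, not values from the Cambrian system.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point seen in two
    calibrated views. P1, P2 are 3x4 projection matrices; x1, x2
    are (u, v) pixel coordinates of the same feature."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to (X, Y, Z)

# Illustrative setup: identical intrinsics, 10 cm horizontal baseline.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.05, -0.02, 0.5])  # a hypothetical pick point, in meters
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free correspondences the DLT solution is exact; in practice, the accuracy of the recovered position depends on how precisely the matched pixel locations are found, which is the task the article assigns to the AI.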

The image data is fed to the preinstalled intelligent software, which identifies where the target object is located. It is then passed to the Cambrian Vision self-learning software, developed to predict both the position of each part and the pick points the robotic arm uses to move or sort it. Because the software matches pick points directly between the two images, it eliminates the need for a 3D point cloud.
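Matching a single pick point between the two images is enough to place it in depth, without reconstructing a full point cloud: for a rectified pinhole stereo pair, depth follows from the disparity via Z = f·B/d. The focal length, baseline, and disparity below are hypothetical numbers for illustration only.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Pinhole stereo: depth of one matched point, Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 10 cm baseline, 160 px disparity.
z = depth_from_disparity(800.0, 0.10, 160.0)  # 0.5 m
```

Applying this relation only to the handful of predicted pick points, rather than to every pixel, is one way a system can skip dense 3D reconstruction entirely.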


All of this is powered by an off-the-shelf, high-end NVIDIA GPU that, Satori said, is “well suited for AI model inference, especially when dealing with vision that requires high computational power.”



The uEye+ XCP camera. Courtesy of IDS Imaging Development Systems.

“In general, GPUs are used for AI model inference because they offer better parallel processing capabilities and efficient matrix operations than CPUs and are faster at a given task,” he said.

Between the AI models, image processing, and image acquisition, pick points can be identified and located quickly and precisely. Configuring the system for a new part takes between two and five minutes.

System accuracy

With an accuracy of <1 mm, Cambrian Vision is also much more accurate than competing systems.

“The system reliably detects a wide range of parts, including shiny, reflective, or transparent components, where conventional vision systems often reach their limits,” Satori said. “At the same time, it remains robust against external light conditions.”

The system has an inference speed of <170 ms compared to the >1000 ms of other solutions, which allows for cycle times of 2 to 3 s for bin picking.
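To see how a sub-170-ms inference fits inside a 2- to 3-s pick cycle, a rough time budget helps. Only the AI inference figure comes from the article; every other entry below is a hypothetical placeholder.

```python
# Illustrative bin-picking cycle budget (milliseconds).
# Only "AI inference" reflects the figure quoted in the article;
# the remaining entries are hypothetical placeholders.
budget_ms = {
    "image acquisition": 100,
    "AI inference": 170,
    "motion planning": 230,
    "pick-and-place motion": 2000,
}

cycle_s = sum(budget_ms.values()) / 1000.0
print(f"estimated cycle time: {cycle_s:.1f} s")  # prints "estimated cycle time: 2.5 s"
```

Under these assumptions, the robot's physical motion dominates the cycle; a vision stage needing over a second, as the article attributes to other solutions, would push the total well past the 3-s mark.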

The uEye+ XCPs’ 5-Gbps USB interface also helped here, delivering high-resolution imaging in any environment, including low or changing ambient light. Because lighting shifts could change how the software perceived pick points, the cameras’ backside-illumination technology and 1/2.5-in., 5.04-MP rolling-shutter sensor provided high sensitivity in both low light and the near-infrared while keeping pixel noise low.

The cameras’ compatibility with the USB3 Vision standard means that they can integrate into most image processing systems and can be used with any suitable software.

“Depending on customer requirements, we use other IDS cameras in our system. The standardized interface enables rapid deployment of a wide variety of uEye+ models,” Satori said.

Present and future implementation

At present, the 3D-imaging system is being used at companies such as Kao Corporation, which owns brands such as Jergens; the system oversees line work at Kao’s plant in Odawara, Japan.

The company said that there is still a growing demand for image processing with AI, leading to the development of cameras with faster data rates and larger sensors packaged in smaller and more affordable form factors.

“Industrial cameras are getting smaller and more affordable. This will enable even more applications,” Satori said. “Our vision is to give robots capabilities on the same level as humans.”

Published: March 2024
