Merging TOF Depth and 2D Color Data for 3D Robot Perception

Jul 21, 2021
Sponsored by
Basler AG
About This Webinar
Computer vision can make robots "smarter" and expand their fields of application. Time-of-flight (TOF) cameras capture precise 3D depth data in real time and offer compact, robust 3D vision solutions. For some robotic applications, it is useful to merge the depth data with the RGB data from a 2D color camera. The result is a point cloud in the object's true colors. This compensates for missing depth information, assists in classifications based on object color, and enables neural networks to be pretrained on 2D color data. Learn more about 2D and 3D vision-guided robotics.
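The fusion described above can be sketched in a few lines of numpy: back-project each valid depth pixel into 3D through a pinhole camera model, then attach the color of the corresponding RGB pixel. This is a minimal illustration, not Basler's implementation; it assumes the depth and color images are already registered to the same viewpoint (a real TOF/RGB rig requires an extrinsic calibration and reprojection step first), and the intrinsics `fx, fy, cx, cy` are placeholder values.

```python
import numpy as np

def depth_to_colored_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth map into 3D and attach per-pixel RGB colors.

    depth: (H, W) array of metric depth values; 0 marks pixels with no
           valid TOF return (e.g., absorbing or out-of-range surfaces).
    rgb:   (H, W, 3) color image assumed to be aligned with the depth map.
    fx, fy, cx, cy: pinhole intrinsics of the (shared) camera model.
    Returns an (N, 6) array of [X, Y, Z, R, G, B] rows.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # drop pixels without depth data
    z = depth[valid]
    x = (u[valid] - cx) * z / fx           # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    colors = rgb[valid]                    # color from the aligned 2D image
    return np.column_stack([x, y, z, colors])
```

Pixels with missing depth are simply dropped here; in practice, the overlapping color data is one way to fill or interpolate such gaps, as the webinar description notes.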

This presentation premiered during the 2021 Vision Spectra Conference, a Photonics Media conference.

About the presenter:
Kimberly Matsinger joined Basler in 2018 as an applications engineer after completing her bachelor's degree in physics. In that role, she supported key accounts through their design-in process. Matsinger holds advanced-level certification through A3's Certified Vision Professional program. This year she began her current role as product market manager for the Americas, where her detailed product knowledge and customer awareness inform market strategy and help ensure that product development fits customer objectives.
Imaging, 3D vision, cameras, machine vision, robotics, Vision Spectra, industrial, time of flight, TOF camera