
Dynamic Machine Vision Camera Simulation in Operative Condition

Jul 17, 2024
About This Webinar
Machine vision camera systems are essential for enabling automation and maintaining quality control across industries, ensuring that processes are efficient, consistent, and meet accuracy and reliability standards. Image quality is a dominant consideration in machine vision applications, particularly where objects, or the camera itself, may be in motion within the scene. Accurately modeling and evaluating the performance of these systems under specific operational conditions poses a significant challenge.

This presentation introduces a physics-based simulation framework that integrates camera lens systems, 3D-scene ray-tracing analysis, sensor functionality, and post-processing, enabling the simulation of camera systems in real-world scenarios that capture fast-moving objects in real time and with precision. Through a practical machine vision camera application, Nazari leverages a commercial solution to integrate a camera lens design into a system-level analysis, modeling the camera within a 3D scene while replicating realistic illumination conditions and generating quantifiable irradiance results.

Furthermore, by simulating a timeline scenario, Nazari specifies the camera integration time, lag time, and the objects' trajectories in the scene, converting the illuminance/irradiance map into an exposure map. This map represents the energy received by the sensor over the exposure window and is used to model motion blur and rolling-shutter effects. For a more advanced analysis, a CMOS image sensor model is integrated to generate the electronic map captured by the sensor. Finally, by applying the camera's post-processing, the final image of a camera acquisition is generated from the raw data. In conclusion, the proposed virtual simulation enables a detailed analysis of the camera system's behavior under specific operational conditions, facilitating optimization of its performance to obtain the best image quality.
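The irradiance-to-exposure conversion described above can be illustrated with a minimal NumPy sketch. This is not the commercial workflow from the presentation; it is a conceptual toy model under stated assumptions: the function names (`exposure_map`, `electron_map`) and all parameters are illustrative, motion blur is approximated by summing uniformly sampled irradiance frames over the exposure window, rolling shutter by delaying each row's sampling window (holding the last frame at the boundary), and the electron count by a single-wavelength photon-energy conversion with a scalar quantum efficiency.

```python
import numpy as np

def exposure_map(irradiance_frames, t_exposure, line_readout_time=0.0):
    """Integrate sampled irradiance (W/m^2) into an exposure map (J/m^2).

    irradiance_frames: array (n_steps, rows, cols), the scene irradiance
    sampled uniformly over the exposure window. Summing the samples
    approximates motion blur for a moving object.
    line_readout_time: per-row readout delay; a nonzero value shifts each
    row's sampling window later, approximating rolling-shutter skew.
    """
    n_steps, rows, cols = irradiance_frames.shape
    dt = t_exposure / n_steps
    exposure = np.zeros((rows, cols))
    for r in range(rows):
        # Row r starts integrating r * line_readout_time later; clamp the
        # shifted indices so the window holds the last available sample.
        shift = int(round(r * line_readout_time / dt))
        idx = np.clip(np.arange(n_steps) + shift, 0, n_steps - 1)
        exposure[r] = irradiance_frames[idx, r, :].sum(axis=0) * dt
    return exposure

def electron_map(exposure, pixel_area, wavelength, quantum_efficiency):
    """Convert exposure (J/m^2) to collected electrons per pixel,
    assuming monochromatic light at `wavelength` (m)."""
    h, c = 6.626e-34, 3.0e8          # Planck constant, speed of light
    photon_energy = h * c / wavelength
    photons = exposure * pixel_area / photon_energy
    return photons * quantum_efficiency

# A bright spot moving one column per time step: the global-shutter
# exposure spreads it evenly (motion blur), while a nonzero line readout
# time skews the lower rows (rolling shutter).
frames = np.zeros((4, 2, 4))
for t in range(4):
    frames[t, :, t] = 1.0
blur = exposure_map(frames, t_exposure=4.0)
skew = exposure_map(frames, t_exposure=4.0, line_readout_time=1.0)
```

With the toy input, `blur` is uniform (each pixel sees the spot for exactly one step), while `skew` shows row 1 missing the spot's first position and double-counting its last, the characteristic rolling-shutter distortion.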

This presentation premiered during the 2024 Vision Spectra Conference. For more information on Photonics Media conferences and summits, visit events.photonics.com

About the presenter

Mina Nazari, Ph.D., is a senior application engineer at Ansys, focusing on the optical simulation product line in the high-tech and autonomous vehicle industries. She received her doctorate in electrical engineering from Boston University in 2019. Before joining Ansys, she worked in architectural LED lighting at Luminii and in the lidar industry at Aptiv. Nazari has authored several peer-reviewed articles in the field of optics and photonics and has spoken at numerous symposia and conferences during the past decade.