PHOENIX, Ariz., March 13, 2009 – This year’s Vision Show offers an inclusive, hands-on look at machine vision and imaging components and solutions, along with practical training that attendees can put to use.
Held at the Phoenix Convention Center in Arizona from March 31–April 2, 2009, this year’s show will feature six tutorials and seven sessions that will be taught by industry experts offering highly valuable training from both the vision and automation sides of the system.
The Phoenix Convention Center in Arizona.
Phoenix was chosen for the 2009 show because of its high concentration of semiconductor manufacturers, its federal defense contracts, and its significant aerospace exports, an industry increasingly relevant to machine vision and imaging. Because machine vision technologies are also prevalent in several surrounding states and into the Midwest and the South, a large number of attendees are expected at this year’s show.
The tutorials kick off with “The Fundamentals of Machine Vision,” by David Dechow of Aptúra Machine Vision Solutions. Attendees will learn all the basics, including how images are captured and transferred to the computer, the principles of lighting, and the common processing algorithms used by machine vision systems. Dechow will demonstrate how to successfully implement machine vision and how to avoid common pitfalls during the implementation, launch and production phases. This is an ideal training course for people new to machine vision as well as a great refresher course for anyone with machine vision responsibilities.
Instructors Daryl Martin of Advanced Illumination and Stuart Singer of Schneider Optics Inc. will present “Beginning Lighting & Optics for Machine Vision.” This course focuses on lighting geometry and the basics of illumination optics. Attendees will learn how and where light fits into the energy spectrum, review the components of the machine vision front end to understand how they affect the images acquired by the system, and learn how to define the fundamental parameters of optical layout. They will also gain a real-world understanding of how to balance a system’s field of view, resolution, working distance, and depth of field.
“Advanced Lighting & Optics for Machine Vision” will be presented by Jon Chouinard of Microscan Systems Inc. and Gregory Hollows of Edmund Optics. This tutorial is designed for the engineering professional and concentrates on real-world techniques for putting together illumination and optic systems that work. Attendees will learn how to select proper illumination wavelengths and how to deal with complex part surface geometries. Other topics will include lens and component selection; optomechanical layout, including system bends; illumination integration; controlling back reflections; and mounting techniques. Prior attendance at a “Basic Lighting & Optics” course is encouraged, but not required.
David Dechow of Aptúra Machine Vision Solutions will then present “Integrating Machine Vision for Automation Systems,” offering solutions for integrating machine vision and incorporating it into an automation system. This tutorial will discuss application analysis, project specification and implementation of components for a machine vision system. It will also address integration of a machine vision system into a full automation system, including network communications. Targeted at attendees with a basic understanding of machine vision, optics and lighting, the session will interest anyone seeking deeper insight into machine vision systems integration, as well as those who need to network the system into existing automation.
Dalsa’s Ben Dawson will present the tutorial “Fundamentals and Applications of Color Machine Vision.” According to Dawson, color is important for a growing number of machine vision applications in food, pharmaceutical, automotive and other markets. This course will start with how color images are formed and then review the aspects of human color vision that are important in designing a color machine vision system. Common color algorithms will be discussed, as well as the components and design of a color machine vision system. The course will finish with case studies of color machine vision applications.
A two-part tutorial titled “Designing Real Time High Speed Systems” will be presented by Perry West of Automated Vision Systems. In part one of this course, attendees will learn how latency and determinism relate to high-speed and real-time performance, and how the different types of vision system components affect the latency of the vision system. Topics cover components for image acquisition, including triggering, camera exposure, and image transfer, as well as different approaches to image processing, including processing architecture and hardware, operating system, application software, and resynchronization. The second part of the tutorial will explore the performance parameters needed to quantify speed and real-time performance, further techniques for managing latency and determinism to improve a vision system design, and a methodology for guiding the design of a vision system. Three example designs will illustrate how to use these parameters and techniques to achieve design performance goals.
This year’s Vision Show will also feature several sessions, including “Advances in LED Lighting and Lighting Techniques,” which will discuss advances in light sources, lighting control, spectral filtering and other lighting techniques that help you get the most from your images.
Learn how advancements in 3-D camera technology are enabling new solutions for more applications than ever before at the “3-D Machine Vision Solutions” session. Attendees can discover how to use 3-D laser sensors to enhance machine vision solutions. This session will provide real application techniques you can use in electronics, pharmaceutical, food & beverage, aerospace, automotive and many other industries.
To learn about infrared and x-ray techniques, visit the “Machine Vision Solutions using Non-Visible Imaging” session. Hear how non-visible imaging methods offer unique benefits and find out if these solutions are right for your specific needs.
For an overview of what lies ahead for GigE Vision, Camera Link, FireWire and USB camera interfaces, attend the session “Future Directions for Camera Interface Technologies.” Hear how developments in camera interface technologies will positively affect your machine vision system.
The session “Machine Vision for Traceability, Error Proofing and Part Marking” will touch on advances in image processing technology. Learn how direct part mark identification and 2D Data Matrix symbology can efficiently handle applications for traceability, error proofing, and process control.
“Advances in Smart Cameras” will outline which applications are best suited to smart camera solutions versus PC-based solutions and will show you how to decide which approach best meets your needs. Smart cameras offer many benefits in machine vision applications, and in this session you can learn how to take advantage of these easy-to-use technologies for your specific needs.
To explore how algorithms for geometric matching, classification, segmentation, edge detection and OCR/OCV work and how to use them in your specific applications, attend the session titled, “OCR/OCV, Pattern Matching and Edge Detection Techniques for Machine Vision.”
In addition to its training courses and sessions covering up-to-date technology, components and solutions, The Vision Show will feature new technologies, applications, products and product demonstrations, as well as expert advice from product engineers and leading vendors.
Dubbed North America’s leading showcase of machine vision and imaging components and solutions, this show is ideal for users of machine vision and imaging technologies, system integrators, automation integrators, machine builders, OEMs, and for companies that want to learn how machine vision can be beneficial to them.
For more information, visit: www.machinevisiononline.org