
AI-Based Processing Moves Produce from Harvest to Market

The more intricate an application, the more time is typically spent devising a solution for it.

Some applications, however, such as the postharvest processing of Brussels sprouts and rutabagas, become exceptions to this rule once they are automated.

After images of vegetables are obtained stereoscopically, the AI-based machine vision solution relies on a trained neural network to process the collected image and distill information that tells the physical system where to cut, peel, or destem the vegetable. With Brussels sprouts, the system accounts for positional information about the stem in relation to the bulb. Courtesy of Scorpion Vision.

In England, a vision-guided robotic system is helping to ensure that fresh produce arrives in the marketplace free from defects. The automated solution is being deployed in produce packhouses, the facilities in which fruits and vegetables are processed after harvest and before they are made available for sale.

The solution relies on AI-based machine vision and 3D stereoscopic imaging. Buoyed by a trained neural network, a robot peels, destems, and cuts vegetables to render even the wonkiest carrot a viable candidate to appear on supermarket shelves.

Beyond operating at speeds faster than a human can achieve, with comparable accuracy, the solution improves yields and minimizes food waste.

A Brussels sprouts problem

In 2015, a client approached Scorpion Vision Ltd. with a Brussels sprouts problem.

To prepare the vegetable for sale, workers in packhouses needed to destem the sprouts as quickly as possible without slicing away any of the bulb. Destemming must be performed precisely. It is also time-consuming, and the environment in which it is performed slows workers down further: packhouses are wet and muddy facilities because food safety and handling protocols require the vegetables to be cleaned.

“One of our machine builders came to us and asked us what we thought about cutting the stems off Brussels sprouts,” said Paul Wilson, CEO of Scorpion Vision, a Lymington, England-based developer of machine vision systems for factory automation. “It is a favorite story of mine because we think all Brussels sprouts look the same.”

A water blade was used in Scorpion Vision’s original vegetable-cutting solution. The water source and blade are positioned over the belt (bottom), and leaves are discarded in the processing stage. The imaging component (not pictured) is fixed at a predetermined height and distance from the vegetables on the belt. Courtesy of Scorpion Vision.

The client used a pocketed conveyor that collected sprouts one by one from a feeder. The sprouts were positioned randomly as they were deposited onto the conveyor.

In collaboration with Scorpion, the client next installed a machine vision system that enabled a robot to pick the sprout regardless of the vegetable’s orientation, isolate the stem, and pass the sprout through a water blade to slice away the bottom.

While all Brussels sprouts may look the same, each one, deposited at random on the conveyor, has distinguishing traits that affect an automated system’s ability to carry out the packhouse task.

“Sometimes the stem might be obscured,” Wilson said. “And over 18 months, as we developed this system, seasons changed. And so did the vegetables we were getting into the packhouses.”

English sprouts tend to be round whereas others have an elongated shape. English sprout season ends in winter, at which time wholesalers import from other locations, and therefore the sprouts’ shapes may change. The stems of all varieties jut out from the bulb at various angles, depending on each sprout’s size and shape. Sprouts are also leafier at different stages of the growing season.

An optimal solution needed to be able to do more than localize the stem on identical items. In addition to recognizing shape, the system needed to consider texture and color to find the stem on nonuniform vegetables.

“We needed to incorporate a 3D-imaging and color-detection system,” Wilson said.

Multiple use cases

Scorpion’s multicamera solution uses a 3D structured-light stereoscopic imaging component. The imager, part of the Scorpion 3D Stinger family of machine vision components, features 1.3-MP color cameras with 16-mm, 3-MP lenses and phase-locked loop filters. Mounting fixings are attached to the backs of the cameras.


The active-illumination system relies on random pattern projections and near-infrared light sources.

After images are obtained, a height map is created from point clouds based on the vegetable images and the resulting patterns. The automated system compares the two 2D images obtained from the stereoscopic imager and extracts the disparity data needed to create the 3D measurements that reveal the location of stems; in essence, this information tells a robot how to orient the vegetables so that they can be sent down the conveyor to the water blade.
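
The article does not publish Scorpion Vision's reconstruction code, but the step it describes, converting a rectified stereo pair into a disparity map and then a height map, can be sketched in a few lines of OpenCV. In the sketch below, the filenames, focal length, baseline, and matcher settings are illustrative assumptions rather than Scorpion Vision's values; only the 0.62-m camera height matches a figure cited later in the article (for the leek line).

# Illustrative sketch: convert a rectified stereo pair into a height map with
# OpenCV. Filenames, focal length, baseline, and matcher settings are
# placeholder assumptions, not Scorpion Vision's values.
import cv2
import numpy as np

FOCAL_LENGTH_PX = 1600.0  # assumed focal length in pixels
BASELINE_M = 0.10         # assumed distance between the two cameras, in meters
CAMERA_HEIGHT_M = 0.62    # camera height above the conveyor (620 mm, per the leek setup)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global matching copes with low-texture produce surfaces once a random
# pattern has been projected onto the scene.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point to pixels

# Depth from disparity: Z = f * B / d, valid only where disparity > 0.
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]

# Height above the conveyor = camera height minus camera-to-surface distance.
height_map = np.where(valid, CAMERA_HEIGHT_M - depth, 0.0)

In a map like this, a sprout's stem would register as a local deviation from the bulb's otherwise smooth profile, which is the kind of cue the guidance logic can convert into a cutting position.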

Since the vision component must account for items that are static as well as ones that are in motion, the flexibility of structured stereo imaging is advantageous for the application. The 2D imagery that a stereoscopic component provides also gives information on color that can help the automated system to distinguish a vegetable’s flesh from its core.
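
As a rough illustration of how that color channel can separate a paler, less saturated stem or core from the surrounding flesh, the sketch below thresholds an HSV image and keeps the largest blob as the stem candidate. The thresholds and filename are placeholders that would need tuning per crop and lighting setup; this is not Scorpion Vision's published method.

# Illustrative sketch: use the 2D color image to isolate a paler, less
# saturated stem/core region. Thresholds and filename are placeholders.
import cv2
import numpy as np

bgr = cv2.imread("sprout_color.png")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

lower = np.array([20, 0, 120], dtype=np.uint8)   # assumed: low saturation, high value
upper = np.array([60, 80, 255], dtype=np.uint8)
core_mask = cv2.inRange(hsv, lower, upper)

# Keep only the largest connected blob as the stem candidate and hand its
# centroid (pixel coordinates) to the guidance logic.
count, labels, stats, centroids = cv2.connectedComponentsWithStats(core_mask)
if count > 1:
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    stem_x, stem_y = centroids[largest]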

A close-up of leeks that are ready to be processed (left), along with a graphical image representation in the form of a 3D model (right). Courtesy of Scorpion Vision.

 
  A close-up of leeks that are ready to be processed (left), along with a graphical image representation in the form of a 3D model (right). Courtesy of Scorpion Vision.

In a packhouse, six vision-guided six-axis robots could work together, for example, to trim Brussels sprouts at an average rate of one sprout per second per robot, Wilson said. On the front end of the setup, two or three human workers may feed sprouts into hoppers, which feed the sprouts onto the conveyor.
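
Those figures translate into a simple throughput estimate, sketched below using only the numbers from Wilson's example; a real line would lose some capacity to downtime and changeovers.

# Back-of-the-envelope throughput for the setup Wilson describes:
# six robots, each trimming roughly one sprout per second.
robots = 6
sprouts_per_robot_per_second = 1
sprouts_per_hour = robots * sprouts_per_robot_per_second * 3600
print(sprouts_per_hour)  # 21600 sprouts per hour, before downtime or changeovers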

A process that once required about 40 people from front end to back end therefore now requires just two or three people and a sextet of robots to achieve the same deliverable.

The solution worked so well with Brussels sprouts that it is now used in facilities that process lettuce, cabbage, carrots, rutabagas, and leeks. In 2020, during the start of the COVID-19 pandemic, Wilson said an automated system for leek trimming was installed on a farm after management was forced to send home members of its workforce. The system helped the farm to maintain its business operations.

The distance of the cameras from the conveyor in an automated system depends on the vegetable. Processing one variety equates to one use case. For leeks, the cameras were positioned at a height of 620 mm above the conveyor.
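
One plausible way to capture such per-variety settings is a configuration table, sketched below. Only the 620-mm leek value comes from the article; the other entries are hypothetical placeholders meant to show the idea of treating each vegetable as its own use case.

# Hypothetical per-crop configuration. Only the 620-mm leek value is cited in
# the article; the other entries are placeholders.
CAMERA_HEIGHT_MM = {
    "leek": 620,              # from the article
    "brussels_sprout": None,  # not published; set during commissioning
    "rutabaga": None,         # not published; set during commissioning
}

def camera_height_mm(crop):
    """Return the configured camera height above the conveyor, or None if unset."""
    return CAMERA_HEIGHT_MM.get(crop)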

Similarly, the necessary processing steps vary by vegetable. Each type of produce has features and imperfections that are specific to the crop. The shape of a leek, for example, can cause dirt to fully obscure the root, which in turn makes it difficult for a robot to identify the leafier portion of the vegetable and guide an optimal cut from the water blade. Root flies may damage rutabagas, which can prevent the vegetables from making it to the market. It is up to a vision system to account for these factors and make the appropriate determinations.

The position of the cores in lettuce and cabbage, and the risk of losing valuable sections of the head as bunches of the vegetables are cut, proved to be the bottleneck that prompted system designers to turn to deep learning.

With lettuce processing, Wilson said the first iteration of the system was effective about 80% of the time. “That’s when we started using a neural network. We took bad examples — maybe 15 images — and fed them into the network. The performance shot right up.”

Some neural networks, such as those used for safety and/or precision monitoring applications, must be trained on hundreds of distinct images.

“We don’t need that many data sets for this application,” Wilson said. “We really need to just give the system a hint for what we were looking for. A little bit of AI can help if you have the right camera in the right light, with the product positioned the right way. The real challenges are in the automation — in the repeatability and the image quality.”
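
Wilson does not disclose the network architecture or framework, but the "feed the failures back in" approach he describes can be sketched as few-shot fine-tuning of a pretrained model in PyTorch. The folder layout, labels, model choice, and hyperparameters below are assumptions for illustration only.

# Illustrative sketch: fine-tune a small pretrained classifier on the handful
# of images the first-pass system got wrong. Folder layout, labels,
# architecture, and hyperparameters are assumptions, not the actual system.
import torch
from torch import nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Roughly 15 hard examples, organized as failure_cases/<class_name>/*.png
dataset = datasets.ImageFolder("failure_cases", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))
model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):  # a few passes suffice for so few images
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

In practice the production task is closer to locating a stem than classifying a whole image, but the principle is the same: a pretrained backbone plus a small set of targeted failure cases can be enough when the camera, lighting, and product presentation are already tightly controlled.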

Combined technologies

Faster speeds and better yields were immediate payoffs of the AI-based solution. Additionally, the precision of the imaging component, the repeatability of the robot, and the effectiveness of the water blade combined to ensure that each piece of produce retained its full value as it moved out of the packhouse: depending on the vegetable, the automated solution cleaned, peeled, de-cored, trimmed, and/or cut the produce as appropriate to maximize its retail value.

Now that the system has undergone several iterations of improvement, Wilson said such a design could remain operational for 10 years or more, provided the produce does not change. This longevity is especially intriguing to packhouse operators because workforces, like seasons, are apt to change, which necessitates frequent, time-consuming retraining of workers who may be employed on a seasonal basis.

The complete system concept is also being implemented in facilities that process seafood. This use case launched before the start of the pandemic, Wilson said, and further refinements are needed to optimize the technology for deshelling and cleaning.

Published: August 2022
