Hybrid Comparative Solution Boosts Multi-Object Tracking

A team at the Gwangju Institute of Science and Technology (GIST) in Korea, led by Moongu Jeon, implemented a technique called deep temporal appearance matching association, or Deep-TAM, to overcome short-term occlusion, which affects the ability of computer vision systems to simultaneously track objects. The framework was shown to achieve high performance without sacrificing computational speed.

Algorithms that can simultaneously track multiple objects are essential to applications ranging from autonomous driving to advanced public surveillance.

Object tracking, which involves recognizing persistent objects in video footage and following their movements, remains a difficult function for computers. While computers can simultaneously track more objects than humans can, they usually fail to discriminate between the appearances of different objects.

This, in turn, can lead an algorithm to mix up objects in a scene and ultimately produce incorrect tracking results.

Conventional tracking determines object trajectories by associating a bounding box with each detected object and establishing geometric constraints between frames. The difficulty with this approach lies in accurately matching previously tracked objects with objects detected in the current frame. Differentiating detected objects based on features such as color usually fails because of changes in lighting conditions and occlusions.
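
The geometric half of this conventional pipeline can be illustrated with a short sketch. The code below is a minimal, hypothetical example of intersection-over-union (IoU) matching, the kind of bounding-box association described above; the function names and the greedy matching strategy are illustrative and are not taken from the researchers' implementation.

# Minimal sketch of conventional geometric (IoU-based) association.
# Names and the greedy matching strategy are illustrative only.

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, threshold=0.3):
    """Greedily match each existing track to the detection with the
    highest overlap; detections left unmatched start new tracks."""
    matches, unmatched = [], list(range(len(detections)))
    for t_idx, track_box in enumerate(tracks):
        scores = [(iou(track_box, detections[d]), d) for d in unmatched]
        if scores:
            best_score, best_d = max(scores)
            if best_score >= threshold:
                matches.append((t_idx, best_d))
                unmatched.remove(best_d)
    return matches, unmatched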


The researchers’ solution focused on enabling the tracking model to accurately extract the appearance features of detected objects and compare them not only with those of other objects in the frame but also with a recorded history of each tracked object’s features.

To this end, the researchers combined joint-inference neural networks (JI-Nets) with long short-term memory (LSTM) networks. The JI-Nets allow the appearances of two detected objects to be compared directly, from scratch, while the LSTMs help associate stored past appearances with those in the current frame. Using historical appearances in this way allowed the algorithm to overcome short-term occlusions of the tracked objects.
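
This description can be made concrete with a schematic sketch in PyTorch. The layer sizes, class names, and overall structure below are assumptions for illustration; they show the two ideas (a network that scores a pair of crops jointly, and an LSTM that summarizes a track's stored appearance history), not the published Deep-TAM architecture.

# Schematic PyTorch sketch, under assumed layer sizes and names;
# not the authors' published architecture.
import torch
import torch.nn as nn

class JointInferenceNet(nn.Module):
    """Takes two object crops stacked along the channel axis and
    outputs a single match score, so the comparison is learned jointly
    rather than from independently pre-extracted features."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.score = nn.Linear(64, 1)

    def forward(self, crop_a, crop_b):             # (B, 3, H, W) each
        pair = torch.cat([crop_a, crop_b], dim=1)  # (B, 6, H, W)
        return torch.sigmoid(self.score(self.encoder(pair)))

class AppearanceHistory(nn.Module):
    """LSTM over a track's stored per-frame appearance embeddings,
    so a short occlusion does not erase the track's identity."""
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, history):                    # (B, T, feat_dim)
        out, _ = self.lstm(history)
        return torch.sigmoid(self.head(out[:, -1]))  # score at last step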

“Compared to conventional methods that preextract features from each object independently, the proposed joint-inference method exhibited better accuracy in public surveillance tasks, namely pedestrian tracking,” Jeon said.

The researchers also offset a main drawback of deep learning, its low speed, by adopting indexing-based GPU parallelization to reduce computing times. Tests on public surveillance data sets confirmed that the proposed tracking framework offers state-of-the-art accuracy and is therefore ready for deployment.
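
The article does not detail the indexing scheme, but the general batching idea behind this kind of GPU parallelization can be sketched as follows: rather than scoring each track-detection pair in a Python loop, index tensors enumerate every pair so that a single batched forward pass scores them all at once. The helper below is a hypothetical illustration, assuming a pairwise scoring model such as the JointInferenceNet sketched earlier.

# Hedged sketch of index-based batching for GPU parallelism;
# the authors' exact indexing scheme is not described in the article.
import torch

def score_all_pairs(model, track_crops, det_crops):
    """track_crops: (T, 3, H, W); det_crops: (D, 3, H, W).
    Returns a (T, D) matrix of match scores from one batched pass."""
    T, D = track_crops.size(0), det_crops.size(0)
    t_idx = torch.arange(T).repeat_interleave(D)  # [0,0,...,1,1,...]
    d_idx = torch.arange(D).repeat(T)             # [0,1,...,0,1,...]
    scores = model(track_crops[t_idx], det_crops[d_idx])  # (T*D, 1)
    return scores.view(T, D)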

Vision-Spectra.com
Aug 2021
GLOSSARY
tracking
1. The process of following an object's movement; accomplished by focusing a radar beam on the reticle of an optical system on the object and plotting its bearing and distance at specific intervals. 2. In display technology, use of a light pen to move an object across a display screen.
machine vision
Interpretation of an image of an object or scene through the use of optical noncontact sensing mechanisms for the purpose of obtaining information and/or controlling machines or processes.
image comparison
A method used in imaging to detect subtle differences between two apparently similar pictures. It can be achieved by superimposing the negative of one photograph over a contact print of another, by projecting or displaying the images side by side, or by displaying the images in rapid sequence.
Research & Technology, education, Asia Pacific, Gwangju Institute of Science and Technology, GIST, computer vision, image processing, image tracking, tracking, machine vision, algorithms, imaging algorithms, object detection, neural networks, surveillance, surveillance and imaging systems, surveillance and navigation, autonomous driving, image comparison
