Rebecca C. Jernigan, firstname.lastname@example.org
COLUMBUS, Ohio – Humans in general are pretty good at identifying people and things that don’t belong in a given scene. That’s the theory behind the neighborhood watch concept – that neighbors can pick out people who are acting in an unusual manner.
This model doesn’t work as well in busy environments, however, where the amount of information to sort through increases exponentially. Security personnel in urban areas, airports and other fast-moving environments must actively monitor a large number of video feeds simultaneously in their attempt to keep everyone safe, but with so much information to process, it is easy to miss a sign of trouble.
A wide-angle panorama generated by the surveillance system shows the Ohio State University campus. Images courtesy of James W. Davis.
A smart surveillance system currently under development at Ohio State University may provide the solution. Using detailed software algorithms, professor James W. Davis and doctoral student Karthik Sankaranarayanan are creating a system of surveillance cameras that could assist security personnel with this overwhelming task. Eventually, the approach could identify lost people on a college campus or in a neighborhood, as well as individuals in airports or large cities who are acting suspiciously. It could track a person, determine his or her exact location and draw security’s attention to the area where there is a problem.
The researchers have written three algorithms for the project. The first uses commercially available pan-tilt-zoom cameras to create a seamless 360° wide-angle video panorama of a street scene. When the picture is displayed on a computer monitor, a user can click anywhere within the image to view that area.
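The article does not detail how the panorama is assembled, but the basic idea of placing pan-tilt-zoom frames into a shared 360° image can be sketched as follows. The panorama size, tilt range and angular resolution here are illustrative assumptions, not values from the Ohio State system:

```python
# Hypothetical sketch: placing a pan-tilt-zoom camera's frame center
# into an equirectangular 360-degree panorama. Overlapping frames
# captured at different pan/tilt settings would then be blended into
# one seamless image a user can click anywhere within.
PANO_W, PANO_H = 3600, 900          # assumed panorama size (~0.1 degree/pixel)
TILT_MIN, TILT_MAX = -30.0, 60.0    # assumed tilt range of the camera

def pano_coords(pan_deg, tilt_deg):
    """Map a camera (pan, tilt) angle in degrees to a panorama pixel.

    Pan wraps around the full 360-degree horizon; tilt spans the
    camera's vertical range, with the highest tilt at the top row.
    """
    x = int(round((pan_deg % 360.0) / 360.0 * (PANO_W - 1)))
    frac = (TILT_MAX - tilt_deg) / (TILT_MAX - TILT_MIN)
    y = int(round(frac * (PANO_H - 1)))
    return x, y
```

In a real stitcher, each captured frame would be warped around the pixel this function returns and blended with its neighbors; the sketch only shows the angle-to-pixel bookkeeping that makes a click on the panorama correspond to a viewing direction.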
A panorama would be mapped onto a high-resolution aerial image of the area being observed to give the close-up a larger context. Pictured is an aerial view of the Ohio State University campus.
The second algorithm maps locations within the panorama to the corresponding areas on an aerial map of the scene to provide the latitude and longitude of each pixel in the image.
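A common way to relate two views of the same ground plane is a 3×3 homography; once such a transform has been calibrated (for example, from manually matched landmarks between the panorama and the aerial image), every panorama pixel yields a latitude and longitude. The matrix values below are illustrative placeholders, not the researchers' calibration:

```python
# Hypothetical sketch: mapping a panorama pixel to geographic
# coordinates with a pre-calibrated 3x3 homography. H is a made-up
# example matrix roughly centered near Columbus, Ohio.
H = [
    [1e-5, 0.0,  40.0],   # pixel x contributes a small latitude offset
    [0.0,  1e-5, -83.0],  # pixel y contributes a small longitude offset
    [0.0,  0.0,   1.0],
]

def pixel_to_latlon(x, y, H=H):
    """Apply the homography to pixel (x, y) in homogeneous coordinates."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w   # (latitude, longitude)
```

Applying this to every pixel in advance is what would give each point in the panorama the latitude and longitude the article describes.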
Finally, the third software component uses the information from the first two to calculate the exact location of the selected person and can be used to instruct the cameras to pan and tilt to automatically follow the subject of interest.
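Steering the camera back onto a selected target amounts to inverting the panorama mapping: a tracked pixel position becomes a pan/tilt command. The sketch below assumes the same illustrative panorama dimensions and tilt range as above; it is not the researchers' tracking algorithm, only the geometry of the camera command:

```python
# Hypothetical sketch of the third step: turn a tracked panorama pixel
# back into a pan/tilt command so the camera re-centers on the target.
# Panorama size and tilt range are illustrative assumptions.
PANO_W, PANO_H = 3600, 900
TILT_MIN, TILT_MAX = -30.0, 60.0

def pixel_to_pan_tilt(x, y):
    """Invert the panorama mapping: pixel (x, y) -> (pan, tilt) in degrees."""
    pan = x / (PANO_W - 1) * 360.0
    tilt = TILT_MAX - y / (PANO_H - 1) * (TILT_MAX - TILT_MIN)
    return pan, tilt

def follow(track):
    """Emit one pan/tilt command per tracked pixel position, so the
    camera pans and tilts to follow the subject through the scene."""
    return [pixel_to_pan_tilt(x, y) for x, y in track]
```

Feeding a sequence of tracked positions through `follow` produces the stream of pan/tilt commands that keeps the subject of interest in view.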
The development work isn’t done yet, however. The scientists are now researching methods that would allow the camera system to “learn” which behaviors are common and which are unusual, enabling the computer to do almost all of the identification and tracking without assistance.
A selected target can be tracked throughout the scene, enabling law enforcement or security personnel to keep an eye on the subject. Eventually, the camera may have the capacity to select targets without user input.
Davis said that the panorama and mapping components are currently being tested on a large scale and should be available for use within a year. Although the system does not yet have the ability to identify unusual behavior, the researchers hope to develop that capability within three to five years.