In today’s society, it’s safe to assume that we’re almost always being watched. Whether through human eyes, computer software or camera lenses, surveillance is ubiquitous. And although there may never be a consensus on its merits or its appropriate role, the technology is expanding rapidly, and researchers are taking notice. Coming to terms with our age of monitoring will not be easy. But electrical engineers at the University of Washington are advancing the well-meaning side of the surveillance field with moving cameras that communicate with one another to automatically track pedestrians.
An algorithm trains a network of moving and stationary cameras to identify people, recognize differences in their appearance and follow each individual across multiple camera views. Visual Simultaneous Localization and Mapping (V-SLAM), pedestrian detection, ground plane estimation and kernel-based tracking are all integrated into this one system. It may sound like something out of Orwell’s Nineteen Eighty-Four, but lead researcher Dr. Jenq-Neng Hwang, a professor of electrical engineering at UW, assures skeptics that this technology is nowhere near as overbearing as Big Brother.
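One of the components named above, kernel-based tracking, can be illustrated with a minimal sketch. The idea behind the classic mean-shift variant is to repeatedly move a search window toward the kernel-weighted centroid of nearby target pixels until it converges on the object. The sketch below operates on a toy cloud of 2-D "pedestrian pixel" coordinates rather than real video, and the function names and parameters are illustrative, not the UW team's implementation.

```python
import math

def mean_shift_step(points, center, bandwidth):
    """One mean-shift iteration: move the window center toward the
    Gaussian-kernel-weighted mean of nearby points."""
    wx = wy = wsum = 0.0
    for x, y in points:
        d2 = (x - center[0]) ** 2 + (y - center[1]) ** 2
        w = math.exp(-d2 / (2.0 * bandwidth ** 2))  # kernel weight
        wx += w * x
        wy += w * y
        wsum += w
    return (wx / wsum, wy / wsum)

def track(points, start, bandwidth=5.0, tol=1e-3, max_iter=50):
    """Iterate mean-shift steps until the window center converges."""
    center = start
    for _ in range(max_iter):
        new_center = mean_shift_step(points, center, bandwidth)
        if math.dist(new_center, center) < tol:
            return new_center
        center = new_center
    return center

# A symmetric cluster of target pixels around (20, 30); the window,
# started off-target at (15, 25), drifts onto the cluster.
cluster = [(19, 30), (21, 30), (20, 29), (20, 31), (20, 30)]
found = track(cluster, start=(15, 25))
```

In a real tracker the weights would come from how well each pixel's color matches the target's appearance model, but the convergence mechanics are the same.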
“Tracking humans automatically across cameras in a three-dimensional space is new,” Hwang said. “As the cameras talk to each other, we are able to describe the real world in a more dynamic sense.”
Photo courtesy of Dr. Jenq-Neng Hwang.
“Our idea is to enable the dynamic visualization of the realistic situation of humans walking on the road and sidewalks, so eventually people can see the animated version of the real-time dynamics of city streets on a platform like Google Earth,” said Hwang.
While tracking a person across cameras with non-overlapping fields of view, a typical problem arises: a person’s appearance can vary dramatically from one video to the next because of the different perspectives, angles and hues each camera produces. To overcome this issue, the researchers built a link between the cameras. An initial recording lets the cameras gather training data by calculating the differences in color, texture and angle between each pair of cameras. After this calibration period, the algorithm automatically references those differences, picking out the same people across multiple frames and tracking them, all without seeing their faces.
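The calibration step described above can be sketched in miniature. Assume, purely for illustration, that each person is summarized by a single scalar brightness feature (the real system uses much richer color and texture cues). From training pairs of the same people seen by both cameras, we fit a linear transfer function mapping camera A's readings into camera B's color space, then re-identify a new person by mapping their camera-A feature and picking the nearest camera-B track. All names here are hypothetical.

```python
def fit_linear_transfer(pairs):
    """Least-squares fit of b ≈ slope * a + offset from paired
    brightness readings of the same people in cameras A and B."""
    n = len(pairs)
    mean_a = sum(a for a, _ in pairs) / n
    mean_b = sum(b for _, b in pairs) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in pairs)
    var = sum((a - mean_a) ** 2 for a, _ in pairs)
    slope = cov / var
    offset = mean_b - slope * mean_a
    return slope, offset

def match(feature_a, candidates_b, transfer):
    """Map a camera-A appearance feature into camera B's space and
    return the label of the closest camera-B track."""
    slope, offset = transfer
    predicted = slope * feature_a + offset
    return min(candidates_b, key=lambda kv: abs(kv[1] - predicted))[0]

# Calibration recording: camera B reads systematically brighter
# (here, b = 0.8 * a + 10 for the same people).
training = [(10, 18), (20, 26), (30, 34), (40, 42)]
transfer = fit_linear_transfer(training)

# A person measured at 25 in camera A should match the camera-B
# track whose feature sits near 0.8 * 25 + 10 = 30.
who = match(25, [("p1", 12), ("p2", 30), ("p3", 50)], transfer)
```

The design point is that the mapping is learned once per camera pair during calibration and then reused, so no face or identity data is ever needed, only relative appearance.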
The UW team installed the cameras on cars, robots and drones, but the linking technology can be used anywhere, as long as cloud and wireless connections are available. These visual recordings could be useful for security and surveillance, monitoring for unusual behavior or tracking a moving suspect.
“Around 1997, I was involved in a startup company building digital surveillance systems,” Hwang said. “I realized the importance of intelligent surveillance, such as video analytics, instead of pure recording. Since the data [is] now digital and stored in computers, [we] might as well fully utilize them.”
And for those who want to be free of the prying eyes of technology, Hwang has a clear message.
“No identity nor facial information is linked with the people being tracked,” he said. “We should take advantage of [recording] for better criminal location, city or community planning, business statistics, customers’ behavior analysis and health care.”
Hwang envisions this technology as a means of counting human movement on a large scale. Tracking those in senior homes, keeping an eye on Alzheimer’s patients and seeking out criminals are clearly worthy objectives, the researchers note. For those who cry foul, however, Hwang says there are bigger fish to fry.
“Compared to what Google and Amazon are doing [to our] online behaviors, my techniques are really nothing.”