A multi-institutional group aims to create a machine vision system that approaches the cognitive abilities of the human brain. Such a system would enable computers not only to record images but also to understand visual content, at up to a thousand times the efficiency of current technologies.

By developing chips that function more like the human brain, researchers can achieve a number of significant breakthroughs in understanding brain function, from the work of single neurons all the way up to a more holistic view of the brain as a system, according to bioengineering professor Dr. Gert Cauwenberghs of the Jacobs School of Engineering at the University of California, San Diego, in La Jolla. For example, building chips that model different aspects of brain function, such as how the brain processes visual information, gives researchers a more robust tool for understanding where the problems that contribute to disease or neurological disorders arise.

A micrograph of the UCSD computer chip that emulates how the brain processes visual information.

The human vision system understands and interprets complex scenes for a wide range of visual tasks in real time while consuming less than 20 W of power. Smart machine vision systems that understand and interact with their environments could have a profound impact on society, including applications in aids for the visually impaired, driver assistance in automobiles and augmented-reality systems. Although several machine vision systems today can successfully perform one or a few human tasks – such as detecting human faces in point-and-shoot cameras – they are still limited in their ability to perform a wide range of visual tasks, to operate in complex, cluttered environments or to provide reasoning for their decisions. In contrast, the visual cortex in mammals excels at a broad variety of goal-oriented cognitive tasks and is at least three orders of magnitude more energy-efficient than customized state-of-the-art machine vision systems.
The five-year, $10 million National Science Foundation (NSF) project Visual Cortex on Silicon is one of two awards announced by the NSF and funded through its Expeditions in Computing program. Dr. Vijaykrishnan Narayanan, a professor of computer science and engineering and of electrical engineering at Pennsylvania State University, is the project’s lead investigator. Other collaborating institutions are the University of Southern California (USC); Stanford University in California; York College of Pennsylvania; the University of California, Los Angeles; the University of Pittsburgh; and MIT.

A neuromorphic circuit array models computation and communication across large-scale networks in the visual cortex to help researchers understand visual processing in the brain. Each chip in the array mimics the activity of 65,000 neurons that make 65 million synaptic connections held in memory. As a result, the research team can change these connections by changing entries in the memory tables, allowing great detail and flexibility in studying the circuit dynamics of cortical vision. Images courtesy of UCSD Jacobs School of Engineering.

“We have already been collaborating with colleagues at USC and MIT in developing smart camera systems for the past five years, and have demonstrated vision systems that operate with two to three orders of magnitude better energy efficiency than existing approaches,” Narayanan said. “With this expedition, we are aiming to leapfrog the intelligence of these vision systems to approach human cognitive capabilities, while being extremely energy-efficient and user-friendly.”

The expedition seeks to understand the fundamental mechanisms used in the visual cortex, with the hope of enabling the design of new vision algorithms and hardware fabrics that improve on the power, speed, flexibility and recognition accuracy of existing machine vision systems.
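The article notes that each chip stores its 65 million synaptic connections in memory tables, so "rewiring" the network amounts to editing table entries rather than changing hardware. The idea can be illustrated with a minimal sketch; all names and data structures here are hypothetical, chosen only to show the principle, and bear no relation to the actual chip's design.

```python
# Illustrative sketch: synaptic connectivity stored as an editable memory
# table. On the real chip this lives in hardware memory; here it is a dict.

class SynapseTable:
    """Maps each source neuron to its (target neuron, weight) entries."""

    def __init__(self):
        self.table = {}  # source neuron id -> list of (target id, weight)

    def connect(self, src, dst, weight):
        """Add a synapse by appending a table entry."""
        self.table.setdefault(src, []).append((dst, weight))

    def rewire(self, src, dst, new_weight):
        """Change a connection simply by rewriting its table entry."""
        self.table[src] = [(d, new_weight if d == dst else w)
                           for d, w in self.table[src]]

    def propagate(self, src):
        """Return the weighted inputs a spike from `src` delivers."""
        return dict(self.table.get(src, []))


net = SynapseTable()
net.connect(0, 1, 0.5)
net.connect(0, 2, -0.3)
net.rewire(0, 2, 0.8)   # "rewiring" is just a memory write
print(net.propagate(0))
```

Because connectivity is data rather than wiring, experimenters can reconfigure the modeled cortical circuit between (or even during) runs, which is the flexibility the caption describes.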
The interdisciplinary effort spans several domains, including neuroscience, computer vision, hardware design, new device technology, human-computer interaction, data analytics and privacy. The project offers a unique collaborative opportunity with global experts in neuroscience, computer science, nanoengineering and physics, Cauwenberghs said. He and his team are currently developing computer chips that emulate how the brain processes visual information.

“The brain is the gold standard for computing,” Cauwenberghs said, adding that conventional computers work completely differently from the brain, acting as passive processors that work through information and problems using sequential logic. The human brain, by comparison, processes information by sorting through complex input from the world and extracting knowledge without direction.