Self-driving cars rely on their ability to accurately “see” the road ahead and make adjustments based on what they see. They need to, for instance, react to a pedestrian who steps out from between parked cars, or know not to turn down a road that is unexpectedly closed for construction. As such technology becomes more ubiquitous, there is a growing need for a better, more efficient way for machines to process visual information.
New research from the University of Pittsburgh will develop a neuromorphic vision system that takes a brain-inspired approach to capturing visual information, benefitting everything from self-driving vehicles to neural prosthetics.
McGowan Institute for Regenerative Medicine affiliated faculty member Ryad Benosman, PhD, professor of ophthalmology at the University of Pittsburgh School of Medicine, who holds appointments in electrical engineering and bioengineering, and Feng Xiong, PhD, assistant professor of electrical and computer engineering at the Swanson School of Engineering, received $500,000 from the National Science Foundation (NSF) to conduct this research.
Conventional image sensors record information frame by frame, storing a great deal of redundant data alongside the useful signal: most pixels, like stationary buildings in the background, do not change from one frame to the next. Inspired by the human brain, the team will instead develop a neuromorphic vision system driven by the timing of changes in the input signal, rather than by full image frames.
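The difference between the two approaches can be roughly illustrated in code. The sketch below is a simplified model of change-driven (event-based) sensing, not the team's actual system: successive frames are compared, and a timestamped event is emitted only for pixels whose intensity changes beyond a threshold. The function name, event format, and threshold value are all illustrative assumptions.

```python
# Minimal sketch of change-driven sensing, in the spirit of silicon-retina
# cameras: rather than storing every pixel of every frame, emit an event
# (timestamp, x, y, polarity) only where the intensity changes enough.
# All names and parameters here are illustrative, not from the Pitt system.

THRESHOLD = 10  # minimum intensity change that triggers an event

def frame_to_events(prev_frame, curr_frame, timestamp, threshold=THRESHOLD):
    """Compare two frames; return events only for pixels that changed."""
    events = []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            diff = c - p
            if abs(diff) >= threshold:
                polarity = 1 if diff > 0 else -1  # got brighter or darker
                events.append((timestamp, x, y, polarity))
    return events

# A 3x3 scene where only the center pixel changes (e.g. something moved):
frame_a = [[50, 50, 50], [50, 50, 50], [50, 50, 50]]
frame_b = [[50, 50, 50], [50, 200, 50], [50, 50, 50]]

events = frame_to_events(frame_a, frame_b, timestamp=0.033)
print(events)  # one event recorded, instead of nine pixels stored again
```

A static scene produces no events at all, which is the source of the bandwidth and power savings the article describes.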
“With existing neuromorphic camera systems, the communication between the camera and the computing system is limited by how much data it is trying to push through, which negates the benefits of the large bandwidth and low power consumption that this camera provides,” says Dr. Xiong. “We will use a spiking neural network with realistic dynamic synapses that will enhance computational abilities, develop brain-inspired machine learning to understand the input, and connect it to a neuromorphic event-based silicon retina for real-time operating vision.”
The team expects the system to achieve orders-of-magnitude improvements in energy efficiency and bandwidth over existing technology.
“We believe this work will lead to transformative advances in bio-inspired neuromorphic processing architectures and sensing, with major applications in self-driving vehicles, neural prosthetics, robotics, and general artificial intelligence,” says Dr. Benosman.
The grant will run from July 1, 2019, to June 30, 2022.