
Optics Accelerates Deep Learning Computations on Smart Devices

MIT researchers have created a method for computing directly on smart home devices that drastically reduces the latency that can delay such a device's response to a command or question. One reason for this delay is that connected devices don't have enough memory or power to store and run the enormous machine learning models needed to understand a question. Instead, the question is sent to a data center that can be hundreds of miles away, where an answer is computed and returned to the device.

The MIT researchers’ technique shifts the memory-intensive steps of running a machine learning model to a central server, where components of the model are encoded onto lightwaves. The waves are transmitted to a connected device over fiber optics, which allows large quantities of data to be sent through a network at high speed. The receiver then uses a simple optical device to rapidly perform computations with the parts of the model carried by those lightwaves.

The technique led to a more than hundredfold improvement in energy efficiency when compared to other methods. It could also improve security, since a user’s data does not need to be transferred to a central location for computation. 

Further, the method could enable a self-driving car to make decisions in real time while using just a tiny percentage of the energy currently required by power-hungry computers. It could also be used for live video processing over cellular networks, or even enable high-speed image classification on a spacecraft millions of miles from Earth.

Senior author Dirk Englund, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), as well as a member of the MIT Research Laboratory of Electronics, said, “Every time you want to run a neural network, you have to run the program, and how fast you can run the program depends on how fast you can pipe the program in from memory. Our pipe is massive — it corresponds to sending a full feature-length movie over the internet every millisecond or so. That is how fast data comes into our system. And it can compute as fast as that.” 
A smart transceiver uses silicon photonics technology to dramatically speed up one of the most memory-intensive steps of running a machine learning model. This can enable an edge device, such as a smart home speaker, to perform computations with more than a hundredfold improvement in energy efficiency. Courtesy of Alexander Sludds.
According to lead author and EECS grad student Alexander Sludds, the process of fetching data — the “weights” of the neural network, in this case — from memory and moving it to the parts of a computer that do the actual computation is one of the biggest limiting factors to speed and energy. “So, our thought was, why don’t we take all that heavy lifting — the process of fetching billions of weights from memory — move it away from the edge device and put it someplace where we have abundant access to power and memory, which gives us the ability to fetch those weights quickly?” Sludds said.


To streamline this data retrieval process, the team developed a new neural network architecture. Neural networks can contain billions of weight parameters, which are numeric values that transform input data as it is processed. These weights must be stored in memory. At the same time, the data transformation itself involves billions of computations, which require a great deal of power to perform.
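The scale of the problem is easy to see with back-of-the-envelope arithmetic. The sketch below is purely illustrative: the parameter count and 32-bit weight format are assumptions for the example, not figures from the article.

```python
# Hypothetical illustration: memory footprint of a neural network's weights.
# The parameter count and 4-byte (32-bit float) weight size are assumptions.

def weight_memory_gb(num_weights: int, bytes_per_weight: int = 4) -> float:
    """Return the storage needed for the weights alone, in gigabytes."""
    return num_weights * bytes_per_weight / 1e9

# A model with 5 billion 32-bit weights needs ~20 GB for the weights alone,
# far beyond the memory of a smart speaker, which is why queries are
# normally shipped to a data center instead.
print(weight_memory_gb(5_000_000_000))  # 20.0
```

Storing the weights is only half the cost; moving them from memory to the compute units for every inference is the bottleneck Sludds describes above.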

The neural network architecture that the team developed, Netcast, involves storing weights in a central server connected to a smart transceiver. The smart transceiver, a thumb-size chip that can receive and transmit data, uses silicon photonics to fetch trillions of weights from memory each second. Weights are received as electrical signals and subsequently encoded onto lightwaves. Since the weight data is encoded as bits — 1s and 0s — the transceiver converts them by switching lasers. A laser is turned on for a 1 and off for a 0. It combines these lightwaves and then periodically transfers them through a fiber optic network so a client device doesn’t need to query the server to receive them. 
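The bit-to-laser mapping the article describes amounts to on/off keying. A minimal sketch, with illustrative function names that are not from the paper:

```python
# Minimal sketch of the on/off encoding described in the article:
# each bit of a weight maps to the laser being on (1) or off (0).
# Names and the 8-bit width are illustrative assumptions.

def bits_of(weight: int, width: int = 8) -> list:
    """Unsigned binary representation of a weight value, MSB first."""
    return [(weight >> i) & 1 for i in reversed(range(width))]

def laser_states(weight: int, width: int = 8) -> list:
    """Map each bit to a laser state: 1 -> 'on', 0 -> 'off'."""
    return ["on" if b else "off" for b in bits_of(weight, width)]

print(laser_states(0b1010_0001))
# ['on', 'off', 'on', 'off', 'off', 'off', 'off', 'on']
```

In the actual transceiver this switching happens at optical line rates, which is what lets trillions of weights per second stream out over the fiber.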

Once the lightwaves arrive at the client device, a broadband Mach-Zehnder modulator uses them to perform superfast analog computation. This involves encoding input data from the device, such as sensor information, onto the weights. The modulator then sends each individual wavelength to a receiver that detects the light and measures the result of the computation.
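Numerically, the modulator-plus-detector step behaves like a multiply-accumulate. The toy model below is a loose sketch under that assumption; the real device operates on continuous optical fields, not Python floats, and the function name is illustrative.

```python
# Toy numerical model of the analog multiply-accumulate step: each incoming
# lightwave carries one weight, the Mach-Zehnder modulator scales it by the
# corresponding local input value (one multiplication per wavelength), and
# the detector accumulates the products. Purely illustrative.

def optical_mac(weights: list, inputs: list) -> float:
    """Sum of weight * input products, as the detector would accumulate."""
    assert len(weights) == len(inputs), "one input sample per wavelength"
    return sum(w * x for w, x in zip(weights, inputs))

# One neuron's pre-activation for a 3-element input:
print(optical_mac([0.5, -1.0, 2.0], [1.0, 0.5, 0.25]))  # 0.5
```

Because the multiplication is done passively by the light itself, the client device spends almost no electrical power on it, which is the source of the milliwatt-scale operation quoted below.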

The researchers devised a way to set the modulator to do trillions of multiplications per second. This vastly increased the speed of computation on the device while using only a tiny amount of power.

“In order to make something faster, you need to make it more energy efficient,” Sludds said. “But there is a trade-off. We’ve built a system that can operate with about a milliwatt of power but still do trillions of multiplications per second. In terms of both speed and energy efficiency, that is a gain of orders of magnitude.”

The researchers tested the architecture by sending weights over an 86-km fiber connecting their lab to MIT Lincoln Laboratory. Netcast enabled machine learning with high accuracy — 98.7% for image classification and 98.8% for digit recognition — at rapid speeds.
 
Now, the researchers want to iterate on the smart transceiver chip to achieve even better performance. They also want to miniaturize the receiver, which is currently the size of a shoebox, to the size of a single chip. This would enable the chip to fit onto a smart device like a cellphone.

Euan Allen, a Royal Academy of Engineering Research fellow at the University of Bath who was not involved with this work, said, “Using photonics and light as a platform for computing is a really exciting area of research with potentially huge implications on the speed and efficiency of our information technology landscape. The work of Sludds et al. is an exciting step toward seeing real-world implementations of such devices, introducing a new and practical edge-computing scheme whilst also exploring some of the fundamental limitations of computation at very low (single-photon) light levels.”

The research is funded, in part, by NTT Research, the National Science Foundation, the Air Force Office of Scientific Research, the Air Force Research Laboratory, and the Army Research Office.

The research was published in Science (www.doi.org/10.1126/science.abq8271).

Published: October 2022
Glossary
integrated optics
A thin-film device containing miniature optical components connected via optical waveguides on a transparent dielectric substrate, whose lenses, detectors, filters, couplers and so forth perform operations analogous to those of integrated electronic circuits for switching, communications and logic.
integrated photonics
Integrated photonics is a field of study and technology that involves the integration of optical components, such as lasers, modulators, detectors, and waveguides, on a single chip or substrate. The goal of integrated photonics is to miniaturize and consolidate optical elements in a manner similar to the integration of electronic components on a microchip in traditional integrated circuits.
deep learning
Deep learning is a subset of machine learning that involves the use of artificial neural networks to model and solve complex problems. The term "deep" in deep learning refers to the use of deep neural networks, which are neural networks with multiple layers (deep architectures). These networks, often called deep neural networks or deep neural architectures, have the ability to automatically learn hierarchical representations of data.
optoelectronics
Optoelectronics is a branch of electronics that focuses on the study and application of devices and systems that use light and its interactions with different materials. The term "optoelectronics" is a combination of "optics" and "electronics," reflecting the interdisciplinary nature of this field. Optoelectronic devices convert electrical signals into optical signals or vice versa, making them crucial in various technologies.
chip
1. A localized fracture at the end of a cleaved optical fiber or on a glass surface. 2. An integrated circuit.
modulation
In general, changes in one oscillation signal caused by another, such as amplitude or frequency modulation in radio, which can be done mechanically or intrinsically with another signal. In optics the term generally is used as a synonym for contrast, particularly when applied to a series of parallel lines and spaces imaged by a lens, and is quantified by the equation: Modulation = (Imax − Imin)/(Imax + Imin), where Imax and Imin are the maximum and minimum intensity levels of the image.
