On-chip neural network classifies images almost instantly | Tech News


An optical deep neural network integrated on a chip could mark a significant advance for image classification technology: it can recognize and classify an image in less than a nanosecond.

In addition, the photonic neural network, developed at the University of Pennsylvania, is scalable, allowing it to classify increasingly complex images.

The speed of the on-chip deep neural network stems from its ability to directly process the light it receives from an object of interest. It does not need to convert optical signals to electrical signals or change input data to binary format before it can recognize images.


Using a deep neural network of optical waveguides, the researchers’ chip – smaller than a square centimeter – can detect and classify an image in less than a nanosecond, without the need for a separate processor or memory unit. Courtesy of Ella Maru Studio.


The waveguides connect the chip’s optical neurons, forming a multi-layered network similar to the neuronal layers of the human brain. As information moves through the layers of the network, the input image is classified with increasing accuracy into a category that the network has previously learned.

The researchers ensured that the training for the network was specific enough to produce accurate image classifications, yet general enough to remain useful when the network is presented with new datasets. The network can be scaled by adding neural layers; as layers are added, the network becomes able to read more complex, higher-resolution images.
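
For a concrete picture of what such a layered classifier does, here is a minimal software sketch in Python/NumPy. It is purely an analogy under assumed, hypothetical sizes and random weights: the actual chip realizes its layers with optical waveguides and an optoelectronic nonlinearity as light propagates, not with matrices stored in memory.

```python
import numpy as np

def layer(x, weights, bias):
    # Weighted combination followed by a nonlinear activation,
    # loosely analogous to the chip's optoelectronic nonlinearity.
    return np.maximum(0.0, weights @ x + bias)

def classify(image, layers):
    # Pass the input through each layer in turn; the final
    # layer's largest output is taken as the predicted class.
    x = image.ravel()
    for weights, bias in layers:
        x = layer(x, weights, bias)
    return int(np.argmax(x))

# Hypothetical sizes: a tiny 6x6 input, one hidden layer of 9 "neurons",
# and two output classes; adding more (weights, bias) pairs deepens the network.
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(9, 36)), np.zeros(9)),
    (rng.normal(size=(2, 9)), np.zeros(2)),
]
print(classify(rng.normal(size=(6, 6)), layers))
```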

Although current on-chip image classification technology can perform billions of calculations per second, its speed is limited by a clock-based, sequential processing scheme that, in traditional systems, requires computation steps to be performed one after another.

In contrast, the on-chip deep neural network directly processes optical waves as they propagate through the layers of the network. The nonlinear activation function is realized optoelectronically and allows a classification time of less than 570 ps.

“Our chip processes information via what we call ‘propagation computing,’ which means that, unlike clock-based systems, computations occur as light propagates through the chip,” said Farshid Ashtiani, a researcher on the work. “We also skip the step of converting optical signals to electrical signals because our chip can read and process optical signals directly, and these two changes make our chip a much faster technology.”

Firooz Aflatouni, associate professor of electrical and systems engineering at the University of Pennsylvania. Courtesy of the University of Pennsylvania.


The researchers showed that the chip can perform an entire image classification in 0.5 ns – the time it takes traditional digital computer chips to perform a single computational step. The on-chip network demonstrated two- and four-class classification of handwritten characters with accuracies better than 93.8% and 89.8%, respectively.

“To understand how fast this chip can process information, think about a typical frame rate for movies,” said Professor Firooz Aflatouni. “A movie usually plays between 24 and 120 frames per second. This chip will be able to process nearly 2 billion images per second.”
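
As a back-of-the-envelope check of that figure (an illustrative Python snippet, not code from the researchers), the reported classification time of roughly 0.5 ns corresponds to a throughput of about two billion images per second:

```python
# Illustrative calculation only: ~0.5 ns per classification, as reported.
classification_time_s = 0.5e-9
throughput = 1.0 / classification_time_s   # images per second
print(f"{throughput:.2e} images/s")        # prints 2.00e+09
```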

In addition to eliminating analog-to-digital conversion, the chip’s direct, clockless optical data processing eliminates the need for a memory module, enabling faster and more power-efficient neural networks. “When today’s computer chips process electrical signals, they often pass them through a graphics processing unit, or GPU, which takes up space and power,” Ashtiani said. “Our chip doesn’t need to store the information, eliminating the need for a large memory unit.”

Eliminating a memory module can also increase data privacy. “With chips that read image data directly, there is no need for photo storage and therefore data leakage does not occur,” Aflatouni said.

As a proof of concept, the chip was tested on datasets containing two or four types of handwritten characters. It achieved classification accuracies of over 93.8% and 89.8%, respectively. Courtesy of the University of Pennsylvania.


By accelerating image classification, the on-chip deep neural network could improve applications such as facial recognition and lidar detection in self-driving cars. Beyond images, the on-chip photonic deep neural network could also be applied to a range of other data types.

“We already know how to convert many data types into the electrical domain – images, audio, speech and many other data types,” Aflatouni said. “Now we can convert different types of data into the optical domain and have them processed almost instantaneously using this technology.”

The next steps for the team will be to further study the scalability of the chip and explore the classification of 3D objects.

The research has been published in Nature (www.doi.org/10.1038/s41586-022-04714-0).
