(News from Nanowerk) Artificial intelligence (AI) plays an important role in many systems, from predictive text to medical diagnostics. Inspired by the human brain, many AI systems are implemented based on artificial neural networks, where electrical equivalents of biological neurons are interconnected, trained with a set of known data, such as images, and then used to recognize or classify new data points.
In traditional neural networks used for image recognition, the image of the target object is first formed on an image sensor, such as a smartphone’s digital camera. Next, the image sensor converts the light into electrical signals, and ultimately into binary data, which can then be processed, analyzed, stored, and classified using computer chips. Accelerating these capabilities is key to improving a number of applications, such as facial recognition, automatic text detection in photos, or helping self-driving cars recognize obstacles.
While current consumer-grade image classification technology on a digital chip can perform billions of calculations per second, making it fast enough for most applications, more sophisticated image-classification tasks, such as identifying moving objects, identifying 3D objects, or classifying microscopic cells in the body, push the computational limits of even the most powerful technology. The current speed limit of these technologies is set by the clock-based timing of computational steps in a computer processor, where computations occur one after another in a linear schedule.
To address this limitation, Penn engineers created the first scalable chip that classifies and recognizes images almost instantly. Firooz Aflatouni, associate professor of electrical and systems engineering, along with postdoctoral fellow Farshid Ashtiani and graduate student Alexander J. Geers, eliminated the four main time-consuming culprits of the traditional computer chip: converting optical signals to electrical signals, the need to convert input data to binary format, a large memory module, and clock-based calculations.
They achieved this through direct processing of light received from the object of interest using an optical deep neural network implemented on a 9.3 square millimeter chip.
The study, published in Nature (“An on-chip photonic deep neural network for image classification”), describes how the chip’s many optical neurons are interconnected using optical wires or “waveguides” to form a deep network of many “layers of neurons” mimicking that of the human brain. Information passes through the layers of the network, with each step helping to classify the input image into one of its learned categories. In the researchers’ study, the images classified by the chip were hand-drawn, letter-like characters.
Much like the neural network in our brain, this deep network is designed to allow rapid processing of information. The researchers demonstrated that their chip can perform an entire image classification in half a nanosecond, the time it takes traditional digital computer chips to perform a single computational step on their clock-based schedule.
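The layer-by-layer flow described above can be illustrated with a toy feedforward network in ordinary Python. This is a minimal sketch with made-up weights, not the photonic implementation; on the chip, the equivalent weighted sums and nonlinearities happen as light propagates through waveguides rather than as clocked instructions.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums followed by a nonlinearity.
    On the photonic chip the analogous operation happens as light propagates;
    here it is an ordinary clocked computation."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(math.tanh(total))  # nonlinear activation
    return outputs

def classify(pixels, network):
    """Pass a flattened image through each layer in turn; the index of the
    largest final output is the predicted category."""
    signal = pixels
    for weights, biases in network:
        signal = layer(signal, weights, biases)
    return max(range(len(signal)), key=lambda i: signal[i])

# Hypothetical two-layer network for 4-pixel "images" and two categories;
# all weights are invented for illustration.
network = [
    ([[0.9, -0.4, 0.2, 0.1], [-0.3, 0.8, -0.1, 0.5], [0.2, 0.2, 0.7, -0.6]],
     [0.0, 0.1, -0.1]),
    ([[1.0, -0.5, 0.3], [-0.7, 0.9, 0.4]],
     [0.0, 0.0]),
]
print(classify([1.0, 0.0, 0.5, 0.2], network))  # prints 0 (category 0)
```

The point of the analogy is that the chip performs this same cascade of weighted sums all at once, as the light passes through, rather than stepping through the loops one clock tick at a time.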
“Our chip processes information via what we call ‘propagation computing’, meaning that unlike clock-based systems, computations occur as light propagates through the chip,” says Aflatouni. “We also skip the step of converting optical signals to electrical signals because our chip can read and process optical signals directly, and these two changes make our chip a much faster technology.”
The chip’s ability to directly process optical signals lends itself to another advantage.
“When today’s computer chips process electrical signals, they often pass them through a graphics processing unit, or GPU, which takes up space and power,” says Ashtiani. “Our chip doesn’t need to store the information, eliminating the need for a large memory unit.”
“And, by eliminating the memory unit that stores images, we also increase data privacy,” says Aflatouni. “With chips that read image data directly, there is no need for photo storage and therefore data leakage does not occur.”
A chip that reads information at the speed of light and offers a higher degree of cybersecurity would undoubtedly have an impact in many areas; this is one of the reasons why research into this technology has accelerated in recent years.
“We’re not the first to come up with technology that directly reads optical signals,” says Geers, “but we’re the first to create the complete system within a chip that is both compatible with existing technology and scalable to work with more complex data.”
The chip, with its deep-network design, requires training to learn and classify new data sets, the same way humans learn. When presented with a given set of data, the deep network collects the information and classifies it into previously learned categories. This training must strike a balance: specific enough to result in accurate image classifications, yet general enough to be useful when presented with new data sets. Engineers can “extend” the deep network by adding more neural layers, allowing the chip to read data in more complex images with higher resolution.
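To make that training balance concrete, here is a minimal sketch of how any learned classifier, photonic or electronic, is fitted to known examples and then checked against unseen data. The perceptron update rule and the 4-pixel “characters” below are hypothetical illustrations, not the researchers’ actual training procedure.

```python
# Hypothetical single-layer perceptron trained on two 4-pixel "characters".
def predict(weights, pixels):
    """Classify a 4-pixel input as category 0 or 1."""
    return 1 if sum(w * p for w, p in zip(weights, pixels)) > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Nudge the weights toward each labeled example (perceptron rule)."""
    weights = [0.0] * 4
    for _ in range(epochs):
        for pixels, label in samples:
            error = label - predict(weights, pixels)
            weights = [w + lr * error * p for w, p in zip(weights, pixels)]
    return weights

# Training set: two idealized "characters" with known labels.
samples = [([1, 0, 1, 0], 0), ([0, 1, 0, 1], 1)]
w = train(samples)

# Generalization check: a noisy version of the second character
# should still land in category 1.
print(predict(w, [0.1, 0.9, 0.0, 0.8]))  # prints 1
```

Training too tightly to the two idealized characters would fail on the noisy input; training too loosely would blur the two categories together, which is the specificity-versus-generality trade-off the paragraph describes.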
And, while this new chip advances current image sensing technology, it can be used for countless applications on a variety of data types.
“What’s really interesting about this technology is that it can do so much more than classify images,” says Aflatouni. “We already know how to convert many types of data into the electrical domain – images, audio, speech and many other types of data. Now we can convert different types of data into the optical domain and have them processed almost instantly using this technology.”
But what does it look like when information is processed at the speed of light?
“To understand how fast this chip can process information, think about a typical frame rate for movies,” he continues. “A movie usually plays between 24 and 120 frames per second. This chip will be able to process nearly 2 billion images per second! For problems that require light-speed calculations, we now have a solution, but many applications may not be imaginable at this time.”
With a technology that has many applications, it is important to understand its capabilities and limitations at more fundamental levels, and Aflatouni’s current and future plans for this research will do just that.
“Our next steps in this research will look at the scalability of the chip as well as work on three-dimensional object classification,” says Aflatouni. “So maybe we will venture into the realm of non-optical data classification. While image classification is one of the early research areas of this chip, I’m excited to see how it will be used, perhaps with digital platforms, to speed up different kinds of computations.”