Scientists at the University of Pennsylvania in the United States have developed a remarkable, scalable 9.3 square-millimetre microchip capable of both detecting and classifying an image in less than a nanosecond. In other words, it can process nearly two billion frames per second. It is both wonderful and frightening. The technology is so fast and efficient because it directly processes the light it receives from an “object”, thus avoiding the traditional computing need for a large separate memory unit. It also removes other time- and energy-consuming procedures and mechanisms, namely the need to convert optical signals to electrical pulses, the conversion of input data to binary format, and the limitations of clock-based calculations. The result is a massively increased processing speed.
News of the breakthrough was reported in Penn Today, the university’s daily online newspaper, and, in more detail, in Nature, the venerable and influential weekly scientific journal based in the UK. Founded in 1869, Nature presents peer-reviewed research across academic disciplines, primarily science and technology, and is one of the most highly cited scientific journals in the world.
The journal’s abstract on the development explains, “Deep neural networks with applications ranging from computer vision to medical diagnostics are typically implemented using clock-based processors in which the computational speed is mainly limited by clock frequency and memory access time. In the optical domain, despite advances in photonic computing, the lack of on-chip scalable optical nonlinearity and the loss of photonic devices limit the scalability of deep optical networks.
The abstract continues: “Here, we report an integrated end-to-end photonic deep neural network (PDNN) that performs sub-nanosecond image classification through the direct processing of optical waves striking the on-chip pixel array as they propagate through layers of neurons. In each neuron, the linear computation is performed optically and the nonlinear activation function is performed optoelectronically, allowing a classification time of less than 570 ps, which is comparable to a single clock cycle of advanced digital platforms.”
Additionally, “uniformly distributed supply light provides the same optical output range per neuron, allowing scalability to large-scale PDNNs. Two- and four-class classification of handwritten letters with accuracies greater than 93.8% and 89.8%, respectively, is demonstrated. Direct, clock-free processing of optical data eliminates analog-to-digital conversion and the need for a large memory module, enabling faster, more power-efficient neural networks for next-generation deep learning systems.”
The chip mimics the makeup of the human brain in some ways. Optical neurons are interconnected via waveguides and thus constitute a deep network of many “neural layers” through which data passes: as information makes its way through these layers, the network “sees” the input image and classifies it into a learned category.
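The layered structure just described can be sketched as a toy digital analogue. In the sketch below, each neuron sums weighted inputs (the computation the chip performs optically) and then applies a nonlinear activation (performed optoelectronically on the chip); the brightest output neuron names the category. The weights, the 2×2 “image”, and the ReLU-style activation are all invented for illustration and do not come from the actual device.

```python
# Toy digital analogue of the chip's "layers of neurons".
# All weights and inputs below are illustrative inventions.

def neuron(inputs, weights, bias):
    # Linear combination: done via optical interference on the chip
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    # Nonlinear activation: done optoelectronically on the chip
    return max(0.0, s)

def layer(inputs, weight_rows, biases):
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

def classify(pixels, layers, labels):
    x = pixels
    for weight_rows, biases in layers:
        x = layer(x, weight_rows, biases)
    # The strongest output "neuron" names the learned category
    return labels[x.index(max(x))]

# A tiny hand-made two-layer network separating two toy patterns
layers = [
    ([[1.0, -1.0, 0.5, 0.0],
      [-1.0, 1.0, 0.0, 0.5]], [0.0, 0.0]),
    ([[1.0, 0.0],
      [0.0, 1.0]], [0.0, 0.0]),
]
print(classify([1.0, 0.0, 1.0, 0.0], layers, ["p", "d"]))  # prints: p
```

On the real chip, of course, these sums happen as light waves interfere while propagating, which is why no clock, memory fetch, or analog-to-digital conversion is needed.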
The University of Pennsylvania research team is led by Firooz Aflatouni, an associate professor of electrical and systems engineering, along with postdoctoral researcher Farshid Ashtiani and graduate student Alexander Geers. The group tested the new photonic chip by classifying a set of 216 handwritten letters as “p” or “d”, while another set of 432 letters were classified as “p”, “d”, “a” or “t”.
Repeated experiments showed that the chip achieved 93.8% accuracy on the two-class task and 89.8% accuracy on the four-class task. Aflatouni told the publication IEEE Spectrum, “Computation by propagation, where computation takes place as the wave propagates through a medium, can perform computation at the speed of light.” And therein lies the promise of a not-too-distant future.
Of course, this is just the beginning and the experimental chip is a proof of concept (PoC) rather than anything close to a commercially available product. However, development work continues apace, and hopes are high that the technology will have far-reaching and game-changing effects over the next few years.
Unfortunately, given the potential power of this new iteration of AI, and the seemingly unstoppable trend of dictatorships in various parts of the world relying on the construction and imposition of technological dystopias, it is highly likely that some of these effects will be malignant: used for the control of individuals and the domination and repression of entire societies, rather than for the benefit of humanity.