LOS ANGELES, August 12, 2021 – A team of scientists from UCLA and the University of Houston (UH), led by Aydogan Ozcan in collaboration with Kirill Larin, used deep learning to train a neural network to quickly reconstruct OCT images using downsampled spectral data. Although the deep learning-based image reconstruction method received much less spectral data than standard image reconstruction methods, it was able to reconstruct high-quality images without any spatial artifacts.
When downsampled spectral data is used with standard image reconstruction methods, it usually results in severe spatial artifacts in the reconstructed images.
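Why naive downsampling degrades standard reconstructions can be seen in a minimal sketch. In swept-source OCT, each depth profile (A-line) is conventionally recovered by a Fourier transform of the spectral interferogram, so discarding every other spectral point halves the unambiguous depth range and aliases deep reflectors. The example below is a hypothetical illustration of that effect, not the authors' code; the 1280-point A-line size matches the article, while the reflector depth is an arbitrary choice.

```python
import numpy as np

n_k = 1280               # spectral points per A-line at full sampling
k = np.arange(n_k)       # wavenumber sample index (arbitrary units)

# Simulated interferogram from a single reflector at depth bin `z`.
# z is chosen deeper than n_k // 4, so 2x downsampling will alias it.
z = 400
fringe = np.cos(2 * np.pi * z * k / n_k)

def reconstruct(spectrum):
    """Standard OCT reconstruction: magnitude of the Fourier transform."""
    return np.abs(np.fft.rfft(spectrum))

full = reconstruct(fringe)        # 1280 spectral points
down = reconstruct(fringe[::2])   # 640 spectral points (2x downsampled)

print(np.argmax(full))   # 400 -- reflector appears at its true depth bin
print(np.argmax(down))   # 240 -- aliased to a wrong, shallower depth bin
```

With the full spectrum the reflector lands at its true depth; with naive 2x downsampling it folds back into the reduced depth range, which is exactly the kind of spatial artifact the trained network learns to suppress.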
To demonstrate the effectiveness of the deep learning-based framework for OCT imaging, the researchers trained and blindly tested a deep neural network using mouse embryo samples imaged by a swept-source OCT system. They also tested their approach on several types of human samples. A single image reconstruction network was trained across all of these tissue types, with one sample of each type reserved for blind testing. During the test phase, the network consistently produced high-quality image reconstructions.
Using 2x downsampled spectral data (640 spectral points per A-line), the trained neural network reconstructed 512 A-lines in 0.59 ms while running on multiple GPUs. The network removed the spatial artifacts caused by downsampling and the omission of spectral data points.
The trained neural network's reconstructions closely matched images of the same samples reconstructed using the full spectral OCT data (1280 spectral points per A-line).
Deep learning improves image reconstruction in OCT using much less spectral data. Courtesy of the Ozcan Laboratory at UCLA.
The team further showed that their approach could be extended to process 3x downsampled spectral data per A-line, with some degradation in reconstructed image quality compared to 2x downsampling. The researchers also demonstrated an optimized A-line downsampling method, created by jointly optimizing the spectral sampling locations and the corresponding image reconstruction network. This further improved overall imaging performance while using even fewer spectral data points per A-line.
As a framework, the deep learning-based image reconstruction method requires no hardware modification to the user's optical setup, and it can be integrated with existing OCT systems to speed up image acquisition. Although the researchers demonstrated their approach using a swept-source OCT system, they stated that their OCT image reconstruction framework can also be used in various spectral-domain OCT systems that acquire spectral interferometry data for 3D imaging of samples.
“These results highlight the transformative potential of this neural network-based OCT image reconstruction framework, which can be easily integrated with various spectral-domain OCT systems to improve their 3D imaging speed without sacrificing resolution or the signal-to-noise ratio of the reconstructed images,” Ozcan said.
The research was published in Light: Science & Applications (www.doi.org/10.1038/s41377-021-00594-7).