The method could help improve the color of electronic screens and create more natural LED lighting


If you’ve ever tried to capture a sunset with your smartphone, you know that colors don’t always match what you see in real life. Researchers are getting closer to solving this problem with a new set of algorithms that allow color to be recorded and displayed in digital images in a much more realistic way.

“When we see a beautiful scene, we want to record it and share it with others,” said Min Qiu, head of the Photonics and Instrumentation for Nanotechnology (PAINT) Laboratory at Westlake University in China. “But we don’t want to see a digital photo or video with the wrong colors. Our new algorithms can help developers of digital cameras and electronic displays better match their devices to our eyes.”

In Optica, The Optical Society’s (OSA) journal for high-impact research, Qiu and colleagues describe a novel approach to digitizing color. It can be applied to cameras and displays – including those used for computers, televisions and mobile devices – and used to fine-tune the color of LED lighting.

“Our new approach can improve currently commercially available displays or enhance the sense of reality for new technologies such as near-eye displays for virtual reality and augmented reality glasses,” said Jiyong Wang, member of the PAINT research team. “It can also be used to produce LED lighting for hospitals, tunnels, submarines and airplanes that precisely mimics natural sunlight. This can help regulate the circadian rhythm in people who lack exposure to sunlight, for example.”

Mixing digital colors

Digital colors, such as those on a television or smartphone screen, are typically created by combining red, green and blue (RGB), with each primary assigned a value. For example, an RGB value of (255, 0, 0) represents pure red. The RGB value reflects the relative mixing ratio of the three primary lights produced by an electronic device. However, not all devices produce these primary lights in the same way, which means that identical RGB coordinates can appear as different colors on different devices.

Other color spaces are also used to define colors, such as hue, saturation, value (HSV) or cyan, magenta, yellow and black (CMYK). To allow colors to be compared across color spaces, the International Commission on Illumination (CIE) has published standards that define the colors visible to humans based on the optical responses of our eyes. Applying these standards requires scientists and engineers to convert device color spaces such as RGB into CIE color spaces when designing and calibrating their electronic devices.
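To give a sense of what such a conversion involves, here is a minimal Python sketch of the standard sRGB-to-CIE-XYZ mapping (assuming a D65 white point) that display engineers commonly perform today. It is illustrative only and is not the new method described in the paper.

```python
# Minimal sketch: mapping device sRGB values into the device-independent CIE XYZ space.
# Uses the standard sRGB transfer function and the sRGB-to-XYZ matrix (D65 white point).

def srgb_to_linear(c):
    """Undo the sRGB gamma encoding for a channel value in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r8, g8, b8):
    """Convert 8-bit sRGB values (0-255) to CIE XYZ tristimulus values."""
    r, g, b = (srgb_to_linear(v / 255.0) for v in (r8, g8, b8))
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

print(srgb_to_xyz(255, 0, 0))  # pure sRGB red expressed in CIE XYZ
```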

In the new work, the researchers developed algorithms that correlate digital signals directly with colors in a standard CIE color space, making color space conversions unnecessary. Colors, as defined by the CIE standards, are created by additive color mixing. This process involves calculating the CIE values of the primary lights driven by digital signals and then mixing them to create the color. To encode colors according to the CIE standards, the algorithms convert the digital pulse signals for each primary into unique coordinates in the CIE color space. To decode colors, another algorithm extracts the digital signals required to reproduce a target color specified in the CIE color space.
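As a rough illustration of additive mixing in a CIE space (not the authors' actual algorithms), the sketch below encodes a set of relative drive signals into a mixture's CIE XYZ value and decodes a target XYZ value back into drive signals. The primary tristimulus values here are hypothetical example numbers.

```python
import numpy as np

# Illustrative only: additive mixing under CIE standards, not the paper's algorithms.
# Assume the CIE XYZ tristimulus values of each primary at full drive have been measured.
PRIMARIES_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],   # X contributions of the R, G, B primaries (example values)
    [0.2126, 0.7152, 0.0722],   # Y contributions
    [0.0193, 0.1192, 0.9505],   # Z contributions
])

def encode(drive):
    """Encode: mix the primaries at the given relative drive levels (0-1) into a CIE XYZ color."""
    return PRIMARIES_XYZ @ np.asarray(drive)

def decode(target_xyz):
    """Decode: recover the drive levels that reproduce a target CIE XYZ color."""
    return np.linalg.solve(PRIMARIES_XYZ, np.asarray(target_xyz))

xyz = encode([1.0, 0.5, 0.25])   # drive signals -> CIE XYZ of the mixture
print(decode(xyz))               # CIE XYZ -> drive signals (round trip)
```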

“Our new method maps digital signals directly to a CIE color space,” Wang said. “Since such a color space is not device dependent, the same values should be perceived as the same color even when different devices are used. Our algorithms also handle other important color properties, such as brightness and chromaticity, independently and accurately.”
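One standard way to treat brightness and chromaticity separately is the CIE xyY representation, in which Y carries luminance and (x, y) carry chromaticity. The sketch below is a general illustration of that separation rather than the paper's specific method.

```python
# Illustration of separating brightness from chromaticity, assuming CIE XYZ input.
# In CIE xyY, luminance (Y) can be scaled without changing the chromaticity (x, y).

def xyz_to_xyy(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s, Y          # (x, y) chromaticity, Y luminance

def xyy_to_xyz(x, y, Y):
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    return X, Y, Z

x, y, Y = xyz_to_xyy(0.4124, 0.2126, 0.0193)  # chromaticity of an sRGB-like red primary
brighter = xyy_to_xyz(x, y, Y * 2.0)          # double the brightness, same chromaticity
```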

Creating accurate colors

The researchers tested their new algorithms in lighting, display and sensing applications involving LEDs and lasers. Their results agreed very well with their expectations and calculations. For example, they showed that chromaticity, a measure of color independent of brightness, could be controlled with a deviation of only about 0.0001 for LEDs and 0.001 for lasers. These values are so small that most people would not be able to perceive any color difference.
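For a sense of scale, chromaticity deviation is commonly quantified as the distance between target and measured (x, y) chromaticity coordinates. The sketch below uses made-up numbers and is not drawn from the paper's data.

```python
import math

# One common way to quantify chromaticity deviation: Euclidean distance between
# the target and measured (x, y) chromaticity coordinates (example values only).

def chromaticity_deviation(target_xy, measured_xy):
    dx = measured_xy[0] - target_xy[0]
    dy = measured_xy[1] - target_xy[1]
    return math.hypot(dx, dy)

print(chromaticity_deviation((0.3127, 0.3290), (0.3128, 0.3291)))  # ~0.00014
```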

The researchers say the method is ready for application to commercially available LED lights and displays. However, achieving the ultimate goal of replicating exactly what we see with our eyes will require solving additional scientific and technical problems. For example, to record a scene as we see it, the color sensors of a digital camera would have to react to light in the same way as the photoreceptors of our eyes.

To continue their work, the researchers are using cutting-edge nanotechnology to improve the sensitivity of color sensors. This could be applied to machine vision technologies to help people with color blindness, for example.
