Remember Light? Light was a company with the wild idea of improving smartphone and other compact cameras by using lots of tiny camera lenses. Plenty of smartphones today have multiple cameras that function as a lens kit with different optical qualities, but Light combined several lenses into a single camera. Its biggest project was a collaboration with HMD for the five-camera Nokia 9, and it also made the L16 camera, a $2,000 point-and-shoot camera with 16 lenses.
Despite launching on a Nokia phone last year and lining up future agreements with Sony and Xiaomi, Light has quit the smartphone business. Android Authority checked in on the company and learned that Light is “no longer operating in the smartphone industry.” Sure enough, if you visit Light’s website, the company now seems focused on machine vision for self-driving cars and other robots. Most references to smartphone collaborations, like the dedicated page at light.co/smartphones, have been taken down.
Light’s technology on the Nokia 9 seemed interesting, but it was also a rather expensive, hardware-based solution that really just did image stacking, which you can do with a single camera and some fancy software. The Nokia 9 had five 12MP sensors that acted like a single camera. A single shutter press would capture five simultaneous pictures, which would then be blended together to form a single image. The sensors weren’t all the same, using a combination of RGB and monochrome cameras, allowing the phone to capture a wider range of light.
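As a rough illustration of why stacking works at all, here is a minimal sketch in Python with numpy. It is not Light's pipeline; it just simulates five noisy, pre-aligned captures of the same scene (the real phone also had to correct for the small parallax between its five lenses) and averages them, which cuts random sensor noise by roughly the square root of the frame count. All names and noise figures here are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth scene: luminance values in [0, 1].
scene = rng.random((64, 64))

def capture(scene, noise=0.1):
    """Simulate one noisy exposure of the scene (noise level is arbitrary)."""
    return np.clip(scene + rng.normal(0, noise, scene.shape), 0, 1)

# Five simultaneous captures, as on the Nokia 9 (assumed perfectly aligned).
frames = [capture(scene) for _ in range(5)]

# Stacking: averaging N frames reduces random noise by roughly sqrt(N).
stacked = np.mean(frames, axis=0)

err_single = np.abs(frames[0] - scene).mean()
err_stacked = np.abs(stacked - scene).mean()
```

With these numbers, the stacked frame ends up noticeably closer to the ground truth than any single exposure, which is the entire appeal of stacking, whether the frames come from five lenses at once or one lens fired five times.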
Using five cameras simultaneously required a lot of work from Light. While you can hook seven cameras up to the Nokia 9’s Snapdragon 845 SoC, the chip only has the throughput to use two at once. Firing five lenses at once meant Light had to build a special chip, the wonderfully named “Lux Capacitor,” which did the heavy lifting of handling many cameras simultaneously while sending a simplified output to the Snapdragon chip.
In addition to image stacking, Light’s camera technology was also very good at depth perception. On the Nokia 9, five cameras laid out in slightly different locations would pick up a ton of 3D information. The best competition at the time, the Pixel 3, could only compute two layers of depth from its single camera: the foreground and the background. The Nokia 9 could pick up 1,200 layers of depth, and Google even enhanced Google Photos’ GDepth photo format to support more depth layers specifically for the Nokia 9. With more depth layers, instead of a uniformly blurry background that provided a poor emulation of depth of field, the Nokia 9 could apply progressive amounts of background blur depending on the depth of the scene, just like a DSLR.
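The depth-layer idea can be sketched in a few lines of Python. This is a toy, not Light's algorithm: given an image and a per-pixel depth map, it blurs each pixel in proportion to its distance from the focal plane, which is what separates progressive, DSLR-like bokeh from the Pixel 3's uniform two-layer blur. The `fake_bokeh` and `box_blur` names, the tiny image, and the blur radii are all invented for the demo.

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur; radius 0 returns the image unchanged."""
    if radius == 0:
        return img.astype(float)
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def fake_bokeh(img, depth, focus_depth, max_radius=4):
    """Blur each pixel in proportion to its distance from the focal plane,
    quantized into discrete depth layers (the Nokia 9 reportedly had 1,200)."""
    result = np.zeros_like(img, dtype=float)
    radii = np.round(np.abs(depth - focus_depth) * max_radius).astype(int)
    for r in np.unique(radii):
        blurred = box_blur(img, int(r))
        result[radii == r] = blurred[radii == r]
    return result

# Toy scene: a horizontal gradient; left half near (depth 0), right half far (depth 1).
img = np.tile(np.linspace(0, 1, 16), (16, 1))
depth = np.zeros((16, 16))
depth[:, 8:] = 1.0

out = fake_bokeh(img, depth, focus_depth=0.0)  # focus on the near half
```

Pixels at the focal depth are left sharp while distant pixels get the maximum blur; with 1,200 depth layers instead of two, the radius can ramp up smoothly through the scene.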
The problem with Light’s approach is that it asked for a 5x increase in camera cost and complexity without offering many benefits. Smartphones already do “temporal” image stacking with a single lens—you might tap the shutter button once, but under the hood, multiple photos are taken, one after another, and stitched together. Some correction is needed for moving objects and shaky human hands, but it’s nothing that software can’t handle. Temporal image stacking negated the biggest benefit of Light’s shipping products, and while the company might have done more in the future (even with five cameras, the Nokia 9 also did temporal image stacking), it didn’t produce results. The Nokia 9’s image quality didn’t blow anyone away, and the depth effects didn’t move the needle enough.
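The “correction for shaky human hands” part of temporal stacking can also be sketched briefly. This toy simulates a handheld burst where each frame is shifted a couple of pixels, then brute-force searches integer offsets to realign every frame to the first before averaging; naively averaging the misaligned frames produces the ghosting that alignment exists to prevent. Everything here (`handheld_burst`, the shift ranges, the noise level) is an assumption for the demo, and real pipelines use far more sophisticated sub-pixel registration.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random((32, 32))

def handheld_burst(scene, n=5, noise=0.05):
    """Simulate a burst: each frame is the scene shifted a little (hand
    shake) plus sensor noise. Shifts wrap around for simplicity."""
    frames = []
    for _ in range(n):
        dy, dx = rng.integers(-2, 3, size=2)
        frame = np.roll(scene, (dy, dx), axis=(0, 1))
        frames.append(np.clip(frame + rng.normal(0, noise, scene.shape), 0, 1))
    return frames

def align_to_first(frames):
    """Brute-force integer alignment of every frame against the first."""
    ref = frames[0]
    aligned = [ref]
    for f in frames[1:]:
        best = min(
            ((dy, dx) for dy in range(-4, 5) for dx in range(-4, 5)),
            key=lambda s: np.abs(np.roll(f, (-s[0], -s[1]), axis=(0, 1)) - ref).sum(),
        )
        aligned.append(np.roll(f, (-best[0], -best[1]), axis=(0, 1)))
    return aligned

frames = handheld_burst(scene)
stacked = np.mean(align_to_first(frames), axis=0)
naive = np.mean(frames, axis=0)  # ghosting: averaging misaligned frames

err_aligned = np.abs(stacked - frames[0]).mean()
err_naive = np.abs(naive - frames[0]).mean()
```

The aligned stack stays close to the first frame's framing while still averaging away noise; the naive stack smears detail across every shift in the burst. That this is solvable entirely in software with one lens is exactly why Light's five-lens hardware was a hard sell.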
https://arstechnica.com/?p=1683529