New Camera System Focuses on Everything, Everywhere, All At Once
A team of researchers at Carnegie Mellon University in Pittsburgh, Pennsylvania, has developed a camera system with a specialized lens that can focus individual pixels to different depths, producing images that are sharp across the entire frame. The work on spatially varying autofocus was presented at the International Conference on Computer Vision (ICCV) 2025 in Honolulu and targets applications where uniform focus across complex scenes matters, such as surveillance, machine vision, and microscopy.
Unlike typical camera autofocus, which sets a single focus plane for the entire sensor, this system creates a freeform depth of field by allowing different pixel regions to be focused to different distances. That lets the focal plane conform to irregular or highly varied scene geometry, so parts of the image at many depths can all be brought into focus.
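To make that idea concrete, here is a minimal Python sketch, not the authors' code, of what a freeform depth of field amounts to: each pixel region gets its own focus distance taken from a depth map. The function name, tile size, and example scene are all hypothetical.

```python
# Minimal sketch (illustrative, not the researchers' implementation):
# assign each pixel region its own focus distance from a depth map.
import numpy as np

def per_pixel_focus(depth_map: np.ndarray, tile: int = 16) -> np.ndarray:
    """Quantize a depth map into tiles and return one focus distance per tile.

    depth_map : per-pixel scene distance in meters (H x W)
    tile      : side length of each independently focused pixel region
    """
    h, w = depth_map.shape
    focus = np.zeros((h // tile, w // tile))
    for i in range(h // tile):
        for j in range(w // tile):
            block = depth_map[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            # Focus each region at its median depth, so the focal surface
            # conforms to the local scene geometry.
            focus[i, j] = np.median(block)
    return focus

# Hypothetical scene: a near object (1 m) in front of a far wall (5 m).
depth = np.full((64, 64), 5.0)
depth[16:48, 16:48] = 1.0
print(per_pixel_focus(depth))  # near tiles -> 1.0 m, far tiles -> 5.0 m
```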
Photographers have several ways to increase how much of an image appears sharp, but each has trade-offs. Focus stacking requires multiple shots and fails with moving subjects. Stopping down the aperture increases depth of field but can reduce resolution due to diffraction. Light field cameras capture depth information but sacrifice spatial resolution. The Carnegie Mellon approach needs only a single preparatory image to estimate scene geometry and then produces an all-in-focus image in the subsequent capture, making it suitable for dynamic scenes.
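The aperture trade-off is easy to quantify with standard thin-lens approximations. The sketch below uses the common approximation DoF ≈ 2·N·c·s²/f² (valid well inside the hyperfocal distance) and the Airy-disk diameter 2.44·λ·N as a proxy for diffraction blur; the numbers are illustrative and not taken from the paper.

```python
# Back-of-the-envelope look at the stopping-down trade-off:
# depth of field grows linearly with f-number N, but so does the
# diffraction-limited Airy disk. Illustrative values only.
f = 0.050       # focal length: 50 mm
s = 2.0         # subject distance: 2 m
c = 30e-6       # circle of confusion: 30 um (a common full-frame value)
lam = 550e-9    # wavelength of green light

for N in (2.8, 5.6, 11, 22):
    dof_mm = 2 * N * c * s**2 / f**2 * 1000   # approximate total DoF
    airy_um = 2.44 * lam * N * 1e6            # Airy-disk diameter
    print(f"f/{N:>4}: DoF ~ {dof_mm:6.0f} mm, Airy disk ~ {airy_um:4.1f} um")
```

At f/22 the Airy disk approaches 30 µm, comparable to the circle of confusion itself, which is why small apertures trade away fine detail for depth of field.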
The system is notable for relying on all-optical processes rather than heavy computational post-processing. It uses an optical arrangement based on a Lohmann (Alvarez) lens and a phase-only spatial light modulator (SLM) so that each pixel can be focused at a different depth. The researchers extended traditional autofocusing methods to iteratively estimate a depth map using contrast and disparity cues, allowing the camera to progressively shape its depth of field to match the scene.
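The paper's exact procedure isn't reproduced here, but a generic contrast-detect version of that idea looks like the sketch below: sweep the focus setting, score local sharpness per tile with the variance-of-Laplacian metric, and keep the best-scoring setting per tile as a coarse depth map. `capture_at` is a hypothetical stand-in for the camera interface.

```python
# Illustrative contrast-based depth estimation (not the published algorithm).
import numpy as np
from scipy.ndimage import laplace

def depth_from_contrast(capture_at, focus_settings, tile=32):
    best_score = None
    best_focus = None
    for f in focus_settings:
        img = capture_at(f).astype(float)   # H x W grayscale frame
        sharp = laplace(img) ** 2           # local contrast energy
        h, w = sharp.shape
        # Average the sharpness score within each tile.
        score = sharp[:h - h % tile, :w - w % tile] \
            .reshape(h // tile, tile, w // tile, tile).mean(axis=(1, 3))
        if best_score is None:
            best_score = score
            best_focus = np.full_like(score, f)
        else:
            better = score > best_score
            best_score = np.where(better, score, best_score)
            best_focus = np.where(better, f, best_focus)
    return best_focus  # per-tile best focus setting ~ coarse depth map
```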
A conventional Lohmann lens is focus-tunable: sliding its two cubic lens elements relative to each other changes focus, but uniformly across the whole image. Building on that concept, the team developed a computational variant called the Split-Lohmann lens, whose focal length can vary across the frame. In the prototype, a phase-only SLM at the center of the optical path applies precisely controlled tilts to different pixel regions, directing light from different scene distances into focus on the corresponding sensor pixels and enabling independent focusing across the frame.
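The displacement-to-focal-power relationship behind the Lohmann design is simple to verify numerically. In the sketch below (illustrative constants, not from the paper), two opposite-signed cubic profiles t(x, y) = ±a(x³/3 + xy²) are displaced by ±d along x; their sum is, to leading order, the quadratic profile 2ad(x² + y²) of a simple lens, so the optical power scales linearly with d.

```python
# Numerical check: shifting a cubic phase-plate pair produces a quadratic
# (lens-like) phase whose strength grows linearly with the shift d.
import numpy as np

a = 100.0                            # cubic strength (arbitrary units)
x = np.linspace(-5e-3, 5e-3, 201)    # 10 mm aperture
X, Y = np.meshgrid(x, x)

def cubic(X, Y):
    return a * (X**3 / 3 + X * Y**2)

for d in (0.5e-3, 1e-3, 2e-3):
    combined = cubic(X + d, Y) - cubic(X - d, Y)   # shifted plate pair
    # Fit along the y = 0 row; the x^2 coefficient should equal 2*a*d.
    coef = np.polyfit(x, combined[100, :], 3)[1]
    print(f"d = {d*1e3:.1f} mm -> quadratic coefficient {coef:.3f} "
          f"(theory: {2*a*d:.3f})")
```

In the Split-Lohmann arrangement, the SLM's per-region tilt effectively supplies a different local displacement for each pixel region, which is what makes the focal power spatially varying rather than global.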
The prototype uses a Canon EOS R10 body equipped with a Dual Pixel CMOS sensor. Canon’s Dual Pixel approach places two photodiodes at each pixel so that phase-detect autofocus and image capture can occur simultaneously at the pixel level, which provides fast, accurate focusing. The researchers combine that capability with contrast-detect autofocus (CDAF) signals to estimate depth and set per-pixel focus.
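As an illustration of the phase-detect principle, and not Canon's implementation, the left and right photodiode sub-images are shifted copies of one another, with a shift (disparity) that grows with defocus. A per-tile disparity estimate can be obtained with a simple cross-correlation search, as in this hypothetical sketch, where `left` and `right` stand in for the two sub-aperture images.

```python
# Illustrative dual-pixel disparity estimation via 1-D correlation search.
import numpy as np

def tile_disparity(left, right, tile=32, max_shift=4):
    h, w = left.shape
    disp = np.zeros((h // tile, w // tile))
    for i in range(h // tile):
        for j in range(w // tile):
            L = left[i*tile:(i+1)*tile, j*tile:(j+1)*tile].astype(float)
            R = right[i*tile:(i+1)*tile, j*tile:(j+1)*tile].astype(float)
            L = L - L.mean()   # zero-mean for a fairer correlation score
            R = R - R.mean()
            scores = [np.sum(L[:, max_shift:-max_shift] *
                             np.roll(R, s, axis=1)[:, max_shift:-max_shift])
                      for s in range(-max_shift, max_shift + 1)]
            disp[i, j] = np.argmax(scores) - max_shift   # pixels of shift
    return disp  # sign and magnitude indicate focus direction and error
```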
By obtaining the all-in-focus image optically, the system aims to preserve spatial resolution while bringing an entire scene into focus at once. The researchers say this addresses limitations of prior methods that either require multiple exposures or sacrifice detail.
The work has potential implications for photographers and imaging professionals who need reliable focus across irregular subject geometry or in situations with movement. It could simplify workflow in macro, industrial, and scientific imaging where maintaining sharpness across a scene is essential.
Image credits: The research team. The reference paper is titled "Spatially-Varying Autofocus."
Source: PetaPixel