TY - JOUR
T1 - Neural Sensors: Learning Pixel Exposures for HDR Imaging and Video Compressive Sensing with Programmable Sensors
AU - Martel, Julien N.P.
AU - Muller, Lorenz K.
AU - Carey, Stephen J.
AU - Dudek, Piotr
AU - Wetzstein, Gordon
N1 - KAUST Repository Item: Exported on 2022-06-14
Acknowledgements: The authors would like to thank Greg Zaal for access to his repository of HDR images (http://hdrihaven.com), which was used in their experiments. The work of J.N.P.M. was supported by a Swiss National Science Foundation (SNF) Fellowship (P2EZP2_181817); the work of G.W. was supported by an NSF CAREER Award (IIS 1553333), a Sloan Fellowship, the KAUST Office of Sponsored Research through the Visual Computing Center CCF grant, and a PECASE by the ARL.
This publication acknowledges KAUST support, but has no KAUST-affiliated authors.
PY - 2020/4/13
Y1 - 2020/4/13
AB - Camera sensors rely on global or rolling shutter functions to expose an image. This fixed-function approach severely limits the sensors' ability to capture high-dynamic-range (HDR) scenes and resolve high-speed dynamics. Spatially varying pixel exposures have been introduced as a powerful computational photography approach to optically encode irradiance on a sensor and computationally recover additional information about a scene, but existing approaches rely on heuristic coding schemes and bulky spatial light modulators to optically implement these exposure functions. Here, we introduce neural sensors as a methodology to optimize per-pixel shutter functions jointly with a differentiable image processing method, such as a neural network, in an end-to-end fashion. Moreover, we demonstrate how to leverage emerging programmable and re-configurable sensor-processors to implement the optimized exposure functions directly on the sensor. Our system takes specific limitations of the sensor into account to optimize physically feasible optical codes, and we evaluate its performance for snapshot HDR and high-speed compressive imaging both in simulation and experimentally with real scenes.
UR - http://hdl.handle.net/10754/678963
UR - https://ieeexplore.ieee.org/document/9064896/
UR - http://www.scopus.com/inward/record.url?scp=85086051609&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2020.2986944
DO - 10.1109/TPAMI.2020.2986944
M3 - Article
SN - 1939-3539
VL - 42
SP - 1642
EP - 1653
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 7
ER -