TY - JOUR
T1 - Transfer Deep Learning for Reconfigurable Snapshot HDR Imaging Using Coded Masks
AU - Alghamdi, Masheal M.
AU - Fu, Qiang
AU - Thabet, Ali Kassem
AU - Heidrich, Wolfgang
N1 - KAUST Repository Item: Exported on 2021-03-30
Acknowledgements: The authors are grateful to the best paper committee of VMV 2019 for recommending the original paper [AFTH19] to Computer Graphics Forum and providing us with an opportunity to present this extended work. This work was supported by King Abdullah University of Science and Technology as part of VCC center baseline funding. Masheal Alghamdi is supported by King Abdulaziz City for Science and Technology scholarship.
PY - 2021/3/11
Y1 - 2021/3/11
N2 - High dynamic range (HDR) image acquisition from a single image capture, also known as snapshot HDR imaging, is challenging because the bit depths of camera sensors are far from sufficient to cover the full dynamic range of the scene. Existing HDR techniques focus either on algorithmic reconstruction or hardware modification to extend the dynamic range. In this paper we propose a joint design for snapshot HDR imaging by devising a spatially varying modulation mask in the hardware and building a deep learning algorithm to reconstruct the HDR image. We leverage transfer learning to overcome the lack of sufficiently large HDR datasets. We show how transferring from a different large-scale task (image classification on ImageNet) leads to considerable improvements in HDR reconstruction. We achieve a reconfigurable HDR camera design that does not require custom sensors, and instead can be reconfigured between HDR and conventional mode with very simple calibration steps. We demonstrate that the proposed hardware–software solution offers a flexible yet robust way to modulate per-pixel exposures, and the network requires little knowledge of the hardware to faithfully reconstruct the HDR image. Comparison results show that our method outperforms the state of the art in terms of visual perception quality.
AB - High dynamic range (HDR) image acquisition from a single image capture, also known as snapshot HDR imaging, is challenging because the bit depths of camera sensors are far from sufficient to cover the full dynamic range of the scene. Existing HDR techniques focus either on algorithmic reconstruction or hardware modification to extend the dynamic range. In this paper we propose a joint design for snapshot HDR imaging by devising a spatially varying modulation mask in the hardware and building a deep learning algorithm to reconstruct the HDR image. We leverage transfer learning to overcome the lack of sufficiently large HDR datasets. We show how transferring from a different large-scale task (image classification on ImageNet) leads to considerable improvements in HDR reconstruction. We achieve a reconfigurable HDR camera design that does not require custom sensors, and instead can be reconfigured between HDR and conventional mode with very simple calibration steps. We demonstrate that the proposed hardware–software solution offers a flexible yet robust way to modulate per-pixel exposures, and the network requires little knowledge of the hardware to faithfully reconstruct the HDR image. Comparison results show that our method outperforms the state of the art in terms of visual perception quality.
UR - http://hdl.handle.net/10754/668354
UR - https://onlinelibrary.wiley.com/doi/10.1111/cgf.14205
UR - http://www.scopus.com/inward/record.url?scp=85102621881&partnerID=8YFLogxK
U2 - 10.1111/cgf.14205
DO - 10.1111/cgf.14205
M3 - Article
SN - 0167-7055
JO - Computer Graphics Forum
JF - Computer Graphics Forum
ER -