Mask-ToF: Learning Microlens Masks for Flying Pixel Correction in Time-of-Flight Imaging

Ilya Chugunov, Seung-Hwan Baek, Qiang Fu, Wolfgang Heidrich, Felix Heide

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

We introduce Mask-ToF, a method to reduce flying pixels (FP) in time-of-flight (ToF) depth captures. FPs are pervasive artifacts which occur around depth edges, where light paths from both an object and its background are integrated over the aperture. This light mixes at a sensor pixel to produce erroneous depth estimates, which can adversely affect downstream 3D vision tasks. Mask-ToF starts at the source of these FPs, learning a microlens-level occlusion mask which effectively creates a custom-shaped sub-aperture for each sensor pixel. This modulates the selection of foreground and background light mixtures on a per-pixel basis and thereby encodes scene geometric information directly into the ToF measurements. We develop a differentiable ToF simulator to jointly train a convolutional neural network to decode this information and produce high-fidelity, low-FP depth reconstructions. We test the effectiveness of Mask-ToF on a simulated light field dataset and validate the method with an experimental prototype. To this end, we manufacture the learned amplitude mask and design an optical relay system to virtually place it on a high-resolution ToF sensor. We find that Mask-ToF generalizes well to real data without retraining, cutting FP counts in half.
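The abstract describes a joint optimization: a learnable microlens-level amplitude mask shapes which sub-aperture light mixtures reach each pixel, a differentiable ToF simulator renders the resulting measurements, and a CNN decodes them into depth. The sketch below is a minimal, hypothetical illustration of that training loop; the toy forward model, network, shapes, and names (e.g. `DifferentiableToF`, `DepthDecoder`, `MOD_FREQ`, `N_VIEWS`) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: jointly optimize a per-sub-aperture amplitude mask and a
# CNN depth decoder through a toy differentiable correlation-ToF forward model.
import math
import torch
import torch.nn as nn

MOD_FREQ = 30e6   # assumed ToF modulation frequency (Hz)
C = 3e8           # speed of light (m/s)
N_VIEWS = 9       # assumed number of sub-aperture views contributing per pixel

class DifferentiableToF(nn.Module):
    """Toy ToF model: mask-weighted mixture of per-view depth phasors."""
    def __init__(self, n_views=N_VIEWS):
        super().__init__()
        # Unconstrained logits; sigmoid keeps mask transmittance in [0, 1].
        self.mask_logits = nn.Parameter(torch.zeros(n_views))

    def forward(self, depth_views):
        # depth_views: (B, n_views, H, W) per-sub-aperture depth maps (meters).
        mask = torch.sigmoid(self.mask_logits)                 # (n_views,)
        phase = 4 * math.pi * MOD_FREQ * depth_views / C       # depth -> phase
        weights = mask.view(1, -1, 1, 1)
        # Mask-weighted averaging of complex phasors over the sub-aperture,
        # which is where foreground/background light mixing occurs.
        re = (weights * torch.cos(phase)).sum(dim=1) / weights.sum()
        im = (weights * torch.sin(phase)).sum(dim=1) / weights.sum()
        return torch.stack([re, im], dim=1)                    # (B, 2, H, W)

class DepthDecoder(nn.Module):
    """Small CNN that maps mixed ToF correlations back to a depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

simulator, decoder = DifferentiableToF(), DepthDecoder()
optim = torch.optim.Adam(
    list(simulator.parameters()) + list(decoder.parameters()), lr=1e-3
)

# One toy training step on random per-view depths (stand-in for light field data).
depth_views = 1.0 + 4.0 * torch.rand(4, N_VIEWS, 64, 64)
target = depth_views.mean(dim=1, keepdim=True)   # toy ground-truth depth
loss = nn.functional.l1_loss(decoder(simulator(depth_views)), target)
optim.zero_grad(); loss.backward(); optim.step()
print(f"loss: {loss.item():.4f}")
```

Because the forward model is differentiable in the mask logits, gradients from the depth reconstruction loss flow back into the mask itself, which is how a fixed, manufacturable amplitude pattern can be learned end to end alongside the decoder.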
Original language: English (US)
Title of host publication: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher: IEEE
Pages: 9112-9122
Number of pages: 11
ISBN (Print): 9781665445092
DOIs
State: Published - Jun 2021
