DS-MVSNet: Unsupervised Multi-view Stereo via Depth Synthesis

Jingliang Li, Zhengda Lu, Yiqun Wang, Ying Wang, Jun Xiao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Scopus citations

Abstract

In recent years, learning-based MVS methods, both supervised and unsupervised, have achieved excellent performance compared with traditional methods. However, these methods use only the probability volume produced by cost volume regularization to predict reference depths, and this manner cannot mine enough information from the probability volume. Furthermore, unsupervised methods typically rely on two-step training or additional inputs, which complicates the training procedure. In this paper, we propose DS-MVSNet, an end-to-end unsupervised MVS architecture with source depth synthesis. To mine the information in the probability volume, we synthesize source depths by splatting the probability volume and depth hypotheses to the source views. Meanwhile, we propose adaptive Gaussian sampling and an improved adaptive bins sampling approach that improve the accuracy of the depth hypotheses. In addition, we utilize the source depths to render the reference images and propose a depth consistency loss and a depth smoothness loss. These provide additional guidance based on photometric and geometric consistency across views, without requiring additional inputs. Finally, we conduct a series of experiments on the DTU and Tanks & Temples datasets that demonstrate the efficiency and robustness of DS-MVSNet compared with state-of-the-art methods.
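The abstract contrasts its depth synthesis with the standard step that prior methods stop at: regressing a reference depth map from the probability volume alone. A minimal sketch of that baseline step, assuming a probability volume of shape (D, H, W) normalized over D depth hypotheses (not the authors' code, which additionally splats this volume to source views):

```python
import numpy as np

def regress_depth(prob_volume: np.ndarray, depth_hypotheses: np.ndarray) -> np.ndarray:
    """Per-pixel expected depth from a cost-volume-regularized probability volume.

    prob_volume: (D, H, W), softmax-normalized over axis 0.
    depth_hypotheses: (D,) candidate depth values d_k.
    Returns: (H, W) depth map, depth(x) = sum_k P_k(x) * d_k.
    """
    # Contract the hypothesis axis: weighted sum of depth candidates per pixel.
    return np.tensordot(depth_hypotheses, prob_volume, axes=(0, 0))

# Toy example: 4 hypotheses over a 2x2 image, all probability mass on d_2 = 3.0.
D, H, W = 4, 2, 2
prob = np.zeros((D, H, W))
prob[2] = 1.0
hyps = np.array([1.0, 2.0, 3.0, 4.0])
depth = regress_depth(prob, hyps)  # every pixel regresses to 3.0
```

DS-MVSNet's point is that this expectation discards the rest of the per-pixel distribution; its splatting step instead reuses the full volume to synthesize depths in the source views.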
Original language: English (US)
Title of host publication: Proceedings of the 30th ACM International Conference on Multimedia
Publisher: ACM
State: Published - Oct 10 2022
