SCTN: Sparse Convolution-Transformer Network for Scene Flow Estimation

Bing Li, Cheng Zheng, Silvio Giancola, Bernard Ghanem

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

We propose a novel scene flow estimation approach to capture and infer 3D motions from point clouds. Estimating 3D motions from point clouds is challenging, since a point cloud is unordered and its density is significantly non-uniform. Such unstructured data makes it difficult to match corresponding points between point clouds, leading to inaccurate flow estimation. We propose a novel architecture named Sparse Convolution-Transformer Network (SCTN) that combines sparse convolution with a transformer. Specifically, by leveraging sparse convolution, SCTN converts the irregular point cloud into locally consistent flow features for estimating spatially consistent motions within an object or local object part. Unlike existing methods, we further propose to explicitly learn point relations using a point transformer module. We show that the learned relation-based contextual information is rich and helpful for matching corresponding points, benefiting scene flow estimation. In addition, a novel loss function is proposed to adaptively encourage flow consistency according to feature similarity. Extensive experiments demonstrate that our proposed approach achieves a new state of the art in scene flow estimation: it reaches an EPE3D error of 0.038 on FlyingThings3D and 0.037 on KITTI Scene Flow, outperforming previous methods by large margins.
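The adaptively weighted consistency loss lends itself to a compact illustration. The following is a minimal, hypothetical PyTorch sketch of the general idea (penalize flow differences between neighboring points more strongly when their features are similar), not the paper's actual formulation; the function name `consistency_loss`, the k-nearest-neighbor graph, the cosine similarity measure, and the neighborhood size `k` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_loss(flows, feats, coords, k=8):
    """Hypothetical sketch of a feature-similarity-weighted flow consistency loss.

    flows:  (N, 3) predicted per-point scene flow
    feats:  (N, C) per-point features
    coords: (N, 3) point coordinates
    """
    # Brute-force k-nearest neighbors in Euclidean space (for clarity only).
    dist = torch.cdist(coords, coords)                          # (N, N)
    knn_idx = dist.topk(k + 1, largest=False).indices[:, 1:]    # drop self-match

    neigh_flows = flows[knn_idx]                                # (N, k, 3)
    neigh_feats = feats[knn_idx]                                # (N, k, C)

    # Cosine similarity between each point and its neighbors, clamped to [0, 1],
    # so dissimilar points impose no consistency constraint.
    sim = F.cosine_similarity(feats.unsqueeze(1), neigh_feats, dim=-1)
    weight = sim.clamp(min=0)                                   # (N, k)

    # Flow difference to each neighbor, weighted by feature similarity.
    flow_diff = (flows.unsqueeze(1) - neigh_flows).norm(dim=-1) # (N, k)
    return (weight * flow_diff).mean()
```

The intuition behind weighting by feature similarity is that points with similar features likely belong to the same object or rigid part, so their motions should agree, while points with dissimilar features (e.g., across object boundaries) are left free to move independently.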
Original language: English (US)
Title of host publication: Proceedings of the AAAI Conference on Artificial Intelligence
Publisher: Association for the Advancement of Artificial Intelligence (AAAI)
Pages: 1254-1262
Number of pages: 9
DOIs
State: Published - Jun 28 2022
