TY - JOUR
T1 - Graph Neural Network and Spatiotemporal Transformer Attention for 3D Video Object Detection from Point Clouds
AU - Yin, Junbo
AU - Shen, Jianbing
AU - Gao, Xin
AU - Crandall, David
AU - Yang, Ruigang
N1 - KAUST Repository Item: Exported on 2021-11-11
PY - 2021
Y1 - 2021
AB - Previous works on LiDAR-based 3D object detection have mainly focused on the single-frame paradigm. In this paper, we propose to detect 3D objects by exploiting temporal information across multiple frames, i.e., point cloud videos. We empirically categorize the temporal information into short-term and long-term patterns. To encode the short-term data, we present a Grid Message Passing Network (GMPNet), which treats each grid (i.e., a group of points) as a node and constructs a k-NN graph with its neighboring grids. To update the features of a grid, GMPNet iteratively collects information from its neighbors, thus mining motion cues from grids in nearby frames. To further aggregate the long-term frames, we propose an Attentive Spatiotemporal Transformer GRU (AST-GRU), which contains a Spatial Transformer Attention (STA) module and a Temporal Transformer Attention (TTA) module. STA and TTA enhance the vanilla GRU to focus on small objects and to better align moving objects. Our overall framework supports both online and offline video object detection in point clouds. Evaluation on the challenging nuScenes benchmark shows the superior performance of our method, which ranked 1st on the leaderboard, without bells and whistles, at the time of submission.
UR - http://hdl.handle.net/10754/673273
UR - https://ieeexplore.ieee.org/document/9609569/
DO - 10.1109/TPAMI.2021.3125981
M3 - Article
C2 - 34752380
SN - 1939-3539
SP - 1
EP - 1
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
ER -