TY - CPAPER
T1 - ETAD: Training Action Detection End to End on a Laptop
AU - Liu, Shuming
AU - Xu, Mengmeng
AU - Zhao, Chen
AU - Zhao, Xu
AU - Ghanem, Bernard
PY - 2023/8/14
Y1 - 2023/8/14
N2 - End-to-end training for temporal action detection (TAD) often suffers from the huge demand for computing resources caused by long video durations. In this work, we propose an efficient temporal action detector (ETAD) that can be trained directly from video frames with extremely low GPU memory consumption. Our main idea is to minimize and balance the heavy computation over features and gradients in each training iteration. We sequentially forward snippet frames through the video encoder and backpropagate only a small, necessary portion of the gradients to update the encoder. To further alleviate computational redundancy, we dynamically sample only a small subset of proposals during training. Various sampling strategies and ratios are studied for both the encoder and the detector. ETAD achieves state-of-the-art performance on TAD benchmarks with remarkable efficiency. On ActivityNet-1.3, ETAD reaches 38.25% average mAP after 18 hours of end-to-end training, using only 1.3 GB of memory per video.
UR - http://hdl.handle.net/10754/694596
UR - https://ieeexplore.ieee.org/document/10208823/
UR - http://www.scopus.com/inward/record.url?scp=85170823216&partnerID=8YFLogxK
U2 - 10.1109/CVPRW59228.2023.00476
DO - 10.1109/CVPRW59228.2023.00476
M3 - Conference contribution
SN - 9798350302493
SP - 4525
EP - 4534
BT - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
PB - IEEE
ER -