TY - GEN
T1 - A Context-Aware Loss Function for Action Spotting in Soccer Videos
AU - Cioppa, Anthony
AU - Deliège, Adrien
AU - Giancola, Silvio
AU - Ghanem, Bernard
AU  - Van Droogenbroeck, Marc
AU - Gade, Rikke
AU - Moeslund, Thomas B.
N1 - KAUST Repository Item: Exported on 2021-03-26
Acknowledged KAUST grant number(s): OSR-CRG2017-3405
Acknowledgements: This work is supported by the DeepSport project of the Walloon region and the FRIA (Belgium), as well as the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No. OSR-CRG2017-3405.
PY - 2020
Y1 - 2020
AB - In video understanding, action spotting consists in temporally localizing human-induced events annotated with single timestamps. In this paper, we propose a novel loss function that specifically considers the temporal context naturally present around each action, rather than focusing on the single annotated frame to spot. We benchmark our loss on a large dataset of soccer videos, SoccerNet, and achieve an improvement of 12.8% over the baseline. We show the generalization capability of our loss for generic activity proposals and detection on ActivityNet, by spotting the beginning and the end of each activity. Furthermore, we provide an extended ablation study and display challenging cases for action spotting in soccer videos. Finally, we qualitatively illustrate how our loss induces a precise temporal understanding of actions and show how such semantic knowledge can be used for automatic highlights generation.
UR - http://hdl.handle.net/10754/660728
UR - https://ieeexplore.ieee.org/document/9156359/
UR - http://www.scopus.com/inward/record.url?scp=85094809735&partnerID=8YFLogxK
U2 - 10.1109/CVPR42600.2020.01314
DO - 10.1109/CVPR42600.2020.01314
M3 - Conference contribution
SN - 978-1-7281-7169-2
SP - 13123
EP - 13133
BT - 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
PB - IEEE
ER -