Interpretable Self-Attention Temporal Reasoning for Driving Behavior Understanding

Yi-Chieh Liu, Yung-An Hsieh, Min-Hung Chen, C.-H. Huck Yang, Jesper Tegner, Y.-C. James Tsai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

16 Scopus citations

Abstract

Performing driving behaviors based on causal reasoning is essential to ensure driving safety. In this work, we investigated how state-of-the-art 3D Convolutional Neural Networks (CNNs) perform when classifying driving behaviors based on causal reasoning. We proposed a perturbation-based visual explanation method to inspect the models' performance visually. By examining the video attention saliency, we found that existing models could not precisely capture the causes (e.g., a traffic light) of a specific action (e.g., stopping). We therefore proposed the Temporal Reasoning Block (TRB) and introduced it into the models. With the TRB models, we achieved an accuracy of 86.3%, outperforming the state-of-the-art 3D CNNs from previous work. The attention saliency also demonstrated that the TRB helped models focus on the causes more precisely. Based on both numerical and visual evaluations, we concluded that our proposed TRB models provide accurate driving behavior prediction by learning the causal reasoning behind the behaviors.
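The temporal reasoning idea in the abstract can be sketched as self-attention applied over the time axis of per-frame CNN features, with the attention map doubling as a temporal saliency signal that can be inspected. The sketch below is a minimal, hypothetical illustration of that mechanism in NumPy; the function name `temporal_self_attention`, the projection matrices, and the shapes are assumptions for illustration, not the paper's actual TRB implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_self_attention(frame_feats, w_q, w_k, w_v):
    """Scaled dot-product self-attention over the temporal axis.

    frame_feats: (T, D) array of per-frame CNN features.
    w_q, w_k, w_v: (D, D) learned projection matrices (random here).
    Returns the attended features (T, D) with a residual connection,
    plus the (T, T) attention map, which can be read as a temporal
    saliency signal over the frames.
    """
    q = frame_feats @ w_q
    k = frame_feats @ w_k
    v = frame_feats @ w_v
    scores = (q @ k.T) / np.sqrt(k.shape[-1])  # pairwise frame affinities
    attn = softmax(scores, axis=-1)            # each row sums to 1
    return attn @ v + frame_feats, attn

# Toy usage: 8 frames of 16-dim features with random projections.
rng = np.random.default_rng(0)
T, D = 8, 16
feats = rng.standard_normal((T, D))
w_q, w_k, w_v = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
out, attn = temporal_self_attention(feats, w_q, w_k, w_v)
```

In a full model, such a block would sit on top of the 3D CNN backbone's frame-level features, and visualizing `attn` would show which frames (and hence which causes, such as a traffic light) the model attends to when predicting an action.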
Original language: English (US)
Title of host publication: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher: IEEE
ISBN (Print): 978-1-5090-6632-2
DOIs
State: Published - 2020
