Trajectory-based Fisher kernel representation for action recognition in videos

Indriyati Atmosukarto*, Bernard Ghanem, Narendra Ahuja

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Action recognition is an important computer vision problem with many applications, including video indexing and retrieval, event detection, and video summarization. In this paper, we propose to apply the Fisher kernel paradigm to action recognition. The Fisher kernel framework combines the strengths of generative and discriminative models. In this approach, given the trajectories extracted from a video and a generative Gaussian Mixture Model (GMM), we use the Fisher kernel method to describe how much the GMM parameters must be modified to best fit the video trajectories. We experiment with using the resulting Fisher kernel vector as the video representation and with training an SVM classifier on it. We further extend our framework to select the most discriminative trajectories using a novel MIL-KNN framework. We compare the performance of our approach to the current state-of-the-art bag-of-features (BOF) approach on two benchmark datasets. Experimental results show that our proposed approach outperforms the state-of-the-art method [8] and that the selected discriminative trajectories are descriptive of the action class.
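The sketch below is a minimal illustration (not the authors' released code) of the encoding the abstract describes: a diagonal-covariance GMM is fit to trajectory descriptors, each video is then represented by the gradients of the GMM log-likelihood with respect to the mixture means and variances (the Fisher vector), and an SVM is trained on these vectors. The use of scikit-learn's GaussianMixture and LinearSVC, the descriptor dimensionality, the number of mixture components, and the power/L2 normalization are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def fit_gmm(all_trajectories, n_components=64, seed=0):
    """Fit the generative GMM on trajectory descriptors pooled over training videos."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed)
    gmm.fit(all_trajectories)
    return gmm

def fisher_vector(trajectories, gmm):
    """Encode one video's trajectories (N x D array) as a Fisher vector."""
    X = np.atleast_2d(trajectories)
    N, _ = X.shape
    w, mu, var = gmm.weights_, gmm.means_, gmm.covariances_   # (K,), (K,D), (K,D)
    gamma = gmm.predict_proba(X)                               # responsibilities (N,K)

    # Gradients of the log-likelihood w.r.t. the GMM means and variances.
    diff = (X[:, None, :] - mu[None, :, :]) / np.sqrt(var)[None, :, :]        # (N,K,D)
    g_mu = np.einsum("nk,nkd->kd", gamma, diff) / (N * np.sqrt(w)[:, None])
    g_var = np.einsum("nk,nkd->kd", gamma, diff ** 2 - 1.0) / (N * np.sqrt(2 * w)[:, None])

    fv = np.concatenate([g_mu.ravel(), g_var.ravel()])
    # Power and L2 normalization, commonly applied to Fisher vectors.
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)

if __name__ == "__main__":
    # Illustrative usage with random data standing in for trajectory descriptors.
    rng = np.random.default_rng(0)
    train_trajs = rng.normal(size=(2000, 30))            # pooled training descriptors
    gmm = fit_gmm(train_trajs, n_components=8)
    videos = [rng.normal(size=(100, 30)) for _ in range(10)]
    labels = np.array([0, 1] * 5)                         # toy action labels
    fvs = np.stack([fisher_vector(v, gmm) for v in videos])
    clf = LinearSVC().fit(fvs, labels)                     # SVM on Fisher vectors
```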

Original language: English (US)
Title of host publication: Proceedings of the 21st International Conference on Pattern Recognition (ICPR 2012)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
State: Published - 2012
