Extracting a fluid dynamic texture and the background from video

Bernard Ghanem*, Narendra Ahuja

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

13 Scopus citations

Abstract

Given the video of a still background occluded by a fluid dynamic texture (FDT), this paper addresses the problem of separating the video sequence into its two constituent layers. One layer corresponds to the video of the unoccluded background, and the other to that of the dynamic texture, as it would appear if viewed against a black background. The model of the dynamic texture is unknown except that it represents fluid flow. We present an approach that uses the image motion information to simultaneously obtain a model of the dynamic texture and separate it from the background, which is required to be still. Previous methods have considered occluding layers whose dynamics follow simple motion models (e.g. periodic or 2D parametric motion). The FDTs considered in this paper exhibit complex stochastic motion. We consider videos showing an FDT layer (e.g. billowing smoke or heavy rain) in front of a static background layer (e.g. a brick building). We propose a novel method for simultaneously separating these two layers and learning a model for the FDT. Due to the fluid nature of the FDT, we must learn a model for both its spatial appearance and its temporal variations (due to changes in density), along with a valid estimate of the background. We model the frames of a sequence as being produced by a continuous HMM, characterized by transition probabilities based on the Navier-Stokes equations for fluid dynamics, and by generation probabilities based on the convex matting of the FDT with the background. We learn the FDT appearance, the FDT temporal variations, and the background by maximizing their joint probability using Iterated Conditional Modes (ICM). Since the learned model is generative, it can be used to synthesize new videos with different backgrounds and density variations. Experiments on videos that we compiled demonstrate the performance of our method.
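To make the generative model concrete, the following Python sketch is a toy illustration of two ingredients named in the abstract: the convex matting observation model, in which each frame is a per-pixel convex combination of the FDT and the background, and an ICM-style coordinate ascent that alternately re-estimates the matting layer and the background. It makes strong simplifying assumptions not in the paper: grayscale video, a known and spatially constant FDT color F, a plain least-squares (Gaussian) likelihood, and no Navier-Stokes transition prior on the density field. All names (composite, icm_separate, fdt_color) are hypothetical, not from the paper.

```python
import numpy as np

def composite(alpha, fdt_color, background):
    # Convex matting: each frame is a per-pixel convex combination of the
    # FDT appearance and the still background; alpha encodes FDT density.
    return alpha * fdt_color + (1.0 - alpha) * background

def icm_separate(frames, fdt_color=0.8, n_iters=25, eps=1e-6):
    """Toy ICM-style alternation for the model I_t = a_t * F + (1 - a_t) * B.

    frames: (T, H, W) grayscale video in [0, 1] (hypothetical input format).
    F (fdt_color) is assumed known and spatially constant here; the paper
    instead learns the full FDT appearance and imposes a Navier-Stokes
    prior on the density dynamics, both of which this sketch omits.
    """
    # Crude initialization: assumes the FDT brightens pixels, so the
    # temporal minimum roughly exposes the background.
    background = frames.min(axis=0)
    alpha = None
    for _ in range(n_iters):
        # Step 1: with B fixed, the least-squares optimum for each a_t is a
        # pixelwise ratio, clipped to the valid matting range [0, 1].
        denom = fdt_color - background
        safe = np.where(np.abs(denom) < eps, eps, denom)
        alpha = np.clip((frames - background) / safe, 0.0, 1.0)
        # Step 2: with all a_t fixed, the objective is quadratic in B; its
        # maximizer is a weighted least-squares average over the frames.
        w = 1.0 - alpha
        num = (w * (frames - alpha * fdt_color)).sum(axis=0)
        den = (w ** 2).sum(axis=0) + eps
        background = np.clip(num / den, 0.0, 1.0)
    return alpha, background

if __name__ == "__main__":
    # Usage on synthetic data: a fixed random background occluded by a
    # smooth "smoke" blob drifting horizontally across the frame.
    rng = np.random.default_rng(0)
    bg = rng.uniform(0.0, 0.5, size=(32, 32))
    blob = np.outer(np.hanning(32), np.hanning(32))
    true_alpha = np.stack([np.roll(blob, t, axis=1) for t in range(10)])
    video = composite(true_alpha, 0.8, bg)
    alpha_hat, bg_hat = icm_separate(video, fdt_color=0.8)
    print("background error:", np.abs(bg_hat - bg).mean())
```

Because the learned quantities (alpha and background) fully specify the generative model, new sequences can be synthesized by compositing the recovered density layer over a different background, mirroring the synthesis use described in the abstract.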

Original language: English (US)
Title of host publication: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
State: Published - 2008
Externally published: Yes
Event: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR - Anchorage, AK, United States
Duration: Jun 23, 2008 - Jun 28, 2008

Publication series

Name: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR

Other

Other: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
Country/Territory: United States
City: Anchorage, AK
Period: 06/23/08 - 06/28/08

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Control and Systems Engineering
