Separating Background and Foreground in Video Based on a Nonparametric Bayesian Model

Xinghao Ding*, Lawrence Carin

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution - peer-review

Abstract

Separating background and foreground in video is a fundamental problem in computer vision. We present a Bayesian hierarchical model to address this challenge and apply it to video with dynamic scenes. The model uses a nonparametric prior, the beta-Bernoulli process, for both the background and foreground representations. Additionally, the model exploits the neighborhood information of each pixel to encourage group clustering of the foreground. A collapsed Gibbs sampler is used for efficient posterior inference. Experimental results show the competitive performance of the proposed model.
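
For context on the prior named above, the following is a minimal sketch of the finite approximation to the beta-Bernoulli process that is standard in this literature; the symbols $a$, $b$, $K$, $\pi_k$, and $z_{ik}$ are generic notation and are not taken from the paper, whose exact parameterization may differ.

\[
\pi_k \sim \mathrm{Beta}\!\left(\tfrac{a}{K},\, \tfrac{b(K-1)}{K}\right), \qquad
z_{ik} \mid \pi_k \sim \mathrm{Bernoulli}(\pi_k), \qquad k = 1,\dots,K,
\]

where $z_{ik} \in \{0,1\}$ indicates whether latent feature (dictionary atom) $k$ is active for pixel or patch $i$. Letting $K \to \infty$ recovers the beta-Bernoulli process, and marginalizing out the weights $\pi_k$ yields a variant of the Indian buffet process over the binary matrix $Z = [z_{ik}]$, which is what makes collapsed Gibbs sampling over $Z$ tractable.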

Original language: English
Title of host publication: 2011 IEEE Statistical Signal Processing Workshop (SSP)
Publisher: IEEE
Pages: 321-324
Number of pages: 4
State: Published - 2011
Externally published: Yes
Event: IEEE Statistical Signal Processing Workshop (SSP) - Nice, France
Duration: Jun 28, 2011 - Jun 30, 2011

Conference

Conference: IEEE Statistical Signal Processing Workshop (SSP)
Country/Territory: France
City: Nice
Period: 06/28/11 - 06/30/11

Keywords

  • Background subtraction
  • dynamic scenes
  • nonparametric Bayesian hierarchical model
  • beta-Bernoulli process
  • group sparsity
