Fusing surveillance videos and three-dimensional scene: A mixed reality system

Xiaoliang Cui, Dawar Khan, Zhenbang He, Zhanglin Cheng*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Augmented Virtual Environment (AVE) or virtual-reality fusion systems fuse dynamic videos with static three-dimensional (3D) models of a virtual environment, providing an effective solution for visualizing and understanding multichannel surveillance systems. However, texture distortion caused by viewpoint changes is a critical issue in such systems. To minimize texture fusion distortion, this paper presents a novel virtual environment system that dynamically fuses multiple surveillance videos with a virtual 3D scene in two phases, offline and online. In the offline phase, a static virtual environment is obtained by performing 3D photogrammetric reconstruction from input images of the scene. In the online phase, the virtual environment is augmented by fusing multiple videos through one of two optional strategies: the first dynamically maps frames of the different videos onto the 3D model of the virtual environment, and the second extracts moving objects and represents them as billboards. The system can be used to visualize the 3D environment from any viewpoint, augmented by real-time videos. Experiments and user studies in different scenarios demonstrate the superiority of our system.
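The sketch below is not the authors' code; it only illustrates, under simple pinhole-camera assumptions, the two online fusion strategies mentioned in the abstract: projecting a video frame onto scene geometry (projective texture mapping) and placing a moving object on a viewer-facing billboard. All function names and parameters are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of the two online fusion strategies described in the abstract.
# Assumptions (not from the paper): pinhole camera model, known intrinsics K and
# extrinsics (R, t), and a simple world-up vector for billboard orientation.
import numpy as np

def project_to_video_uv(points_world, K, R, t, frame_size):
    """Project world-space points (e.g., mesh vertices) into a camera's video frame.

    Returns (N, 2) texture coordinates in [0, 1]; NaN for points behind the
    camera or outside the frame.
    """
    cam = (R @ points_world.T).T + t                  # world -> camera space
    uv = np.full((len(points_world), 2), np.nan)
    in_front = cam[:, 2] > 1e-6                       # discard points behind the camera
    proj = (K @ cam[in_front].T).T
    pix = proj[:, :2] / proj[:, 2:3]                  # perspective divide -> pixel coords
    w, h = frame_size
    inside = (pix[:, 0] >= 0) & (pix[:, 0] < w) & (pix[:, 1] >= 0) & (pix[:, 1] < h)
    valid = np.where(in_front)[0][inside]
    uv[valid] = pix[inside] / np.array([w, h])        # normalize to texture coordinates
    return uv

def billboard_quad(center, viewer_pos, width, height):
    """Build a quad at `center` oriented toward `viewer_pos` (camera-facing billboard)."""
    up = np.array([0.0, 1.0, 0.0])
    normal = viewer_pos - center
    normal /= np.linalg.norm(normal)
    right = np.cross(up, normal)
    right /= np.linalg.norm(right)
    up_vec = np.cross(normal, right)
    hw, hh = width / 2.0, height / 2.0
    return np.array([center - right * hw - up_vec * hh,
                     center + right * hw - up_vec * hh,
                     center + right * hw + up_vec * hh,
                     center - right * hw + up_vec * hh])

if __name__ == "__main__":
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
    pts = np.array([[0.0, 0.0, 1.0], [0.5, 0.2, 1.5]])
    print(project_to_video_uv(pts, K, R, t, (640, 480)))
    print(billboard_quad(np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.5, 5.0]), 0.6, 1.8))
```

In a real AVE pipeline the returned texture coordinates would be used per frame to sample the live video when rendering the reconstructed 3D model, while the billboard quad would be textured with the extracted moving-object mask; both steps here are simplified for illustration.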

Original language: English (US)
Article number: e2129
Journal: Computer Animation and Virtual Worlds
Volume: 34
Issue number: 1
DOIs
State: Published - Jan 1 2023

Keywords

  • augmented virtual environments
  • video fusion
  • video surveillance
  • virtual environments
  • virtual-reality fusion

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design
