TY - JOUR
T1 - On-the-fly generation and rendering of infinite cities on the GPU
AU - Steinberger, Markus
AU - Kenzel, Michael
AU - Kainz, Bernhard K.
AU - Wonka, Peter
AU - Schmalstieg, Dieter
N1 - KAUST Repository Item: Exported on 2020-10-01
Acknowledgements: This research was funded by the Austrian Science Fund (FWF): P23329.
PY - 2014/6/1
Y1 - 2014/6/1
N2 - In this paper, we present a new approach for shape-grammar-based generation and rendering of huge cities in real-time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the geometry produced as a preprocessing step. During rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view in real-time directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame-to-frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed.
UR - http://hdl.handle.net/10754/563530
UR - https://youtu.be/DFtCyaBpxCk
UR - http://www.scopus.com/inward/record.url?scp=84901852753&partnerID=8YFLogxK
U2 - 10.1111/cgf.12315
DO - 10.1111/cgf.12315
M3 - Article
SN - 0167-7055
VL - 33
SP - 105
EP - 114
JO - Computer Graphics Forum
JF - Computer Graphics Forum
IS - 2
ER -