TY - JOUR
T1 - FrankenGAN: Guided detail synthesis for building mass models using style-synchronized GANs
AU - Kelly, Tom
AU - Guerrero, Paul
AU - Steed, Anthony
AU - Wonka, Peter
AU - Mitra, Niloy J.
N1 - KAUST Repository Item: Exported on 2020-10-01
Acknowledged KAUST grant number(s): OSR-2015-CCF-2533, OSR-CRG2017-3426
Acknowledgements: This project was supported by an ERC Starting Grant (SmartGeometry StG-2013-335373), KAUST-UCL Grant (OSR-2015-CCF-2533), ERC PoC Grant (SemanticCity), the KAUST Office of Sponsored Research (OSR-CRG2017-3426), Open3D Project (EPSRC Grant EP/M013685/1), and a Google Faculty Award (UrbanPlan).
PY - 2018/11/28
Y1 - 2018/11/28
N2 - Coarse building mass models are now routinely generated at scales ranging from individual buildings to whole cities. Such models can be abstracted from raw measurements, generated procedurally, or created manually. However, these models typically lack any meaningful geometric or texture details, making them unsuitable for direct display. We introduce the problem of automatically and realistically decorating such models by adding semantically consistent geometric details and textures. Building on the recent success of generative adversarial networks (GANs), we propose FrankenGAN, a cascade of GANs that creates plausible details across multiple scales over large neighborhoods. The various GANs are synchronized to produce consistent style distributions over buildings and neighborhoods. We provide the user with direct control over the variability of the output: the style can be specified interactively via images, and style-adapted sliders control style variability. We test our system on several large-scale examples. The generated outputs are qualitatively evaluated via a set of perceptual studies and are found to be realistic, semantically plausible, and consistent in style.
AB - Coarse building mass models are now routinely generated at scales ranging from individual buildings to whole cities. Such models can be abstracted from raw measurements, generated procedurally, or created manually. However, these models typically lack any meaningful geometric or texture details, making them unsuitable for direct display. We introduce the problem of automatically and realistically decorating such models by adding semantically consistent geometric details and textures. Building on the recent success of generative adversarial networks (GANs), we propose FrankenGAN, a cascade of GANs that creates plausible details across multiple scales over large neighborhoods. The various GANs are synchronized to produce consistent style distributions over buildings and neighborhoods. We provide the user with direct control over the variability of the output: the style can be specified interactively via images, and style-adapted sliders control style variability. We test our system on several large-scale examples. The generated outputs are qualitatively evaluated via a set of perceptual studies and are found to be realistic, semantically plausible, and consistent in style.
UR - http://hdl.handle.net/10754/656365
UR - http://dl.acm.org/citation.cfm?doid=3272127.3275065
UR - http://www.scopus.com/inward/record.url?scp=85066095935&partnerID=8YFLogxK
U2 - 10.1145/3272127.3275065
DO - 10.1145/3272127.3275065
M3 - Article
SN - 0730-0301
VL - 37
SP - 1
EP - 14
JO - ACM Transactions on Graphics
JF - ACM Transactions on Graphics
IS - 6
ER -