TY - GEN
T1 - EpiGRAF: Rethinking training of 3D GANs
AU - Skorokhodov, Ivan
AU - Tulyakov, Sergey
AU - Wang, Yiqun
AU - Wonka, Peter
N1 - KAUST Repository Item: Exported on 2023-07-10
Acknowledgements: We would like to acknowledge support from the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence.
PY - 2022/1/1
Y1 - 2022/1/1
AB - A recent trend in generative modeling is building 3D-aware generators from 2D image collections. To induce the 3D bias, such models typically rely on volumetric rendering, which is expensive to employ at high resolutions. Over the past months, more than ten works have addressed this scaling issue by training a separate 2D decoder to upsample a low-resolution image (or a feature tensor) produced from a pure 3D generator. But this solution comes at a cost: not only does it break multi-view consistency (i.e., shape and texture change when the camera moves), but it also learns geometry in low fidelity. In this work, we show that obtaining a high-resolution 3D generator with SotA image quality is possible by following a completely different route of simply training the model patch-wise. We revisit and improve this optimization scheme in two ways. First, we design a location- and scale-aware discriminator to work on patches of different proportions and spatial positions. Second, we modify the patch sampling strategy based on an annealed beta distribution to stabilize training and accelerate the convergence. The resulting model, named EpiGRAF, is an efficient, high-resolution, pure 3D generator, and we test it on four datasets (two introduced in this work) at 256² and 512² resolutions. It obtains state-of-the-art image quality, high-fidelity geometry and trains ≈2.5× faster than the upsampler-based counterparts.
UR - http://hdl.handle.net/10754/679293
UR - https://proceedings.neurips.cc/paper_files/paper/2022/hash/9b01333262789ea3a65a5fab4c22feae-Abstract-Conference.html
UR - http://www.scopus.com/inward/record.url?scp=85163184541&partnerID=8YFLogxK
M3 - Conference contribution
SN - 9781713871088
BT - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
PB - Neural Information Processing Systems Foundation
ER -