TY - JOUR
T1 - 3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models
AU - Zhang, Biao
AU - Tang, Jiapeng
AU - Nießner, Matthias
AU - Wonka, Peter
N1 - KAUST Repository Item: Exported on 2023-09-07
Acknowledgements: We would like to acknowledge Anna Frühstück for helping with figures and the video voiceover. This work was supported by the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI) as well as the ERC Starting Grant Scan2CAD (804724).
PY - 2023/7/26
Y1 - 2023/7/26
N2 - We introduce 3DShape2VecSet, a novel shape representation for neural fields designed for generative diffusion models. Our shape representation can encode 3D shapes given as surface models or point clouds, and represents them as neural fields. The concept of neural fields has previously been combined with a global latent vector, a regular grid of latent vectors, or an irregular grid of latent vectors. Our new representation encodes neural fields on top of a set of vectors. We draw from multiple concepts, such as the radial basis function representation and cross-attention and self-attention, to design a learnable representation that is especially suitable for processing with transformers. Our results show improved performance in 3D shape encoding and 3D shape generative modeling tasks. We demonstrate a wide variety of generative applications: unconditioned generation, category-conditioned generation, text-conditioned generation, point-cloud completion, and image-conditioned generation.
UR - http://hdl.handle.net/10754/687474
UR - https://dl.acm.org/doi/10.1145/3592442
UR - http://www.scopus.com/inward/record.url?scp=85166480742&partnerID=8YFLogxK
U2 - 10.1145/3592442
DO - 10.1145/3592442
M3 - Article
SN - 1557-7368
VL - 42
SP - 1
EP - 16
JO - ACM Transactions on Graphics
JF - ACM Transactions on Graphics
IS - 4
ER -