TY - GEN
T1 - Zero-Shot 3D Shape Correspondence
AU - Abdelreheem, Ahmed
AU - Eldesokey, Abdelrahman
AU - Ovsjanikov, Maks
AU - Wonka, Peter
N1 - Publisher Copyright:
© 2023 Owner/Author.
PY - 2023/12/10
Y1 - 2023/12/10
AB - We propose a novel zero-shot approach to computing correspondences between 3D shapes. Existing approaches mainly focus on isometric and near-isometric shape pairs (e.g., human vs. human), but less attention has been given to strongly non-isometric and inter-class shape matching (e.g., human vs. cow). To this end, we introduce a fully automatic method that exploits the exceptional reasoning capabilities of recent foundation models in language and vision to tackle difficult shape correspondence problems. Our approach comprises multiple stages. First, we classify the 3D shapes in a zero-shot manner by feeding rendered shape views to a language-vision model (e.g., BLIP2) to generate a list of class proposals per shape. These proposals are unified into a single class per shape by employing the reasoning capabilities of ChatGPT. Second, we attempt to segment the two shapes in a zero-shot manner, but in contrast to the co-segmentation problem, we do not require a mutual set of semantic regions. Instead, we propose to exploit the in-context learning capabilities of ChatGPT to generate two different sets of semantic regions for each shape and a semantic mapping between them. This enables our approach to match strongly non-isometric shapes with significant differences in geometric structure. Finally, we employ the generated semantic mapping to produce coarse correspondences that can further be refined by the functional maps framework to produce dense point-to-point maps. Our approach, despite its simplicity, produces highly plausible results in a zero-shot manner, especially between strongly non-isometric shapes.
KW - 3D Semantic Segmentation
KW - 3D Shape Matching
KW - Deep Neural Networks
KW - Zero-Shot Shape Correspondence
UR - http://www.scopus.com/inward/record.url?scp=85181775963&partnerID=8YFLogxK
U2 - 10.1145/3610548.3618228
DO - 10.1145/3610548.3618228
M3 - Conference contribution
AN - SCOPUS:85181775963
T3 - Proceedings - SIGGRAPH Asia 2023 Conference Papers, SA 2023
BT - Proceedings - SIGGRAPH Asia 2023 Conference Papers, SA 2023
A2 - Spencer, Stephen N.
PB - Association for Computing Machinery, Inc
T2 - SIGGRAPH Asia 2023 Conference Papers, SA 2023
Y2 - 12 December 2023 through 15 December 2023
ER -