Self-supervised Learning of Implicit Shape Representation with Dense Correspondence for Deformable Objects

Baowen Zhang1,2
Jiahe Li1,2
Cuixia Ma1,2
Hongan Wang1,2



We present a self-supervised method to learn a neural implicit representation for deformable objects from a collection of shapes. Our method generates shapes by deforming a learned template and provides dense correspondences.



Abstract

Learning 3D shape representations with dense correspondence for deformable objects is a fundamental problem in computer vision. Existing approaches often require additional annotations from a specific semantic domain, e.g., skeleton poses for human bodies or animals, which demand extra annotation effort, suffer from error accumulation, and limit the method to that domain. In this paper, we propose a novel self-supervised approach to learn a neural implicit shape representation for deformable objects, which represents shapes with a template shape and dense 3D correspondences. Our method requires neither skeleton nor skinning-weight priors; it only needs a collection of shapes represented as signed distance fields. To handle large deformations, we constrain the learned template shape to the same latent space as the training shapes, design a new formulation of the local rigid constraint that enforces rigid transformations in local regions and addresses the local reflection issue, and present a new hierarchical rigid constraint to reduce the ambiguity caused by jointly learning the template shape and the correspondences. Extensive experiments show that our model can represent shapes with large deformations. We also show that our shape representation supports typical applications, such as texture transfer and shape editing, with competitive performance.
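To make the representation concrete, below is a minimal PyTorch sketch (not the authors' released code) of a template-plus-deformation implicit representation in the spirit described above: a per-shape latent code conditions a deformation field that warps query points into the space of a shared template SDF, and a local rigidity term encourages the deformation Jacobian to stay close to a rotation with positive determinant (discouraging local reflections). The network sizes, the embedding-based latent codes, and the exact form of the rigidity penalty are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn


class DeformField(nn.Module):
    """Maps a 3D query point and a per-shape latent code to a deformed point."""
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 3),
        )

    def forward(self, p, z):
        # p: (N, 3) query points, z: (N, latent_dim) shape codes
        return p + self.mlp(torch.cat([p, z], dim=-1))  # points in template space


class TemplateSDF(nn.Module):
    """Shared template shape represented as a signed distance field."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, p):
        return self.mlp(p)  # (N, 1) signed distances


def deformation_jacobian(deform, p, z):
    """Per-point 3x3 Jacobian of the deformation w.r.t. the query point."""
    p = p.requires_grad_(True)
    q = deform(p, z)  # (N, 3)
    rows = []
    for i in range(3):
        grad_i = torch.autograd.grad(q[:, i].sum(), p, create_graph=True)[0]
        rows.append(grad_i)  # (N, 3): d q_i / d p
    return torch.stack(rows, dim=1)  # (N, 3, 3)


def local_rigidity_penalty(J):
    """Illustrative local rigidity term: push J^T J toward the identity
    (near-rigid local transform) and penalize negative determinants
    (local reflections)."""
    I = torch.eye(3, device=J.device)
    ortho = ((J.transpose(1, 2) @ J - I) ** 2).sum(dim=(1, 2))
    no_reflect = torch.relu(-torch.det(J))
    return (ortho + no_reflect).mean()


# Usage sketch: evaluate the SDF of one training shape at random query points.
deform, template = DeformField(), TemplateSDF()
codes = nn.Embedding(100, 128)                      # hypothetical per-shape codes
p = torch.rand(1024, 3) * 2 - 1                     # queries in [-1, 1]^3
z = codes(torch.zeros(1024, dtype=torch.long))      # code of shape 0
sdf = template(deform(p, z))                        # predicted signed distances
loss_rigid = local_rigidity_penalty(deformation_jacobian(deform, p, z))
```

Because every shape is expressed as a deformation of the same template, dense correspondences between two shapes follow by warping points through one deformation field and back through the other.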




Illustration of our shape representation network




Paper and Code

B. Zhang, J. Li, X. Deng, Y. Zhang, C. Ma, H. Wang

Self-supervised Learning of Implicit Shape Representation with Dense Correspondence for Deformable Objects.

ICCV, 2023.

[Paper] [Code] [Bibtex]



Results

Comparison of representation capability between our method, DIF, and 3D-CODED. Our method outperforms DIF and 3D-CODED by a large margin in representation capability.



Acknowledgements

This work was supported in part by the National Key R&D Program of China (2022ZD0117900). The website is modified from this template.