DiffGrasp: Whole-Body Grasping Synthesis
Guided by Object Motion Using a Diffusion Model

Yonghao Zhang1,2*
Qiang He1,2*
Yanguang Wan1,2
Cuixia Ma1,2†
Hongan Wang1,2



DiffGrasp generates whole-body human grasp sequences with realistic finger-object contact, conditioned on the 3D object shape and the object motion sequence.



Abstract

Generating high-quality whole-body human-object interaction motion sequences is becoming increasingly important in fields such as animation, VR/AR, and robotics. The main challenge of this task lies in determining the level of involvement of each hand given objects of complex shapes and different sizes and their varying motion trajectories, while ensuring realistic grasping and coordinated movement across all body parts. In contrast to existing work, which either generates human interaction motion sequences without detailed hand grasping poses or models only a static grasping pose, we propose a simple yet effective framework that jointly models the relationship between the body, the hands, and the given object motion sequences within a single diffusion model. To guide our network in perceiving the object's spatial position and learning more natural grasping poses, we introduce novel contact-aware losses and incorporate carefully designed, data-driven guidance. Experimental results demonstrate that our approach outperforms the state-of-the-art method and generates plausible whole-body motion sequences.
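To make the contact-aware losses concrete, below is a minimal sketch of one plausible form such a loss could take, written in PyTorch. The tensor shapes, the per-vertex contact_prob weighting, and the thresh margin are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def contact_aware_loss(hand_verts, obj_verts, contact_prob, thresh=0.01):
    """Sketch of a contact-aware loss (hypothetical form, not the paper's).

    hand_verts:   (B, T, Nh, 3) predicted hand vertices over the sequence
    obj_verts:    (B, T, No, 3) points sampled on the object surface
    contact_prob: (B, T, Nh) likelihood that each hand vertex should be
                  in contact (e.g. derived from dataset statistics)
    """
    # Distance from every hand vertex to its nearest object point.
    dists = torch.cdist(hand_verts, obj_verts)   # (B, T, Nh, No)
    nearest = dists.min(dim=-1).values           # (B, T, Nh)

    # Pull likely-contact vertices onto the object surface; vertices
    # already within `thresh` of the surface incur no penalty.
    attraction = contact_prob * torch.clamp(nearest - thresh, min=0.0)
    return attraction.mean()
```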




DiffGrasp Framework

Our conditional diffusion model takes the given object motion sequence, the object shape, and the SMPL-X identity as conditions. After specially designed positional encodings are applied, these embedded conditions are fed into a transformer-encoder-based condition encoder. A transformer decoder then acts as the denoising network, predicting a sequence of clean whole-body SMPL-X poses together with wrist joint translations relative to the object centroid. During inference, we reconstruct the SMPL-X pose sequence into a human mesh sequence. Through a reconstruction guidance strategy built on carefully designed guidance functions, we control and optimize the predicted results for more stable hand grasping, less penetration, and better foot-floor contact.
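As an illustration of the reconstruction guidance strategy described above, the sketch below shows one common way differentiable guidance is injected into a diffusion sampling step. The model signature, the guidance_fns interface, and the scale factor are assumptions for illustration; the scheduler update that forms the next noisy sample is omitted.

```python
import torch

def guided_denoise_step(model, x_t, t, cond, guidance_fns, scale=1.0):
    """One denoising step with reconstruction guidance (a sketch).

    model:        denoiser predicting the clean sample x0 from (x_t, t, cond)
    x_t:          (B, T, D) noisy SMPL-X pose sequence at step t
    cond:         embedded object motion / shape / identity conditions
    guidance_fns: differentiable losses on the reconstructed human mesh,
                  e.g. hand stability, penetration, foot-floor contact
    """
    with torch.enable_grad():
        x_t = x_t.detach().requires_grad_(True)
        x0_pred = model(x_t, t, cond)  # predicted clean pose sequence

        # Score the predicted clean sample with the guidance objectives
        # and back-propagate the total loss to the noisy input.
        g_loss = sum(fn(x0_pred) for fn in guidance_fns)
        grad = torch.autograd.grad(g_loss, x_t)[0]

    # Nudge the prediction against the guidance gradient before the
    # scheduler forms the next (less noisy) sample.
    return (x0_pred - scale * grad).detach()
```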



Paper and Code

Y. Zhang, Q. He, Y. Wan, Y. Zhang, X. Deng, C. Ma, H. Wang

DiffGrasp: Whole-Body Grasping Synthesis Guided by Object Motion Using a Diffusion Model.

AAAI, 2025.

[Paper]     [Bibtex]     [Code (Coming Soon)]    



Results

Object motion sequence.
Results generated by DiffGrasp.




Acknowledgements

This work was supported in part by the National Science and Technology Major Project (2022ZD0117904), the National Natural Science Foundation of China (62473356, 62373061), the Beijing Natural Science Foundation (L232028), the CAS Major Project (RCJJ-145-24-14), and the Beijing Hospitals Authority Clinical Medicine Development of Special Funding Support (No. ZLRK202330).
The website is modified from this template.