InterPose: Learning to Generate Human-Object Interactions from Large-Scale Web Videos

arXiv 2025

Yangsong Zhang¹   Abdul Ahad Butt¹   Gül Varol²   Ivan Laptev¹

¹ Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI)   ² LIGM, École des Ponts, IP Paris, Univ Gustave Eiffel, CNRS

Paper | GitHub | Dataset

Abstract


Human motion generation has shown great advances thanks to recent diffusion models trained on large-scale motion capture data. Most existing works, however, target the animation of isolated people in empty scenes. Meanwhile, synthesizing realistic human-object interactions in complex 3D scenes remains a critical challenge in computer graphics and robotics. One obstacle to generating versatile, high-fidelity human-object interactions is the lack of large-scale datasets with diverse object manipulations. Indeed, existing motion capture data is typically restricted to single people and manipulations of limited sets of objects. To address this issue, we propose an automatic motion extraction pipeline and use it to collect interaction-rich human motions. Our new dataset, InterPose, contains 73.8K sequences of 3D human motions and corresponding text captions automatically obtained from 45.8K videos with human-object interactions. We perform extensive experiments and demonstrate that InterPose brings significant improvements to state-of-the-art methods for human motion generation. Moreover, using InterPose we develop an LLM-based agent enabling zero-shot animation of people interacting with diverse objects and scenes.

Video


Data Collection



Overview of data collection for the InterPose dataset. Our framework contains a module for collecting interaction-rich videos (left) and a module for automatic extraction of 3D human motions and corresponding text captions (right).
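
As a rough sketch, the pipeline can be summarized in the pseudocode below. All function and field names are hypothetical placeholders standing in for real components (an interaction detector, a monocular 3D motion estimator, and a vision-language captioner); this is not the actual implementation.

# Minimal sketch of the two-module pipeline, with stubs standing in for
# real models (e.g., a WHAM-like motion estimator and a Qwen2.5-VL-like
# captioner). Function and field names are illustrative only.

def has_interaction(clip: dict) -> bool:
    """Stub: keep only clips with visible hand-object interaction."""
    return clip.get("hands_on_object", False)

def extract_motion(clip: dict) -> list:
    """Stub: recover per-frame 3D body poses from monocular video."""
    return []

def caption_clip(clip: dict) -> dict:
    """Stub: produce action/object labels and a free-form text caption."""
    return {"action": "open", "object": "box", "caption": "The person opens a box."}

def build_dataset(clips: list[dict]) -> list[dict]:
    samples = []
    for clip in clips:                        # module 1: video filtering
        if not has_interaction(clip):
            continue
        record = {"video_id": clip["id"],     # module 2: motion + captions
                  "motion": extract_motion(clip),
                  **caption_clip(clip)}
        samples.append(record)
    return samples

# Example: one interaction-rich clip survives the filter.
print(build_dataset([{"id": "clip_0", "hands_on_object": True}]))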


Example annotations produced by our pipeline:

Example 1
Action: open
Object: box
Text: The person stands next to a motorcycle, opening a small storage compartment on the rear of the motorcycle. The person retrieves an object and inspects it briefly.

Example 2
Action: stack
Object: box
Text: The person lifts a large cushion and places it on top of a stack of cardboard boxes. The person then adjusts the cushion to ensure it is securely positioned.
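
Concretely, each collected sequence could be stored as a record like the one below. This layout is a hypothetical illustration mirroring the annotations above; the released dataset format may differ.

# Hypothetical record layout; field names and values are illustrative only.
sample = {
    "video_id": "web_clip_00421",   # illustrative identifier
    "action": "stack",
    "object": "box",
    "caption": ("The person lifts a large cushion and places it on top of a "
                "stack of cardboard boxes. The person then adjusts the cushion "
                "to ensure it is securely positioned."),
    "motion": "poses.npz",          # per-frame 3D body poses (e.g., SMPL parameters)
}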

Qualitative Results on Spatial Controllability Tasks


[1] Xie et al. OmniControl: Control Any Joint at Any Time for Human Motion Generation. ICLR, 2024.

[2] Guo et al. Generating Diverse and Natural 3D Human Motions From Text. CVPR, 2022.

[3] Tessler et al. MaskedMimic: Unified Physics-Based Character Control Through Masked Motion Inpainting. SIGGRAPH Asia, 2024.

[4] Mahmood et al. AMASS: Archive of Motion Capture as Surface Shapes. ICCV, 2019.

Spatial Controllability of OmniControl [1]

OmniControl [1] trained on HumanML3D [2]

OmniControl [1] trained on InterPose


Spatial Controllability of MaskedMimic [3]

MaskedMimic [3] trained on AMASS [4]

MaskedMimic [3] trained on InterPose


HOI Generation in 3D Scenes


Users can input waypoints to enable precise control in 3D scenes.
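
As an illustration of what such input could look like, the snippet below sketches a hypothetical waypoint specification; the field names and coordinate convention are assumptions, not the actual InterPose interface.

# Hypothetical waypoint specification for precise control in a 3D scene.
# Each waypoint pins a joint to a 3D position (in meters) at a given frame.
waypoints = [
    {"frame": 0,   "joint": "pelvis",     "position": [0.0, 0.0, 0.9]},  # start pose
    {"frame": 60,  "joint": "right_hand", "position": [1.2, 0.4, 0.7]},  # reach the object
    {"frame": 150, "joint": "pelvis",     "position": [2.5, 1.0, 0.9]},  # carry to the target
]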


We also propose an LLM-based framework that enables automatic planning and generation of contact waypoints.
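
A minimal sketch of such a planner is shown below, assuming a generic text-in/text-out LLM callable; the prompt, JSON schema, and function names are illustrative assumptions, not the actual agent.

import json

# Hypothetical LLM-based planner: given a scene layout and a text
# instruction, ask the model to emit contact waypoints as JSON.
PROMPT = """You are a motion planner. Scene objects (name -> xyz):
{scene}
Instruction: {instruction}
Return a JSON list of waypoints with keys "frame", "joint", "position"."""

def plan_waypoints(llm, scene, instruction):
    prompt = PROMPT.format(scene=json.dumps(scene), instruction=instruction)
    return json.loads(llm(prompt))  # parse the model's JSON reply

# Usage with a stubbed LLM (a real deployment would call an actual model):
scene = {"floorlamp": [3.1, 0.2, 0.0], "sofa": [1.0, 2.0, 0.0], "lamp": [2.0, 2.5, 0.0]}
fake_llm = lambda _: '[{"frame": 0, "joint": "right_hand", "position": [3.1, 0.2, 0.8]}]'
print(plan_waypoints(fake_llm, scene, "Pick up the floorlamp and put it between sofa and lamp."))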

Text instruction: Pick up the floorlamp and put it between sofa and lamp.


Text instruction: Two people pick up smalltable, and put it next to the refrigerator.

Text instruction: Pick up two clothestands, and put them next to table2.

Failure Cases


Problem: the planner fails to generate correct waypoints.

Text instruction: Two person pick up largetable together, and move it to next to the sofa.

BibTeX

If you find this work useful for your research, please cite:

@article{zhang2025interpose,
  title={InterPose: Learning to Generate Human-Object Interactions from Large-Scale Web Videos},
  author={Zhang, Yangsong and Butt, Abdul Ahad and Varol, G{\"u}l and Laptev, Ivan},
  journal={arXiv},
  year={2025},
}

Acknowledgements


We thank the authors of the public codebases we build upon, including chois_release, OmniControl, ProtoMotions, WHAM, hamer, and Qwen2.5-VL.

This webpage was in part inspired by this template.