Reconstructing metrically accurate humans and their surrounding scenes from a single image is crucial for virtual reality, robotics, and comprehensive 3D scene understanding. However, existing methods struggle with depth ambiguity, occlusions, and physically inconsistent contacts.
To address these challenges, we introduce PhySIC, a framework for physically plausible 3D human-scene interaction and contact reconstruction from a single image.
PhySIC is highly efficient, requiring only 9 seconds for joint human-scene optimization and under 27 seconds for the end-to-end reconstruction process. The framework also naturally handles multiple humans, enabling the reconstruction of diverse human-scene interactions. Empirically, PhySIC substantially outperforms single-image baselines, reducing the mean per-vertex scene error from 641 mm to 227 mm, halving the Procrustes-aligned mean per-joint position error (PA-MPJPE) to 42 mm, and improving the contact F1-score from 0.09 to 0.51. Qualitative results demonstrate that PhySIC yields realistic foot-floor interactions, natural seating postures, and plausible reconstructions of heavily occluded furniture. By converting a single image into a physically plausible 3D human-scene pair, PhySIC advances accessible and scalable 3D scene understanding.
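For concreteness, the two headline metrics above can be computed as in the following minimal Python sketch (NumPy only). The function names and this exact evaluation protocol are our own illustrative assumptions, not code from the PhySIC release.

import numpy as np

def pa_mpjpe(pred, gt):
    """Procrustes-aligned mean per-joint position error.

    pred, gt: (J, 3) predicted / ground-truth joint positions (e.g. in mm).
    """
    # Center both joint sets.
    p = pred - pred.mean(axis=0)
    g = gt - gt.mean(axis=0)
    # Optimal rotation (orthogonal Procrustes) via SVD.
    U, s, Vt = np.linalg.svd(p.T @ g)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # guard against reflections
        U[:, -1] *= -1
        s[-1] *= -1
        R = U @ Vt
    # Optimal isotropic scale for the similarity alignment.
    scale = s.sum() / (p ** 2).sum()
    # Mean per-joint error after aligning the prediction to the ground truth.
    return np.linalg.norm(scale * p @ R - g, axis=1).mean()

def contact_f1(pred, gt):
    """F1-score between binary per-vertex contact labels, both shape (V,)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = (pred & gt).sum()
    if tp == 0:
        return 0.0
    precision = tp / pred.sum()
    recall = tp / gt.sum()
    return 2 * precision * recall / (precision + recall)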
Below are our 3D reconstruction results. Click on any thumbnail to view its interactive 3D model.
Our method not only reconstructs the human and the scene but also refines the vertex-level contact map between them. Below, we compare our final contact estimates with those of DECO, the state-of-the-art predictor we use for initialization.
Our joint optimization robustly improves upon the initial predictions, correcting noisy estimates and capturing nuanced interactions. Notice how PhySIC produces cleaner and more accurate contact maps for intricate body parts such as the feet, arms, and torso across a variety of poses.
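As a rough illustration of what a vertex-level contact map is, one can label a human vertex as "in contact" when its distance to the scene surface falls below a small threshold. The sketch below assumes this simple distance heuristic with a hypothetical 5 cm cut-off; PhySIC's actual refinement is a joint optimization and is not shown here.

import numpy as np
from scipy.spatial import cKDTree

def distance_contact_map(human_verts, scene_points, threshold=0.05):
    """Binary contact label per human vertex via a nearest-neighbour query.

    human_verts:  (V, 3) posed human mesh vertices, in metres.
    scene_points: (N, 3) points sampled on the scene surface, in metres.
    threshold:    contact distance cut-off (5 cm here, an arbitrary choice).
    """
    tree = cKDTree(scene_points)        # spatial index over the scene surface
    dists, _ = tree.query(human_verts)  # distance to nearest scene point
    return dists < threshold            # (V,) boolean contact map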
We thank the anonymous reviewers whose feedback helped improve this paper. This work is made possible by funding from the Carl Zeiss Foundation. This work is also funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 409792180 (Emmy Noether Programme, project: Real Virtual Humans) and the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting YX. PYM is supported by the Konrad Zuse School of Excellence in Learning and Intelligent Systems (ELIZA) through the DAAD programme Konrad Zuse Schools of Excellence in Artificial Intelligence, sponsored by the Federal Ministry of Education and Research. GPM is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 390727645.
@inproceedings{ym2025physic,
author = {Yalandur Muralidhar, Pradyumna and Xue, Yuxuan and Xie, Xianghui and Kostyrko, Margaret and Pons-Moll, Gerard},
title = {PhySIC: Physically Plausible 3D Human-Scene Interaction and Contact from a Single Image},
booktitle = {SIGGRAPH Asia 2025 Conference Papers},
year = {2025},
}