PhySIC: Physically Plausible 3D Human-Scene Interaction and Contact from a Single Image

Pradyumna Yalandur Muralidhar1,2*, Yuxuan Xue1,3*†, Xianghui Xie1,3,4, Margaret Kostyrko1, Gerard Pons-Moll1,3,4
1University of Tübingen 2Zuse School ELIZA 3Tübingen AI Center 4MPI for Informatics
*Equal Contribution †Corresponding Author

SIGGRAPH Asia 2025

PhySIC reconstructs physically plausible 3D human-scene interactions from a single image.

Abstract

Reconstructing metrically accurate humans and their surrounding scenes from a single image is crucial for virtual reality, robotics, and comprehensive 3D scene understanding. However, existing methods struggle with depth ambiguity, occlusions, and physically inconsistent contacts.

To address these challenges, we introduce PhySIC, a unified framework for physically plausible Human–Scene Interaction and Contact reconstruction. PhySIC recovers metrically consistent SMPL-X human meshes, dense scene surfaces, and vertex-level contact maps within a shared coordinate frame, all from a single RGB image. Starting from coarse monocular depth and parametric body estimates, PhySIC performs occlusion-aware inpainting, fuses visible depth with unscaled geometry for a robust initial metric scene scaffold, and synthesizes missing support surfaces like floors. A confidence-weighted optimization subsequently refines body pose, camera parameters, and global scale by jointly enforcing depth alignment, contact priors, interpenetration avoidance, and 2D reprojection consistency. Explicit occlusion masking safeguards invisible body regions against implausible configurations.
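To make the optimization stage concrete, here is a minimal PyTorch sketch of such a confidence-weighted joint energy. This is not the authors' implementation: the function name joint_energy, the tensor shapes, the default term weights, and in particular the crude depth-based interpenetration proxy are illustrative assumptions.

import torch

def joint_energy(body_verts, joints_3d, joints_2d, scene_points,
                 depth_fused, depth_conf, contact_idx, K, scale,
                 w=(1.0, 0.5, 1.0, 0.01)):
    """Confidence-weighted energy over body pose, camera, and scene scale.
    body_verts: (V, 3) SMPL-X vertices in camera space; scene_points: (P, 3)
    scaffold points; depth_fused/depth_conf: (P,) fused metric depth and its
    confidence; contact_idx: (C,) indices of predicted contact vertices;
    joints_3d/joints_2d: model joints and detected keypoints; K: (3, 3)
    intrinsics; scale: learnable global scene scale (all values assumed)."""
    w_depth, w_contact, w_pen, w_reproj = w
    scaled = scene_points * scale

    # Depth alignment, down-weighted where the monocular depth is unreliable.
    e_depth = (depth_conf * (scaled[:, 2] - depth_fused) ** 2).mean()

    # Contact prior: predicted contact vertices should touch the scene.
    d_contact = torch.cdist(body_verts[contact_idx], scaled)      # (C, P)
    e_contact = d_contact.min(dim=1).values.mean()

    # Interpenetration (crude proxy): a body vertex should not sink behind
    # its nearest scene point along the camera z-axis.
    nearest = scaled[torch.cdist(body_verts, scaled).argmin(dim=1)]
    e_pen = torch.relu(body_verts[:, 2] - nearest[:, 2]).mean()

    # 2D reprojection consistency of model joints with detected keypoints.
    proj = (K @ joints_3d.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    e_reproj = ((proj - joints_2d) ** 2).sum(dim=1).mean()

    return (w_depth * e_depth + w_contact * e_contact
            + w_pen * e_pen + w_reproj * e_reproj)

In PhySIC, an objective of this kind is minimized jointly over body pose, camera parameters, and global scale, with occluded body regions masked out of the image-space terms as described above.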

PhySIC is highly efficient, requiring only 9 seconds for joint human-scene optimization and less than 27 seconds for the end-to-end reconstruction process. Moreover, the framework naturally handles multiple humans, enabling reconstruction of diverse human-scene interactions. Empirically, PhySIC substantially outperforms single-image baselines, reducing mean per-vertex scene error from 641 mm to 227 mm, halving the Procrustes-aligned mean per-joint position error (PA-MPJPE) to 42 mm, and improving the contact F1-score from 0.09 to 0.51. Qualitative results demonstrate that PhySIC yields realistic foot-floor interactions, natural seating postures, and plausible reconstructions of heavily occluded furniture. By converting a single image into a physically plausible 3D human-scene pair, PhySIC advances accessible and scalable 3D scene understanding.
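For reference, the reported metrics are standard and straightforward to reproduce. The NumPy sketch below (our own illustrative helper names, not the paper's evaluation code) computes PA-MPJPE via a similarity Procrustes alignment and the per-vertex contact F1-score.

import numpy as np

def pa_mpjpe(pred, gt):
    """Procrustes-aligned MPJPE: similarity-align pred (J, 3) to gt (J, 3),
    then average the per-joint Euclidean error."""
    P, G = pred - pred.mean(0), gt - gt.mean(0)
    U, S, Vt = np.linalg.svd(P.T @ G)
    if np.linalg.det(Vt.T @ U.T) < 0:        # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
    R = Vt.T @ U.T                           # optimal rotation
    s = S.sum() / (P ** 2).sum()             # optimal isotropic scale
    aligned = s * P @ R.T + gt.mean(0)
    return np.linalg.norm(aligned - gt, axis=1).mean()

def contact_f1(pred_mask, gt_mask):
    """F1-score over per-vertex binary contact labels."""
    tp = np.logical_and(pred_mask, gt_mask).sum()
    precision = tp / max(pred_mask.sum(), 1)
    recall = tp / max(gt_mask.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)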

3D Reconstruction Results

Below are our 3D reconstruction results. Click on any thumbnail to view its interactive 3D model.


Method Overview

An optimization framework that leverages robust human and scene initializations for physically plausible joint reconstruction.

Contact Estimation Results

Our method not only reconstructs the human and scene but also refines the vertex-level contact map between them. Below, we compare our final contact estimation with the state-of-the-art predictor, DECO, which we use for initialization.

Qualitative comparison of contact estimation results.

Our joint optimization robustly improves upon initial predictions, correcting noisy estimates and capturing nuanced interactions. Notice how PhySIC produces cleaner and more accurate contact maps for intricate body parts like feet, arms, and the torso across various poses.
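As a rough geometric illustration of what such a refinement amounts to (a simplified proximity-based proxy, not the paper's procedure; the function name, the threshold tau, and the near-contact heuristic are all assumptions), one can relabel per-vertex contacts from body-scene distances after the joint optimization:

import numpy as np
from scipy.spatial import cKDTree

def proximity_contact(body_verts, scene_points, init_contact, tau=0.02):
    """Relabel per-vertex contact from post-optimization geometry.
    body_verts: (V, 3); scene_points: (P, 3); init_contact: (V,) bool
    initial prediction (e.g. from DECO); tau: contact threshold in meters."""
    dist, _ = cKDTree(scene_points).query(body_verts)  # nearest scene distance
    refined = dist < tau                               # geometric contact
    # Keep initial predictions that are near-contact, so the prior is
    # corrected rather than discarded outright.
    refined |= init_contact & (dist < 2 * tau)
    return refined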

Acknowledgements

We thank the anonymous reviewers whose feedback helped improve this paper. This work is made possible by funding from the Carl Zeiss Foundation. This work is also funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 409792180 (Emmy Noether Programme, project: Real Virtual Humans) and the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting YX. PYM is supported by the Konrad Zuse School of Excellence in Learning and Intelligent Systems (ELIZA) through the DAAD programme Konrad Zuse School of Excellence in Artificial Intelligence, sponsored by the Federal Ministry of Education and Research. GPM is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 390727645.




BibTeX

@inproceedings{ym2025physic,
  author    = {Yalandur Muralidhar, Pradyumna and Xue, Yuxuan and Xie, Xianghui and Kostyrko, Margaret and Pons-Moll, Gerard},
  title     = {PhySIC: Physically Plausible 3D Human-Scene Interaction and Contact from a Single Image},
  booktitle = {SIGGRAPH Asia 2025 Conference Papers},
  year      = {2025},
}