Gen-3Diffusion: Realistic Image-to-3D Generation via 2D & 3D Diffusion Synergy

Yuxuan Xue, Xianghui Xie, Riccardo Marin, Gerard Pons-Moll
University of Tübingen, Tübingen AI Center,
Max Planck Institute for Informatics, Saarland Informatics Campus

Gen-3Diffusion reconstructs 3D Gaussian Splats with high-fidelity geometry and texture from a single RGB image within 22 seconds and 11 GB of GPU memory. Thanks to this efficient design, Gen-3Diffusion can perform realistic 3D generation at large scale.


Abstract

TL;DR: A 2D multi-view diffusion model and a 3D diffusion-based generative model can be synchronized during diffusion and reverse sampling, so that each provides complementary information that benefits the other.

Creating realistic 3D objects and clothed avatars from a single RGB image is an attractive yet challenging problem. Due to its ill-posed nature, recent works leverage powerful priors from 2D diffusion models pretrained on large datasets. Although 2D diffusion models demonstrate strong generalization capability, they cannot guarantee that the generated multi-view images are 3D consistent.
In this paper, we propose Gen-3Diffusion: Realistic Image-to-3D Generation via 2D & 3D Diffusion Synergy. We leverage a pre-trained 2D diffusion model and a 3D diffusion model via an elegantly designed process that synchronizes the two diffusion models at both training and sampling time.
The synergy between the 2D and 3D diffusion models brings two major advantages: 1) 2D helps 3D in generalization: the pretrained 2D model generalizes well to unseen images and provides strong shape priors for the 3D diffusion model; 2) 3D helps 2D in multi-view consistency: the 3D diffusion model enhances the 3D consistency of the 2D multi-view sampling process, resulting in more accurate multi-view generation.
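
To make the synchronization concrete, below is a minimal sketch of what one joint reverse-sampling loop could look like. All names and signatures (model_2d, model_3d, render_views, scheduler) are hypothetical placeholders illustrating the idea, not the paper's actual implementation: at each denoising step, the 2D model predicts clean multi-view images, the 3D model reconstructs Gaussian Splats from them, and the renderings of those Gaussians, which are 3D-consistent by construction, are fed back into the 2D sampling trajectory.

import torch

# Hypothetical sketch of the synchronized reverse sampling; every callable
# here is an assumed placeholder, not the authors' actual API.

@torch.no_grad()
def joint_reverse_sampling(model_2d, model_3d, render_views, scheduler,
                           cond_image, num_views=4, res=256):
    # Multi-view sampling starts from pure Gaussian noise.
    x_t = torch.randn(num_views, 3, res, res)
    gaussians = None
    for t in scheduler.timesteps:
        # 2D prior: predict clean multi-view images x0 from the noisy x_t.
        x0_2d = model_2d(x_t, t, cond_image)
        # 3D prior: reconstruct explicit 3D Gaussian Splats from the
        # (possibly inconsistent) 2D predictions ...
        gaussians = model_3d(x0_2d, t, cond_image)
        # ... and re-render the same camera views; these renderings are
        # 3D-consistent by construction.
        x0_3d = render_views(gaussians, num_views)
        # Continue the 2D trajectory from the 3D-consistent x0 estimate
        # (a DDIM-style step that noises x0_3d back to level t-1).
        x_t = scheduler.step(x0_3d, t, x_t)
    return gaussians, x_t

The sketch illustrates the two-way exchange described above: the 2D model supplies generalizable shape priors to the 3D reconstructor, while the 3D renderings keep the 2D sampling trajectory multi-view consistent.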



A 5-minute voice-over video of Gen-3Diffusion on YouTube.

Acknowledgement

We thank Garvita Tiwari, Zehao Yu, Chuqiao Li, Yuliang Xiu, Zhen Liu, Zeju Qiu, Siyao Li, Weiyang Liu, and other colleagues for their feedback, which helped improve this work. This work was made possible by funding from the Carl Zeiss Foundation. This work is also funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 409792180 (Emmy Noether Programme, project: Real Virtual Humans) and the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A. G. Pons-Moll is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 390727645. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Y. Xue. For this project, R. Marin has been supported by the innovation programme under the Marie Skłodowska-Curie grant agreement No. 101109330.




BibTeX

@article{xue2024gen3diffusion,
  author    = {Xue, Yuxuan and Xie, Xianghui and Marin, Riccardo and Pons-Moll, Gerard},
  title     = {Gen-3Diffusion: Realistic Image-to-3D Generation via 2D \& 3D Diffusion Synergy},
  journal   = {arXiv},
  year      = {2024},
}