Tencent has introduced Hunyuan World Model 1.0, an open generative AI model that creates three-dimensional virtual scenes from text or image prompts. The model is compatible with standard graphics platforms, including game engines, VR environments, and simulators. The company emphasizes that the new tool helps creators move quickly from idea to finished 3D content without the limitations of closed solutions.
We're thrilled to release & open-source Hunyuan3D World Model 1.0! This model enables you to generate immersive, explorable, and interactive 3D worlds from just a sentence or an image.
— Hunyuan (@TencentHunyuan) July 27, 2025
It's the industry's first open-source 3D world generation model, compatible with CG pipelines… pic.twitter.com/CpETdVO7vW
Hunyuan World Model 1.0 automatically separates objects in a scene, letting users individually move or edit elements such as cars, trees, or furniture. The sky is treated as a separate layer, which can serve as a dynamic light source for realistic rendering and interactive effects. The model supports “text-to-world” and “image-to-world” modes, and finished scenes are exported as 3D meshes for further work.
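To make the two modes concrete, here is a minimal illustrative sketch in Python. Every name in it (`WorldScene`, `generate_world`, the output file layout) is hypothetical and is not the real HunyuanWorld API; it only models the workflow the article describes: one text prompt or one input image goes in, and a panorama plus an exportable 3D mesh with separated layers comes out.

```python
# Hypothetical sketch only -- these names are NOT the real HunyuanWorld API.
# It models the described workflow: text-to-world or image-to-world input,
# producing a panorama, a 3D mesh, and separated object layers.
from __future__ import annotations

from dataclasses import dataclass
from pathlib import Path


@dataclass
class WorldScene:
    """Stand-in for a generated scene: a 360° panorama plus a 3D mesh."""
    panorama: str      # path to the equirectangular panorama image
    mesh: str          # path to the exported mesh (e.g. .glb/.obj)
    layers: list[str]  # separated elements, with the sky as its own layer


def generate_world(prompt: str | None = None,
                   image: str | None = None,
                   out_dir: str = "scene_out") -> WorldScene:
    """Dispatch on input type: text-to-world vs. image-to-world."""
    if (prompt is None) == (image is None):
        raise ValueError("Provide exactly one of: text prompt or input image")
    mode = "text" if prompt is not None else "image"
    out = Path(out_dir)
    # A real pipeline would run panorama generation followed by multi-level
    # 3D reconstruction here; this stub only returns the output layout.
    return WorldScene(
        panorama=str(out / f"{mode}_panorama.png"),
        mesh=str(out / f"{mode}_world.glb"),
        layers=["sky", "foreground_objects"],
    )


scene = generate_world(prompt="a foggy mountain village at dawn")
print(scene.mesh)
```

The key design point the sketch reflects is that both modes converge on the same output contract: a mesh file usable in standard CG pipelines, plus independently editable layers.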
Generated scenes are presented as interactive 360-degree panoramas in which users can look around and move to a limited extent; full freedom of movement through the 3D environment is not supported, but an additional Voyager module is available for more complex scenarios. The system combines panorama generation with multi-level 3D reconstruction, making it useful for designers, developers, and VR projects.
Hunyuan World Model 1.0 works with a variety of scene styles and supports compression and acceleration technologies for use in web and VR applications. The model is distributed openly through GitHub and Hugging Face, and an interactive demo can be explored on the sceneTo3D platform.