Tencent Launches Hunyuan World Model 1.5: China's First Real-Time Interactive 3D Generation Platform — Open-Sourcing the Full Pipeline to Democratize Next-Gen Worlds

Tencent officially released Hunyuan World Model 1.5 on December 16, 2025: the domestic industry's first real-time interactive 3D generation platform, capable of streaming 720P video at 24 FPS from text or image prompts. Powered by a native 3D diffusion backbone and DiT-accelerated inference, it supports instant world building, avatar navigation, and object interaction in browser-based sessions. Notably, Tencent has open-sourced the complete training framework, datasets, and inference code on GitHub, slashing entry barriers for developers and positioning China at the forefront of interactive 3D AI.
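
For a concrete sense of what "real-time interactive" means here, the sketch below shows a generic 24 FPS interaction loop: each user action conditions the next generated 720P frame, and the loop sleeps away whatever is left of the frame budget. Every name in it (WorldModel, run_session, the action strings) is a hypothetical placeholder, not Tencent's released API.

```python
# Hypothetical 24 FPS interaction loop; `WorldModel` stands in for the released
# inference code and is NOT Tencent's actual API.
import time
import numpy as np

TARGET_FPS = 24
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~41.7 ms per 720P frame


class WorldModel:
    """Placeholder model: the real system would condition each frame on the
    prompt, previously generated frames, and the latest user action."""

    def __init__(self, prompt: str):
        self.prompt = prompt
        self.frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # 720P RGB frame

    def step(self, action: str) -> np.ndarray:
        # Real system: one DiT-accelerated diffusion step conditioned on `action`.
        self.frame = np.roll(self.frame, 1, axis=1)  # dummy update so this runs
        return self.frame


def run_session(prompt: str, actions):
    """Yield one frame per action, pacing output at roughly TARGET_FPS."""
    model = WorldModel(prompt)
    for action in actions:
        start = time.perf_counter()
        frame = model.step(action)                    # generate the next frame
        elapsed = time.perf_counter() - start
        time.sleep(max(0.0, FRAME_BUDGET - elapsed))  # hold the 24 FPS budget
        yield frame


if __name__ == "__main__":
    frames = list(run_session("a misty mountain village", ["forward", "left", "forward"]))
    print(f"streamed {len(frames)} frames at a {TARGET_FPS} FPS budget")
```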

Moore Threads' LiteGS Takes Silver at SIGGRAPH Asia 2025: China's Homegrown 3DGS Tech Accelerates Training 10x While Open-Sourcing the Future of Reconstruction

On December 17, 2025, Moore Threads turned heads in the graphics world by clinching silver in the 3D Gaussian Splatting Reconstruction Challenge at SIGGRAPH Asia 2025 in Hong Kong. Its self-developed LiteGS framework, a full-stack co-optimized pipeline, delivers up to 10.8x faster training with half the parameters of the baseline, while lightweight variants reach equivalent quality with just 10% of the training time and 20% of the parameters. Fully open-sourced on GitHub today, LiteGS is igniting global collaboration in real-time 3D reconstruction for AR/VR, robotics, and beyond.
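
The headline numbers describe how fast the underlying Gaussian-splatting optimization converges. As a rough illustration of what that optimization looks like, and explicitly not LiteGS's code, the toy below fits 2D Gaussians with learnable means, scales, colors, and opacities to a target image by gradient descent on an L1 loss; a real 3DGS trainer swaps in a CUDA rasterizer, adds an SSIM term, and densifies/prunes Gaussians during training, which is the part LiteGS's full-stack optimizations accelerate.

```python
# Toy 2D differentiable Gaussian splatting fit (illustrative only, not LiteGS).
import torch
import torch.nn.functional as F


def toy_splat(means, log_scales, colors, opacities, H=64, W=64):
    """Render N isotropic 2D Gaussians into an H x W RGB image.
    Stands in for a real differentiable Gaussian rasterizer."""
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)                          # (H, W, 2)
    d2 = ((grid[None] - means[:, None, None, :]) ** 2).sum(-1)    # (N, H, W)
    sigma2 = torch.exp(log_scales)[:, None, None] ** 2
    weights = torch.sigmoid(opacities)[:, None, None] * torch.exp(-d2 / (2 * sigma2))
    return (weights[..., None] * torch.sigmoid(colors)[:, None, None, :]).sum(0)


def train(target, n_gauss=64, iters=300, lr=0.05):
    # Per-Gaussian parameters: 2D mean, isotropic log-scale, RGB color, opacity.
    means = torch.randn(n_gauss, 2, requires_grad=True)
    log_scales = torch.full((n_gauss,), -2.0, requires_grad=True)
    colors = torch.randn(n_gauss, 3, requires_grad=True)
    opacities = torch.zeros(n_gauss, requires_grad=True)
    opt = torch.optim.Adam([means, log_scales, colors, opacities], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = F.l1_loss(toy_splat(means, log_scales, colors, opacities), target)
        loss.backward()
        opt.step()
        # A real 3DGS trainer would also densify and prune Gaussians here.
    return loss.item()


if __name__ == "__main__":
    target = torch.rand(64, 64, 3)  # placeholder "ground-truth" view
    print("final L1 loss:", train(target))
```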

ByteDance Unveils Depth Anything 3: The Transformer That Reconstructs 3D Worlds from Any Views — SOTA Geometry Without the Hassle

ByteDance's Seed Team launched Depth Anything 3 (DA3) on November 14, 2025: a groundbreaking visual spatial reconstruction model that fuses an arbitrary set of images into consistent 3D geometry using a single plain transformer and depth-ray prediction. Open-sourced on GitHub with three model series (Giant for any-view, Metric for scale-aware, Nested for metric fusion), it outperforms VGGT by 35.7% in pose accuracy and 23.6% in reconstruction while matching Depth Anything 2's monocular detail. From robotics to VR, DA3's one-pass inference cuts pipeline complexity, already powering Blender add-ons and Hugging Face demos, a masterstroke of minimalism in 3D perception.
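
The depth-ray formulation is what lets predictions from many views land in a single coherent point cloud: if a network emits a per-pixel depth value and a world-space ray, unprojection is just point = origin + depth * direction, the same for every view. The numpy sketch below illustrates that representation for a pinhole camera; it is a conceptual reconstruction under that assumption, not DA3's code, and the paper's exact parameterization may differ.

```python
# Illustrative depth-ray unprojection into a shared world frame (not DA3 code).
import numpy as np


def rays_from_pinhole(H, W, K, cam_to_world):
    """Build world-space rays for a pinhole camera (3x3 intrinsics K, 4x4 pose)."""
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)           # (H, W, 3)
    dirs_cam = pix @ np.linalg.inv(K).T                         # camera-space rays
    dirs_world = dirs_cam @ cam_to_world[:3, :3].T              # rotate into world
    dirs_world /= np.linalg.norm(dirs_world, axis=-1, keepdims=True)
    origins = np.broadcast_to(cam_to_world[:3, 3], dirs_world.shape)
    return origins, dirs_world


def unproject(depth, ray_origins, ray_dirs):
    """depth: (H, W); ray_origins, ray_dirs: (H, W, 3) in world coordinates.
    Returns an (H*W, 3) point cloud: p = o + depth * r (rays are unit-length)."""
    pts = ray_origins + depth[..., None] * ray_dirs
    return pts.reshape(-1, 3)


if __name__ == "__main__":
    H, W = 4, 6
    K = np.array([[100.0, 0.0, W / 2], [0.0, 100.0, H / 2], [0.0, 0.0, 1.0]])
    pose = np.eye(4)                        # camera at the world origin
    origins, dirs = rays_from_pinhole(H, W, K, pose)
    depth = np.full((H, W), 2.0)            # stand-in for a predicted depth map
    cloud = unproject(depth, origins, dirs)
    print(cloud.shape)                      # (24, 3): one 3D point per pixel
```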
