Tencent Unleashes Hunyuan 2.0: 406B MoE Powerhouse with Industry-Leading Reasoning Efficiency and 256K Context Mastery

Tencent officially launched Hunyuan 2.0 (Tencent HY 2.0) on December 5, 2025, in two variants: HY 2.0 Think for deep reasoning and HY 2.0 Instruct for rapid responses. Built on a 406B-parameter MoE architecture with 32B active parameters per token, it supports a 256K context window while delivering top-tier inference speed and efficiency. The models are already live in the Yuanbao and ima apps, with Tencent Cloud APIs open, and early benchmarks show strong gains in math, science, coding, and long-context tasks, positioning it as a domestic frontrunner against global giants.
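The "406B total, 32B active" split is the defining trick of MoE models: a gating network picks only a few experts per token, so most weights sit idle on any given forward pass. A minimal sketch of top-k expert routing, with hypothetical sizes and names (this is a generic illustration, not Tencent's implementation):

```python
# Generic top-k MoE routing sketch (hypothetical sizes, not Hunyuan's code).
# Total parameters scale with n_experts, but each token only multiplies
# through its top_k chosen experts -- hence "active" params << total params.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 16, 2

# Each expert is a simple linear layer here; total weight count is
# n_experts * d_model^2, but only top_k experts run per token.
experts = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
           for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts)) / np.sqrt(d_model)

def moe_layer(x):
    """Route each token to its top-k experts, mix outputs by gate weight."""
    logits = x @ gate_w                        # (n_tokens, n_experts)
    out = np.zeros_like(x)
    for i, tok in enumerate(x):
        top = np.argsort(logits[i])[-top_k:]   # indices of the top-k experts
        w = np.exp(logits[i][top])
        w /= w.sum()                           # softmax over selected experts
        for weight, e in zip(w, top):
            out[i] += weight * (tok @ experts[e])
    return out

tokens = rng.standard_normal((4, d_model))
y = moe_layer(tokens)
print(y.shape)                                 # output keeps the input shape
print(f"experts active per token: {top_k}/{n_experts}")
```

With 2 of 16 experts active, only 12.5% of the expert weights are exercised per token, which is the same ratio logic behind Hunyuan 2.0's 32B-of-406B figure.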

Mistral AI Unleashes Mistral 3: The Apache 2.0 Open-Source Powerhouse Family Crushing Proprietary Giants with Edge-to-Frontier Multimodal Might

Mistral AI launched the Mistral 3 series on December 2, 2025: a blockbuster family of 10 fully open-weight multimodal models under the permissive Apache 2.0 license, spanning Ministral 3 (3B/8B/14B dense variants in base, instruct, and reasoning flavors) up to the flagship Mistral Large 3 (675B total parameters MoE with 41B active). Optimized for everything from drones to datacenters, the models deliver strong image understanding, robust non-English performance, and SOTA efficiency, debuting at #2 among open-source non-reasoning models on LMSYS Arena while cutting token output by 10x in real-world chats. This full-line return to unrestricted commercial openness is a direct gut punch to closed ecosystems like OpenAI and Google.
