DeepSeek V3.2 Official Release: Reasoning-First Powerhouse with Integrated Tool-Thinking for Unmatched Agentic AI
DeepSeek AI launched the official DeepSeek-V3.2 on December 1, 2025: a frontier large language model series optimized for reasoning and agent performance, now available via API, web, app, and Hugging Face. The release introduces DeepSeek Sparse Attention for long-context efficiency, a scalable RL framework that DeepSeek pits against GPT-5, and a "Thinking in Tool-Use" mode that fuses step-by-step deliberation with external tool calls. V3.2 and its high-compute V3.2-Speciale variant achieve gold-medal IMO/IOI scores while cutting inference costs, marking DeepSeek's boldest agentic leap yet and giving developers seamless hybrid reasoning-plus-tooling workflows.
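For developers who want to try the tool-thinking workflow, here is a minimal sketch against DeepSeek's OpenAI-compatible endpoint. It assumes the `deepseek-reasoner` model alias points at the V3.2 reasoning model and that it accepts standard tool definitions during deliberation, which is inferred from the announcement rather than verified API docs; the `evaluate_expression` tool is a hypothetical example.

```python
import json
import os

from openai import OpenAI  # DeepSeek exposes an OpenAI-compatible API

# Assumed endpoint and model name; check DeepSeek's API docs for the
# identifiers actually published with the V3.2 release.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

# A toy tool the model may call mid-reasoning, declared in the
# standard OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "evaluate_expression",
        "description": "Evaluate a basic arithmetic expression.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

messages = [{"role": "user", "content": "What is 37 * 91, and is it prime?"}]

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed alias for the V3.2 reasoning model
    messages=messages,
    tools=tools,
)

choice = response.choices[0]
if choice.message.tool_calls:
    # The model paused its reasoning to request a tool result.
    for call in choice.message.tool_calls:
        args = json.loads(call.function.arguments)
        print(f"Model requested {call.function.name}({args})")
else:
    print(choice.message.content)
```

In a full agent loop you would execute each requested tool, append the result as a `tool` message, and call the API again so the model can resume its deliberation with the new evidence.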

DeepSeek's Powerhouse Return: Open-Sourcing DeepSeek-Math-V2, the IMO Gold-Medal Math Model Crushing Theorems with Self-Verification
DeepSeek AI roared back on November 27, 2025, with DeepSeek-Math-V2, a groundbreaking open-source math reasoning model built on DeepSeek-V3.2-Exp-Base. It achieves gold-level scores on IMO 2025 and CMO 2024, plus a near-perfect 118/120 on Putnam 2024 via scaled test-time compute. A dual generator-verifier architecture produces self-verifiable proofs, emphasizing rigorous step-by-step logic over bare final answers, and the model outpaces Claude 4 and Gemini on IMO-ProofBench. Weights dropped on Hugging Face under Apache 2.0, democratizing Olympiad-grade math AI for researchers and educators worldwide.
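The self-verification idea pairs a proof generator with a verifier that scores each candidate, and "scaled test-time compute" then amounts to sampling more candidates until one passes. The sketch below is an illustrative best-of-n loop under that reading, not DeepSeek's actual code; `generate`, `verify`, and the acceptance threshold are hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Illustrative sketch of verifier-guided test-time scaling. The callables
# stand in for the generator and verifier components described above.

@dataclass
class Candidate:
    proof: str
    score: float  # verifier confidence that every step is sound

def best_of_n(
    problem: str,
    generate: Callable[[str], str],
    verify: Callable[[str, str], float],
    n: int = 16,
    threshold: float = 0.9,
) -> Optional[Candidate]:
    """Sample n candidate proofs and keep the highest-scoring one,
    returning it only if the verifier accepts it. Raising n spends
    more test-time compute for a better chance at a sound proof."""
    best: Optional[Candidate] = None
    for _ in range(n):
        proof = generate(problem)
        score = verify(problem, proof)
        if best is None or score > best.score:
            best = Candidate(proof, score)
    if best is not None and best.score >= threshold:
        return best
    return None  # verifier rejected all candidates; caller may resample

# Toy usage with stubbed generator and verifier:
winner = best_of_n(
    "Prove that the sum of two even integers is even.",
    generate=lambda p: "Let a = 2m and b = 2n; then a + b = 2(m + n).",
    verify=lambda p, proof: 0.95,
)
print(winner)
```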
