
DeepSeek Releases V3.2 and V3.2-Speciale AI Models


Image sourced from bloomberg.com

DeepSeek, the Chinese AI startup from Hangzhou, launched DeepSeek-V3.2 and DeepSeek-V3.2-Speciale this week. These reasoning-focused models target agent tasks and complex problem-solving. Bloomberg says V3.2 matches OpenAI’s GPT-5 on multiple reasoning benchmarks. V3.2 replaces an experimental version from September, Investing.com reports.

Key Improvements

DeepSeek invested more heavily in post-training for these models: it now accounts for over 10% of pre-training costs, up from about 1% two years ago, per The Decoder. The models use DeepSeek Sparse Attention, which speeds up long-context processing by attending only to the most relevant parts of past input.
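
The core idea behind sparse attention of this kind can be illustrated with a toy sketch: instead of computing attention weights over every cached token, the query is scored against all keys and only the top-k highest-scoring positions are kept. This is a minimal illustration of the general top-k selection principle, not DeepSeek's actual DSA implementation, which uses a learned "lightning indexer" to pick tokens.

```python
import numpy as np

def topk_sparse_attention(q, K, V, k=8):
    """Toy single-query attention that only attends to the k best-matching keys."""
    # Score the query against every cached key (scaled dot product).
    scores = K @ q / np.sqrt(q.shape[0])
    # Keep only the top-k highest-scoring positions: the "key parts" of past input.
    top = np.argsort(scores)[-k:]
    # Softmax over the selected scores only.
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()
    # Weighted sum over just the selected values -> O(k) instead of O(n) mixing.
    return w @ V[top]

rng = np.random.default_rng(0)
K = rng.normal(size=(128, 16))   # 128 cached key vectors, dim 16
V = rng.normal(size=(128, 16))   # matching value vectors
q = rng.normal(size=16)          # current query
out = topk_sparse_attention(q, K, V, k=8)
print(out.shape)  # (16,)
```

With k equal to the full sequence length, this reduces exactly to dense attention; the savings come from choosing k much smaller than the context length.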

The company created data from specialist models in math, coding, logic, and agents. This includes 1,800 synthetic environments and 85,000 instructions. V3.2 adds “Thinking in Tool-Use,” blending reasoning with external tools.

Benchmark Scores

V3.2 hits GPT-5 level on general tasks, according to DeepSeek and Analytics India Magazine. Specific results from The Decoder:

  • AIME 2025 math: 93.1% (vs. GPT-5 High 94.6%)
  • LiveCodeBench coding: 83.3% (vs. GPT-5 84.5%)
  • SWE Multilingual (GitHub issues): 70.2% (beats GPT-5’s 55.3%)
  • Terminal Bench 2.0: 46.4% (behind Gemini 3 Pro’s 54.2%)

V3.2-Speciale pushes harder on tough problems. Mint notes it earned gold at the 2025 International Mathematical Olympiad and the International Olympiad in Informatics, and leads on CodeForces ahead of GPT-5 High and Gemini 3 Pro.

From OfficeChai, Speciale scores include:

  • AIME 2025: 96.0% (beats Gemini 3 Pro 95.0%, GPT-5 High 94.6%)
  • HMMT 2025: 99.2% (beats Gemini 3 Pro 97.5%)
  • CodeForces rating: 2701 (near Gemini 3 Pro 2708)
  • Humanity’s Last Exam: 30.6% (behind Gemini 3 Pro 37.7%)

Speciale uses more tokens to get there: about 77,000 on average per CodeForces problem, versus Gemini 3 Pro's 22,000.

Availability and Access

V3.2 is live now as the default on DeepSeek's website, app, and API. V3.2-Speciale stays API-only until at least December 15, 2025. Both went open-source on Hugging Face under Apache 2.0. API pricing matches prior models, though Speciale does not support tool calls.
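
Since DeepSeek's API follows the OpenAI-compatible chat-completions format, a call would look roughly like the sketch below. The model id `"deepseek-reasoner"` and the exact endpoint serving V3.2 are assumptions here; check DeepSeek's API docs before using.

```python
import json
import urllib.request

API_KEY = "sk-..."  # your DeepSeek API key

# Build an OpenAI-style chat-completions payload. The model id is an
# assumption; V3.2 is reportedly served as the default behind existing ids.
payload = {
    "model": "deepseek-reasoner",
    "messages": [
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
}

req = urllib.request.Request(
    "https://api.deepseek.com/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# response = urllib.request.urlopen(req)  # uncomment with a real key
print(payload["model"])
```

Note that per the pricing details above, a payload with a `tools` field would not work against Speciale, which skips tool calls.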

DeepSeek also released DeepSeekMath-V2 recently, which hit IMO gold.

More stories at letsjustdoai.com

Seb

I love AI and automation, and I enjoy seeing how it can make my life easier. I have a background in computational sciences and have worked in academia, in industry, and as a consultant. This is my journey of learning and using AI.
