DeepSeek-V4-Pro vs DeepSeek V3.2
Across 8 shared benchmarks, DeepSeek-V4-Pro leads overall: it wins 5, DeepSeek V3.2 wins 3, with no ties and an average score difference of +102.88 in DeepSeek-V4-Pro's favor.
DeepSeek-V4-Pro
DeepSeek-AI · 2026-04-24 · Reasoning model
DeepSeek V3.2
DeepSeek-AI · 2025-12-01 · Reasoning model
DeepSeek-V4-Pro: 5 wins (63%) · DeepSeek V3.2: 3 wins (38%)
Benchmark scores
Grouped by capability, sorted by largest gap within each. 8 shared benchmarks.
Coding and Software Engineering
DeepSeek-V4-Pro leads 3/4.

| Benchmark | DeepSeek-V4-Pro | DeepSeek V3.2 | Diff |
|---|---|---|---|
| CodeForces | 3,206 (rank 2/16, Best, No Tools) | 2,386 (rank 11/16, Thinking, No Tools) | +820 |
| LiveCodeBench | 56.80 (rank 73/118, Normal, No Tools) | 83.30 (rank 19/118, Thinking, No Tools) | -26.50 |
| SWE-Bench Pro - Public | 52.10 (rank 22/36, Normal, With Tools) | | |
Specs
| Field | DeepSeek-V4-Pro | DeepSeek V3.2 |
|---|---|---|
| Publisher | DeepSeek-AI | DeepSeek-AI |
| Release date | 2026-04-24 | 2025-12-01 |
| Model type | Reasoning model | Reasoning model |
| Architecture | MoE | MoE |
| Parameters | 1.6T | 671B |
| Context length | 1M | 128K |
| Max output | 384,000 tokens | 8,192 tokens |
API pricing
Prices use DataLearner records when available; missing fields are not inferred.
| Item | DeepSeek-V4-Pro | DeepSeek V3.2 |
|---|---|---|
| Text input | $1.74 / 1M tokens | Not public |
| Text output | $3.48 / 1M tokens | Not public |
| Cache read | $0.145 / 1M tokens | Not public |
| Cache write | $1.74 / 1M tokens | Not public |
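As a worked example of applying these per-token rates, the sketch below estimates the cost of a single hypothetical request against DeepSeek-V4-Pro using the prices listed above (the token counts and the cache-billing assumption are illustrative, not taken from the provider's billing rules):

```python
# DeepSeek-V4-Pro rates from the pricing table above (USD per 1M tokens).
INPUT_RATE = 1.74
OUTPUT_RATE = 3.48
CACHE_READ_RATE = 0.145

def request_cost(input_tokens, output_tokens, cached_tokens=0):
    """Estimate the USD cost of one request.

    Assumes cached input tokens are billed at the cache-read rate
    while the remaining input tokens pay the full input rate.
    """
    uncached = input_tokens - cached_tokens
    return (uncached * INPUT_RATE
            + cached_tokens * CACHE_READ_RATE
            + output_tokens * OUTPUT_RATE) / 1_000_000

# A 100K-token prompt (half served from cache) with a 10K-token answer:
cost = request_cost(100_000, 10_000, cached_tokens=50_000)
print(f"${cost:.2f}")
```

At these rates, cache hits matter: the fully-uncached version of the same request would cost roughly 60% more than the half-cached one.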
One or both models have incomplete public pricing.
Summary
- DeepSeek-V4-Pro leads in: Coding and Software Engineering (3/4), AI Agent - Information Search (1/1), AI Agent - Tool Usage (1/1)
- DeepSeek V3.2 leads in: General Knowledge (2/2)
On average across the 8 shared benchmarks, DeepSeek-V4-Pro scores 102.88 higher.
Largest single-benchmark gap: CodeForces — DeepSeek-V4-Pro 3,206 vs DeepSeek V3.2 2,386 (+820).
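The headline figures (win counts and average score difference) can be reproduced mechanically from the per-benchmark diffs. A minimal sketch, fed only the two complete diffs shown on this page (the remaining benchmark diffs are not reproduced here, so the result is illustrative rather than a recomputation of the +102.88 figure):

```python
def summarize(diffs):
    """Aggregate per-benchmark score diffs (model A minus model B).

    Returns (A wins, B wins, ties, mean diff). Note the mean mixes
    heterogeneous scales (Elo ratings vs. percentage scores), so it
    is a rough headline number, not a calibrated comparison.
    """
    a_wins = sum(1 for d in diffs if d > 0)
    b_wins = sum(1 for d in diffs if d < 0)
    ties = sum(1 for d in diffs if d == 0)
    mean = sum(diffs) / len(diffs)
    return a_wins, b_wins, ties, mean

# CodeForces (+820) and LiveCodeBench (-26.50) from the table above:
print(summarize([820, -26.50]))  # → (1, 1, 0, 396.75)
```

Because Elo-style gaps (hundreds of points) dwarf percentage-point gaps, a single CodeForces result can dominate the mean, which is why per-benchmark wins are reported alongside it.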
Page generated from structured model, pricing and benchmark records. No real-time LLM is used to write the prose.