Gemma 4 31B vs Kimi K2.5
Across 5 shared benchmarks, Kimi K2.5 leads overall: Gemma 4 31B wins 1, Kimi K2.5 wins 4, with 0 ties and an average score difference of -5.72 (Gemma 4 31B minus Kimi K2.5).
Gemma 4 31B
DeepMind · 2026-04-02 · AI model
Kimi K2.5
Moonshot AI · 2026-01-27 · Multimodal model
Benchmark scores
Grouped by capability, sorted by largest gap within each. 5 shared benchmarks.
General Knowledge
Kimi K2.5 leads 2/3

| Benchmark | Gemma 4 31B | Kimi K2.5 | Diff |
|---|---|---|---|
| HLE | 26.50 (rank 75/149, Thinking, With Tools + Internet) | 50.20 (rank 17/149, Thinking, With Tools) | -23.70 |
| MMLU Pro | 85.20 (rank 21/124, Thinking, No Tools) | 78.50 (rank 64/124, Thinking, No Tools) | +6.70 |
| GPQA Diamond | 84.30 (rank 50/175, Thinking, No Tools) | 87.60 | -3.30 |
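The Diff column is simply each Gemma 4 31B score minus the corresponding Kimi K2.5 score, and the head-to-head summary (win counts and average gap) follows mechanically from those diffs. A minimal sketch of that tally, fed only the three General Knowledge rows shown above; the page-level figures (1 vs 4 wins, -5.72 average) use all 5 shared benchmarks, two of which are not tabulated here:

```python
# Tally a head-to-head summary from per-benchmark scores.
def head_to_head(rows):
    """rows: list of (benchmark, score_a, score_b) tuples."""
    diffs = [a - b for _, a, b in rows]       # score_a minus score_b, per benchmark
    wins_a = sum(1 for d in diffs if d > 0)   # benchmarks where model A leads
    wins_b = sum(1 for d in diffs if d < 0)   # benchmarks where model B leads
    ties = len(diffs) - wins_a - wins_b
    avg_diff = sum(diffs) / len(diffs)        # mean gap, A minus B
    return wins_a, wins_b, ties, avg_diff

# The three General Knowledge rows from the table above.
rows = [
    ("HLE", 26.50, 50.20),
    ("MMLU Pro", 85.20, 78.50),
    ("GPQA Diamond", 84.30, 87.60),
]
wins_a, wins_b, ties, avg = head_to_head(rows)
print(wins_a, wins_b, ties, round(avg, 2))  # 1 2 0 -6.77
```

On this three-row subset Gemma wins 1, Kimi wins 2, and the average diff is -6.77; the remaining two shared benchmarks pull the overall average to -5.72.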
Specs
| Field | Gemma 4 31B | Kimi K2.5 |
|---|---|---|
| Publisher | DeepMind | Moonshot AI |
| Release date | 2026-04-02 | 2026-01-27 |
| Model type | AI model | Multimodal model |
| Architecture | Dense | MoE |
| Parameters | 31.0 | 10000.0 |
| Context length | 256K | 256K |
| Max output | 32768 | 16384 |
API pricing
Prices use DataLearner records when available; missing fields are not inferred.
| Item | Gemma 4 31B | Kimi K2.5 |
|---|---|---|
| Text input | Not public | $0.60 / 1M tokens |
| Text output | Not public | $3.00 / 1M tokens |
| Cache read | Not public | $0.10 / 1M tokens |
One or both models have incomplete public pricing.
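With the published Kimi K2.5 rates, the cost of a request is a per-million-token calculation, with cache-hit input tokens billed at the lower cache-read rate instead of the normal input rate. A minimal sketch using the rates from the table above; the token counts and the `request_cost` helper are made-up illustration values, not part of any official API:

```python
# Estimate request cost from per-million-token rates (Kimi K2.5 rates above).
RATES = {"input": 0.60, "output": 3.00, "cache_read": 0.10}  # USD per 1M tokens

def request_cost(input_tokens, output_tokens, cached_tokens=0):
    """Cached input tokens are billed at the cache-read rate, not the input rate."""
    fresh = input_tokens - cached_tokens
    usd = (fresh * RATES["input"]
           + cached_tokens * RATES["cache_read"]
           + output_tokens * RATES["output"]) / 1_000_000
    return usd

# Hypothetical request: 100K input tokens (60K served from cache), 5K output.
print(f"${request_cost(100_000, 5_000, cached_tokens=60_000):.4f}")  # $0.0450
```

Gemma 4 31B has no public per-token pricing on record, so no comparable estimate is possible for it.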
Summary
- Kimi K2.5 leads in: General Knowledge (2/3), Coding and Software Engineering (1/1), Math and Reasoning (1/1)
On average across the 5 shared benchmarks, Kimi K2.5 scores 5.72 higher.
Largest single-benchmark gap: HLE — Gemma 4 31B 26.50 vs Kimi K2.5 50.20 (-23.70).
Page generated from structured model, pricing and benchmark records. No real-time LLM is used to write the prose.