Gemma 4 31B vs GLM-5
Across 4 shared benchmarks, GLM-5 leads overall: Gemma 4 31B wins 0, GLM-5 wins 4, with 0 ties and an average score difference of -10.47 (Gemma 4 31B minus GLM-5).
Gemma 4 31B
DeepMind · 2026-04-02 · AI model
GLM-5
Zhipu AI (智谱AI) · 2026-02-11 · AI model
Win tally: Gemma 4 31B 0 wins (0%) · GLM-5 4 wins (100%)
Benchmark scores
Grouped by capability, sorted by largest gap within each. 4 shared benchmarks.
General Knowledge
GLM-5 leads 2/2.

| Benchmark | Gemma 4 31B | GLM-5 | Diff |
|---|---|---|---|
| HLE | 26.50 (rank 75/149, Thinking with Tools + Internet) | 50.40 (rank 15/149, Thinking with Tools) | -23.90 |
| GPQA Diamond | 84.30 (rank 50/175, Thinking, no Tools) | 86.00 (rank 40/175, Thinking, no Tools) | -1.70 |
Agent Level Benchmark
GLM-5 leads 1/1; no per-benchmark score rows are present for this group.
Specs
| Field | Gemma 4 31B | GLM-5 |
|---|---|---|
| Publisher | DeepMind | Zhipu AI (智谱AI) |
| Release date | 2026-04-02 | 2026-02-11 |
| Model type | AI model | AI model |
| Architecture | Dense | MoE |
| Parameters (B) | 31.0 | 7440.0 |
| Context length | 256K | 200K |
| Max output | 32,768 tokens | 131,072 tokens |
API pricing
Prices use DataLearner records when available; missing fields are not inferred.
| Item | Gemma 4 31B | GLM-5 |
|---|---|---|
| Text input | Not public | $1 / 1M tokens |
| Text output | Not public | $3.20 / 1M tokens |
| Cache write | Not public | $0.20 / 1M tokens |
One or both models have incomplete public pricing.
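The listed per-million-token rates can be turned into a per-request cost estimate. A minimal sketch using the GLM-5 list prices above; the token counts in the example are made-up illustration values, not source data:

```python
# GLM-5 list prices from the table above (USD per 1M tokens).
GLM5_INPUT_PER_M = 1.00    # text input
GLM5_OUTPUT_PER_M = 3.20   # text output

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed per-million rates."""
    return (input_tokens * GLM5_INPUT_PER_M
            + output_tokens * GLM5_OUTPUT_PER_M) / 1_000_000

# Hypothetical example: a 10k-token prompt with a 2k-token reply.
print(round(request_cost(10_000, 2_000), 4))  # 0.0164
```

Gemma 4 31B is omitted because its pricing is not public.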
Summary
- GLM-5 leads in: General Knowledge (2/2), Agent Level Benchmark (1/1), Math and Reasoning (1/1)
- On average across the 4 shared benchmarks, GLM-5 scores 10.47 higher.
- Largest single-benchmark gap: HLE, where Gemma 4 31B scores 26.50 vs GLM-5's 50.40 (-23.90).
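The headline average gap is simply the mean of the per-benchmark score differences (Gemma 4 31B minus GLM-5). A minimal sketch using only the two diffs published on this page; the other two shared benchmarks are not listed here, so this partial mean differs from the reported -10.47:

```python
# Per-benchmark score differences (Gemma 4 31B minus GLM-5).
# Only the HLE and GPQA Diamond diffs appear on this page, so
# this is a partial mean over 2 of the 4 shared benchmarks.
diffs = {
    "HLE": -23.90,
    "GPQA Diamond": -1.70,
}

avg_gap = sum(diffs.values()) / len(diffs)
print(round(avg_gap, 2))  # -12.8
```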
Page generated from structured model, pricing and benchmark records. No real-time LLM is used to write the prose.