GLM-5 vs GLM-4.5
Across 3 shared benchmarks, GLM-5 leads overall: GLM-5 wins 3, GLM-4.5 wins 0, with 0 ties and an average score difference of +18.83.
Benchmark scores
Grouped by capability, sorted by largest gap within each. 3 shared benchmarks.
General Knowledge
GLM-5 leads 2/2. Scores are shown with leaderboard rank and evaluation mode in parentheses.

| Benchmark | GLM-5 | GLM-4.5 | Diff |
|---|---|---|---|
| HLE | 50.40 (rank 15/149, thinking + tool use) | 14.40 (rank 113/149, thinking) | +36.00 |
| GPQA Diamond | 86.00 (rank 40/175, thinking, no tools) | 79.10 (rank 77/175, thinking) | +6.90 |
Coding and Software Engineering
GLM-5 leads 1/1. Per-benchmark scores for this category are not present in the source records.
Specs
| Field | GLM-5 | GLM-4.5 |
|---|---|---|
| Publisher | 智谱AI | 智谱AI |
| Release date | 2026-02-11 | 2025-07-28 |
| Model type | AI model | Reasoning model |
| Architecture | MoE | MoE |
| Parameters (total) | 744B | 355B |
| Context length | 200K | 128K |
| Max output | 131,072 tokens | 97,280 tokens |
API pricing
Prices use DataLearner records when available; missing fields are not inferred.
| Item | GLM-5 | GLM-4.5 |
|---|---|---|
| Text input | $1 / 1M tokens | $0.6 / 1M tokens |
| Text output | $3.2 / 1M tokens | $2.2 / 1M tokens |
| Cache write | $0.2 / 1M tokens | Not public |
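The per-million-token prices above translate directly into a per-request cost. A minimal sketch, using the listed input/output prices (the `estimate_cost` helper is illustrative, not part of any official SDK; cache-write pricing is omitted since it is not public for GLM-4.5):

```python
# USD per 1M tokens, taken from the pricing table above.
PRICES = {
    "GLM-5":   {"input": 1.0, "output": 3.2},
    "GLM-4.5": {"input": 0.6, "output": 2.2},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request: tokens times rate, scaled to per-million."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10K-token prompt with a 2K-token completion.
print(f"GLM-5:   ${estimate_cost('GLM-5', 10_000, 2_000):.4f}")    # $0.0164
print(f"GLM-4.5: ${estimate_cost('GLM-4.5', 10_000, 2_000):.4f}")  # $0.0104
```

At this example workload, GLM-5 costs roughly 1.6x more per request than GLM-4.5.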
Summary
- GLM-5 leads in: General Knowledge (2/2), Coding and Software Engineering (1/1)
- On average across the 3 shared benchmarks, GLM-5 scores 18.83 points higher.
- Largest single-benchmark gap: HLE — GLM-5 50.40 vs GLM-4.5 14.40 (+36.00).
Page generated from structured model, pricing and benchmark records. No real-time LLM is used to write the prose.