MiniMax-M2.7 vs GLM-5
Across 9 shared benchmarks, MiniMax-M2.7 leads overall by win count: MiniMax-M2.7 wins 5, GLM-5 wins 3, with 1 tie and an average score difference of -2.63 (MiniMax-M2.7 minus GLM-5).
MiniMax-M2.7
MiniMaxAI · 2026-03-18 · Reasoning model
GLM-5
智谱AI · 2026-02-11 · AI model
MiniMax-M2.7: 5 wins (56%) · Ties: 1 (11%) · GLM-5: 3 wins (33%)
Benchmark scores
Grouped by capability, sorted by largest gap within each. 9 shared benchmarks.
Agent Level Benchmark
GLM-5 leads 2/2

| Benchmark | MiniMax-M2.7 | GLM-5 | Diff |
|---|---|---|---|
| τ²-Bench - Telecom | 85 (rank 24/35, Thinking (With Tools)) | 98 (rank 5/35, Thinking (With Tools)) | -13 |
| Terminal Bench Hard | 39 (rank 5/13, Thinking (With Tools)) | 43 (rank 2/13, Thinking (With Tools)) | -4 |
Claw-style Agent Evaluation
MiniMax-M2.7 leads 1/2
Specs
| Field | MiniMax-M2.7 | GLM-5 |
|---|---|---|
| Publisher | MiniMaxAI | 智谱AI |
| Release date | 2026-03-18 | 2026-02-11 |
| Model type | Reasoning model | AI model |
| Architecture | MoE | MoE |
| Parameters | 229B | 744B |
| Context length | 200K | 200K |
| Max output | 204,800 tokens | 131,072 tokens |
API pricing
Prices use DataLearner records when available; missing fields are not inferred.
| Item | MiniMax-M2.7 | GLM-5 |
|---|---|---|
| Text input | $0.30 / 1M tokens | $1.00 / 1M tokens |
| Text output | $1.20 / 1M tokens | $3.20 / 1M tokens |
| Cache read | $0.06 / 1M tokens | Not public |
| Cache write | $0.375 / 1M tokens | $0.20 / 1M tokens |
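Per-request cost follows directly from the per-1M-token rates in the table above. A minimal sketch, using only the text input/output prices (cache pricing is ignored, and the token counts in the example are hypothetical):

```python
# USD per 1M tokens, taken from the pricing table above.
PRICES = {
    "MiniMax-M2.7": {"input": 0.30, "output": 1.20},
    "GLM-5": {"input": 1.00, "output": 3.20},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for one request, ignoring cache discounts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 20K-token prompt with a 2K-token completion.
print(f"{request_cost('MiniMax-M2.7', 20_000, 2_000):.4f}")  # 0.0084
print(f"{request_cost('GLM-5', 20_000, 2_000):.4f}")         # 0.0264
```

At these rates, GLM-5 costs roughly 3x as much as MiniMax-M2.7 for the same token mix.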
Summary
- MiniMax-M2.7 leads in: Claw-style Agent Evaluation (1/2), Instruction Following (1/1), Long Context (1/1), Productivity Knowledge (1/1)
- GLM-5 leads in: Agent Level Benchmark (2/2)
- Tied in: General Knowledge
On average across the 9 shared benchmarks, GLM-5 scores 2.63 points higher.
Largest single-benchmark gap: HLE — MiniMax-M2.7 28 vs GLM-5 50.40 (-22.40).
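The headline stats (wins, ties, mean score difference) reduce to simple aggregation over per-benchmark score pairs. A minimal sketch using only the three pairs actually shown on this page (the other six benchmarks' scores are not listed, so these totals will not match the page's 5/1/3 split):

```python
# (MiniMax-M2.7 score, GLM-5 score) for the three benchmarks listed above.
scores = {
    "τ²-Bench - Telecom": (85.0, 98.0),
    "Terminal Bench Hard": (39.0, 43.0),
    "HLE": (28.0, 50.4),
}

wins_a = sum(1 for a, b in scores.values() if a > b)
wins_b = sum(1 for a, b in scores.values() if b > a)
ties = sum(1 for a, b in scores.values() if a == b)
mean_diff = sum(a - b for a, b in scores.values()) / len(scores)

print(wins_a, ties, wins_b)   # 0 0 3
print(round(mean_diff, 2))    # -13.13
```

The full-page figure of -2.63 would come from running the same computation over all 9 shared benchmark pairs.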
Page generated from structured model, pricing and benchmark records. No real-time LLM is used to write the prose.