GLM-5
GLM-5 is an AI model published by 智谱AI (Zhipu AI) and released on 2026-02-11. It has 7440.0B parameters and a 200K-token context length, requires about 1.51TB of storage, and is distributed under the MIT License.
Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then from benchmark leaderboards, and finally from third-party evaluators.
API pricing (USD per 1M tokens):

| Type | Condition | Input | Output |
|---|---|---|---|
| Text | - | $1.00 / 1M tokens | $3.20 / 1M tokens |

Prompt caching pricing (USD per 1M tokens):

| Type | TTL | Write | Read |
|---|---|---|---|
| Text | 5m | $0.200 / 1M tokens | - |
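To make the per-million-token rates above concrete, here is a minimal cost-estimation sketch. The prices are taken from the tables on this page; the token counts in the example are hypothetical, and since no cache-read price is listed, the sketch only accounts for input, output, and cache writes.

```python
# Estimating GLM-5 API cost from the pricing tables above.
# Prices (USD per 1M tokens) come from this page; token counts below
# are hypothetical examples, not measured usage.
INPUT_PRICE = 1.00         # $ per 1M input tokens
OUTPUT_PRICE = 3.20        # $ per 1M output tokens
CACHE_WRITE_PRICE = 0.200  # $ per 1M tokens written to the 5-minute cache

def request_cost(input_tokens: int, output_tokens: int,
                 cache_write_tokens: int = 0) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens * INPUT_PRICE
            + output_tokens * OUTPUT_PRICE
            + cache_write_tokens * CACHE_WRITE_PRICE) / 1_000_000

# Example: 50K input tokens, 2K output tokens, 50K tokens cached.
cost = request_cost(50_000, 2_000, 50_000)
print(f"${cost:.4f}")  # → $0.0664
```

At these rates, output tokens dominate cost at a 3.2:1 ratio over input, so long generations are the main expense driver even with large prompts.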
GLM-5's strongest benchmark results currently include τ²-Bench (rank 4 of 40, score 89.70), HLE (rank 15 of 149, score 50.40), and τ²-Bench - Telecom (rank 5 of 35, score 98). This page also consolidates core specs, context limits, and API pricing, so you can evaluate the model on benchmark results and deployment constraints together.
