© 2026 DataLearner AI. DataLearner curates industry data and case studies so researchers, enterprises, and developers can rely on trustworthy intelligence.


GLM-5

AI model

Release date: 2026-02-11 · Updated: 2026-03-27 20:24:14
Live demo · GitHub · Hugging Face · Compare
Parameters: 744B
Context length: 200K
Chinese support: Supported
Reasoning ability: Supported

GLM-5 is an AI model published by 智谱AI (Zhipu AI) and released on 2026-02-11. It has 744B parameters and a 200K-token context length, requires about 1.51TB of storage, and its weights are distributed under the MIT License.

Data sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators. Learn about our data methodology


Model basics

Reasoning traces: Supported
Thinking modes: Standard Mode · Thinking (Level: Extended)
Context length: 200K tokens
Max output length: 131,072 tokens
Model type: AI model
Release date: 2026-02-11
Model file size: 1.51TB
MoE architecture: Yes
Total params / Active params: 744B / 40B
Knowledge cutoff: No data
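The specs above admit a quick consistency check: 744B total parameters at roughly 2 bytes each (a BF16/FP16 assumption, not stated on this page) lands near the listed 1.51TB file size, while the MoE routing means only the 40B active parameters participate in any single token. A minimal sketch under that byte-per-parameter assumption:

```python
def weight_size_tb(params_billions: float, bytes_per_param: float = 2.0) -> float:
    """Approximate on-disk size of model weights in terabytes (1 TB = 1e12 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 1e12

total_tb = weight_size_tb(744)   # all 744B parameters at ~2 bytes each -> ~1.49 TB
active_tb = weight_size_tb(40)   # only the 40B routed per token -> ~0.08 TB
```

The ~1.49TB estimate is close to the listed 1.51TB file size; the small gap is plausibly metadata and non-weight files in the checkpoint.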

Open source & experience

Code license: Apache 2.0
Weights license: MIT License (free for commercial use)
GitHub repo: https://github.com/zai-org/GLM-5
Hugging Face: https://huggingface.co/zai-org/GLM-5
Live demo: https://chat.z.ai/

Official resources

Paper: GLM-5: From Vibe Coding to Agentic Engineering
DataLearnerAI blog: No blog post yet

API details

API speed: 3/5

💡 Default unit: $ / 1M tokens. If a vendor uses other units, follow their published pricing. Learn about pricing modes

Standard pricing
Type: Text · Condition: - · Input: $1.00 / 1M · Output: $3.20 / 1M

Cache pricing (Prompt Cache)
Type: Text · TTL: 5m · Write: $0.200 / 1M · Read: -
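Using the per-million-token rates in the table above ($1.00/1M input, $3.20/1M output, $0.200/1M cache write), a single call's cost can be estimated as follows. The function and its parameter names are illustrative, not an official SDK:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      cache_write_tokens: int = 0,
                      input_rate: float = 1.00, output_rate: float = 3.20,
                      cache_write_rate: float = 0.200) -> float:
    """Cost in USD for one call; rates are quoted per 1M tokens (GLM-5 standard text pricing)."""
    per_million = 1_000_000
    return (input_tokens * input_rate
            + output_tokens * output_rate
            + cache_write_tokens * cache_write_rate) / per_million

# e.g. a 50K-token prompt with a 2K-token reply costs about $0.056
cost = estimate_cost_usd(50_000, 2_000)
```

Note that output tokens cost 3.2× input tokens here, so long generations, not long prompts, tend to dominate the bill.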

Benchmark Results

GLM-5's strongest current benchmark placements are τ²-Bench (rank 4 / 40, score 89.70), HLE (rank 15 / 149, score 50.40), and τ²-Bench - Telecom (rank 5 / 35, score 98). This page also consolidates core specs, context limits, and API pricing so you can evaluate the model on benchmark results and deployment constraints together.


General Knowledge (5 evaluations)

Benchmark (mode) · Score · Rank / total
GPQA Diamond (Thinking Mode) · 86 · 40 / 175
HLE · 50.40 · 15 / 149
HLE (Thinking Mode) · 30.50 · 66 / 149
ARC-AGI (Thinking Mode) · 44.70 · 44 / 65
ARC-AGI-2 (Thinking Mode) · 4.90 · 43 / 58

Coding and Software Engineering (1 evaluation)

Benchmark (mode) · Score · Rank / total
SWE-bench Verified (Thinking Mode) · 77.80 · 18 / 103

Agent Level Benchmark (3 evaluations)

Benchmark (mode) · Score · Rank / total
τ²-Bench - Telecom · 98 · 5 / 35
τ²-Bench · 89.70 · 4 / 40
Terminal Bench Hard · 43 · 2 / 13

Math and Reasoning (3 evaluations)

Benchmark (mode) · Score · Rank / total
AIME 2026 (Thinking Mode) · 92.70 · 7 / 14
IMO-AnswerBench (Thinking Mode) · 82.50 · 11 / 17
FrontierMath - Tier 4 (Standard Mode) · 2.10 · 56 / 80

Instruction Following (1 evaluation)

Benchmark (mode) · Score · Rank / total
IF Bench · 72 · 8 / 27

AI Agent - Information Search (2 evaluations)

Benchmark (mode) · Score · Rank / total
BrowseComp · 75.90 · 17 / 43
BrowseComp (Thinking Mode) · 62 · 24 / 43

AI Agent - Tool Usage (1 evaluation)

Benchmark (mode) · Score · Rank / total
Terminal Bench 2.0 · 61.10 · 15 / 43

Productivity Knowledge (1 evaluation)

Benchmark (mode) · Score · Rank / total
GDPval-AA (Thinking Mode) · 46 · 13 / 20

Long Context (1 evaluation)

Benchmark (mode) · Score · Rank / total
AA-LCR (Thinking Mode) · 63 · 12 / 13

Claw-style Agent Evaluation (2 evaluations)

Benchmark (mode) · Score · Rank / total
Claw Bench (Thinking Mode · Tools) · 91.70 · 5 / 29
Pinch Bench (Thinking Mode · Tools) · 86.40 · 12 / 37
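Raw ranks from differently sized leaderboards (e.g. 4 / 40 on τ²-Bench vs 15 / 149 on HLE) are easier to compare as percentiles. A small helper applied to the rank/total pairs from the tables above:

```python
def percentile(rank: int, total: int) -> float:
    """Share of leaderboard entrants ranked at or below this model (higher is better)."""
    return round(100 * (total - rank) / total, 1)

tau2_pct = percentile(4, 40)    # τ²-Bench: 90.0
hle_pct = percentile(15, 149)   # HLE: 89.9
```

On this view the two placements are nearly equivalent, even though the raw ranks look quite different.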
View benchmark analysis

Compare with other models

  • Peer model: GLM-5 vs Kimi K2.5 (14 benchmarks)
  • Peer model: GLM-5 vs MiniMax M2.5 (13 benchmarks)
  • Earlier version: GLM-5 vs GLM-4.7 (9 benchmarks)
  • Earlier version: GLM-5 vs GLM-4.6 (8 benchmarks)
  • Earlier version: GLM-5 vs GLM-4.5 (3 benchmarks)

Want a custom combination? Open the compare tool


Publisher

智谱AI (Zhipu AI)
View publisher details

DataLearner on WeChat

Follow DataLearner on WeChat for AI model updates and research notes.
