DataLearnerAI
© 2026 DataLearner AI. DataLearner curates industry data and case studies so researchers, enterprises, and developers can rely on trustworthy intelligence.


GLM-4.5

Reasoning model

GLM-4.5-MoE-355B-A32B-0715

Release date: 2025-07-28 · Updated: 2025-07-29
Parameters: 355B
Context length: 128K
Chinese support: Supported
Reasoning ability: Supported
GLM-4.5-MoE-355B-A32B-0715 is a reasoning model published by Zhipu AI (智谱AI) and released on 2025-07-28. It has 355B parameters and a 128K-token context length, its weights require about 710 GB of storage, and it is released under the Apache 2.0 license.
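The quoted 710 GB footprint is consistent with storing 355B parameters at 2 bytes each (bf16/fp16); a quick sanity check, assuming decimal gigabytes:

```python
# Sanity-check the listed 710 GB weight size:
# 355B parameters at 2 bytes each (bf16 / fp16).
params = 355e9
bytes_per_param = 2
size_gb = params * bytes_per_param / 1e9  # decimal gigabytes
print(f"{size_gb:.0f} GB")  # 710 GB
```

Quantized variants (e.g. 8-bit or 4-bit) would shrink this proportionally, which is why published file sizes vary by checkpoint format.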

Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then from benchmark leaderboards, then from third-party evaluators. Learn about our data methodology.


Model basics

Reasoning traces: Supported
Thinking modes: Not supported
Context length: 128K tokens
Max output length: 97,280 tokens
Model type: Reasoning model
Release date: 2025-07-28
Model file size: 710 GB
MoE architecture: Yes
Total params / Active params: 355B / 32B
Knowledge cutoff: No data
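The context and output limits above make for a simple request-budget check; a minimal sketch, assuming the advertised 128K context means 131,072 tokens (the vendor may count 128,000) and that prompt and completion share the window:

```python
# Rough token-budget check against GLM-4.5's listed limits.
# Assumption: "128K" = 131,072 tokens; prompt and completion share the window.
CONTEXT_LIMIT = 131_072
MAX_OUTPUT = 97_280  # max output length listed above

def fits(prompt_tokens: int, requested_output: int) -> bool:
    """True if the prompt plus the requested completion fit the context window."""
    if requested_output > MAX_OUTPUT:
        return False
    return prompt_tokens + requested_output <= CONTEXT_LIMIT

print(fits(30_000, 97_280))  # True:  127,280 <= 131,072
print(fits(40_000, 97_280))  # False: 137,280 >  131,072
```

Real tokenizers count differently, so treat this as a pre-flight estimate rather than a guarantee.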

Open source & experience

Code license: Apache 2.0
Weights license: Apache 2.0 (free commercial-use license)
GitHub repo: https://github.com/THUDM/GLM-4
Hugging Face: https://huggingface.co/zai-org/GLM-4.5
Live demo: https://chat.z.ai/

Official resources

Paper: GLM-4.5: Reasoning, Coding, and Agentic Abilities
DataLearnerAI blog

API details

API speed: 3/5

💡 Default unit: $/1M tokens. If vendors use other units, follow their published pricing.

Standard pricing
Text: $0.60 input / $2.20 output
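At these rates, per-request cost is a linear function of token counts; a small helper using the standard text pricing above (values in $ per 1M tokens):

```python
# Estimate GLM-4.5 API cost from the listed standard text pricing.
INPUT_RATE = 0.6    # $ per 1M input tokens
OUTPUT_RATE = 2.2   # $ per 1M output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the standard text rates."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token completion:
print(f"${cost_usd(10_000, 2_000):.4f}")  # $0.0104
```

Vendors may add tiered or cached-input pricing, so check the official price sheet before budgeting at scale.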

Benchmark Results

GLM-4.5's strongest benchmark placements are currently MATH-500 (rank 3 / 44, score 98.20), AIME 2024 (rank 14 / 62, score 91), and MMLU Pro (rank 30 / 124, score 84.60). This page also consolidates core specs, context limits, and API pricing so you can evaluate the model on benchmark results and deployment constraints together.


General Knowledge

4 evaluations
MMLU Pro (Thinking Mode): 84.60, rank 30 / 124
GPQA Diamond (Thinking Mode): 79.10, rank 77 / 175
LiveBench (Standard Mode): 65, rank 32 / 52
HLE (Thinking Mode): 14.40, rank 113 / 149

Coding and Software Engineering

2 evaluations
LiveCodeBench (Thinking Mode): 72.90, rank 44 / 118
SWE-bench Verified (Thinking Mode): 64.20, rank 66 / 103

Math and Reasoning

2 evaluations
MATH-500 (Thinking Mode): 98.20, rank 3 / 44
AIME 2024 (Thinking Mode): 91, rank 14 / 62

AI Agent - Tool Usage

1 evaluation

Terminal-Bench (Thinking Mode): 37.50, rank 15 / 35
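The rank figures above come from leaderboards of different sizes, so raw ranks are not directly comparable; normalizing each rank by its leaderboard total gives a percentile-style standing. A sketch over a few of the listed results:

```python
# Normalize the rank/total pairs above into percentile standings.
# Lower is better: 0.0 would be the top of the leaderboard.
results = {
    "MATH-500":           (3, 44),
    "AIME 2024":          (14, 62),
    "MMLU Pro":           (30, 124),
    "GPQA Diamond":       (77, 175),
    "SWE-bench Verified": (66, 103),
}

def percentile(rank: int, total: int) -> float:
    """Fraction of ranked models placed at or above this one."""
    return rank / total

for name, (rank, total) in sorted(results.items(),
                                  key=lambda kv: percentile(*kv[1])):
    print(f"{name:<18} {percentile(rank, total):.1%}")
```

By this measure MATH-500 (top ~7%) is the model's strongest listed placement, while SWE-bench Verified (~64%) is its weakest.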

Publisher

Zhipu AI (智谱AI)


Compare with other models

No curated comparisons for this model yet.
