DataLearnerAI

Qwen2.5-72B

Foundation LLM

Release date: 2024-09-18 · Updated: 2024-09-21 11:25:43
Parameters
72.7B
Context length
128K
Chinese support
Supported

Qwen2.5-72B is an AI model published by Alibaba, released on 2024-09-18 as a foundation LLM, with 72.7B parameters and a 128K-token context length, requiring about 144GB of storage, under the Qwen License.
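The ~144GB storage figure is consistent with storing 72.7B parameters at 2 bytes each (bf16/fp16 weights); a quick sanity check:

```python
# Rough weight-file size: parameter count × bytes per parameter.
params = 72.7e9          # 72.7B parameters
bytes_per_param = 2      # bf16 / fp16 weights
size_gb = params * bytes_per_param / 1e9
print(f"{size_gb:.1f} GB")  # ≈ 145 GB, in line with the listed ~144GB
```

Quantized variants (4-bit AWQ/GPTQ, GGUF) shrink this footprint by roughly 4x.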

Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators.

Model basics

Reasoning traces
Not supported
Thinking modes
Thinking modes not supported
Context length
128K tokens
Max output length
No data
Model type
Foundation LLM
Release date
2024-09-18
Model file size
144GB
MoE architecture
No
Total params / Active params
72.7B / N/A
Knowledge cutoff
No data
Open source & access

Code license
Apache 2.0
Weights license
Qwen License (free for commercial use)
GitHub repo
https://github.com/QwenLM/Qwen2.5
Hugging Face
https://huggingface.co/Qwen/Qwen2.5-72B
Live demo
No live demo
Official resources

Paper
Qwen2.5-LLM: Extending the boundary of LLMs
DataLearnerAI blog
No blog post yet
API details

API speed
No data
No public API pricing yet.
Benchmark Results

Qwen2.5-72B's strongest benchmark rankings are TruthfulQA (1 / 4, score 60.40), MBPP (7 / 28, score 84.70), and GSM8K (11 / 26, score 91.50). This page also consolidates core specs, context limits, and API pricing, so you can evaluate the model on benchmark results and deployment constraints together.


General evaluation (4 evaluations)

Benchmark     Mode  Score  Rank/total
BBH           Off   86.30  11 / 20
MMLU          Off   86.10  32 / 65
MMLU Pro      Off   58.10  99 / 117
GPQA Diamond  Off   45.90  148 / 166

Mathematical reasoning (2 evaluations)

Benchmark  Mode  Score  Rank/total
GSM8K      Off   91.50  11 / 26
MATH       Off   62.10  29 / 42

Coding & software engineering (2 evaluations)

Benchmark  Mode  Score  Rank/total
MBPP       Off   84.70  7 / 28
HumanEval  Off   59.10  30 / 39

Truthfulness (1 evaluation)

Benchmark   Mode  Score  Rank/total
TruthfulQA  Off   60.40  1 / 4
Publisher

Alibaba (阿里巴巴)
Model Overview

Qwen2.5-72B is the largest model in Alibaba's open-source Qwen2.5 family, at 72 billion parameters. Its evaluation results surpass those of Meta AI's open-source Llama-3-70B at the same parameter scale. Commercial use is free for products with fewer than 100 million monthly active users.


The 72B-parameter Qwen2.5 ships in several versions: besides the base model, the team has also open-sourced quantized versions and different instruction-tuned versions, listed below:


  • Qwen2.5-72B — 72B-parameter base model: https://huggingface.co/Qwen/Qwen2.5-72B
  • Qwen2.5-72B-Instruct — instruction-tuned version: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct
  • Qwen2.5-72B-Instruct-AWQ — AWQ 4-bit quantized instruct model: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct-AWQ
  • Qwen2.5-72B-Instruct-GPTQ — GPTQ-quantized instruct model at two quantization levels:
    Int8: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct-GPTQ-Int8
    Int4: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4
  • Qwen2.5-72B-Instruct-GGUF — GGUF quantized format: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct-GGUF
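For scripts that select a variant at runtime, the list above can be captured as a small lookup helper. This is a hypothetical convenience, not part of the official Qwen tooling; only the repo ids themselves come from the list:

```python
# Hypothetical mapping from variant nickname to Hugging Face repo id,
# mirroring the Qwen2.5-72B variant list above.
QWEN25_72B_REPOS = {
    "base": "Qwen/Qwen2.5-72B",
    "instruct": "Qwen/Qwen2.5-72B-Instruct",
    "awq": "Qwen/Qwen2.5-72B-Instruct-AWQ",
    "gptq-int8": "Qwen/Qwen2.5-72B-Instruct-GPTQ-Int8",
    "gptq-int4": "Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4",
    "gguf": "Qwen/Qwen2.5-72B-Instruct-GGUF",
}

def qwen25_72b_repo(variant: str = "base") -> str:
    """Return the Hugging Face repo id for a Qwen2.5-72B variant."""
    try:
        return QWEN25_72B_REPOS[variant]
    except KeyError:
        raise ValueError(
            f"unknown variant {variant!r}; choose from {sorted(QWEN25_72B_REPOS)}"
        )

print(qwen25_72b_repo("awq"))  # Qwen/Qwen2.5-72B-Instruct-AWQ
```

The returned id can be passed straight to `huggingface-cli download` or to `from_pretrained` in the transformers library.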

