Qwen2.5-72B
Qwen2.5-72B is an AI model published by Alibaba and released on 2024-09-18. It is a foundation large language model with 72.7B parameters and a 128K-token context length, requires about 144 GB of storage, and is distributed under the Qwen License.
Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators.
Qwen2.5-72B's strongest benchmark rankings currently include TruthfulQA (1st of 4, score 60.40), MBPP (7th of 28, score 84.70), and GSM8K (11th of 26, score 91.50). This page also consolidates core specs, context limits, and API pricing so you can evaluate the model from benchmark results and deployment constraints together.
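As a sanity check on the storage figure above, weight size can be estimated directly from the parameter count. The sketch below assumes bf16 weights at 2 bytes per parameter and ignores tokenizer files and other repository overhead; the function name is illustrative:

```python
def estimate_weight_size_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Rough on-disk size of model weights in decimal GB.

    bf16/fp16 checkpoints store 2 bytes per parameter; tokenizer files,
    configs, and index files add a small amount on top of this.
    """
    return n_params * bytes_per_param / 1e9

# Qwen2.5-72B has 72.7B parameters stored in bf16.
print(round(estimate_weight_size_gb(72.7e9), 1))  # ≈ 145.4, close to the ~144 GB listed
```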
Qwen2.5-72B is the largest model in Alibaba's open-source Qwen2.5 series, at 72 billion parameters. Its evaluation results surpass Meta AI's open-source Llama-3-70B at a comparable parameter scale. Commercial use is free for products with fewer than 100 million monthly active users.
The 72B-parameter Qwen2.5 comes in several versions: besides the base model, the team has also open-sourced quantized versions and instruction-tuned variants, listed below:
| Qwen2.5-72B variant | Description | Hugging Face link |
|---|---|---|
| Qwen2.5-72B | 72B-parameter base model | https://huggingface.co/Qwen/Qwen2.5-72B |
| Qwen2.5-72B-Instruct | Instruction-tuned version | https://huggingface.co/Qwen/Qwen2.5-72B-Instruct |
| Qwen2.5-72B-Instruct-AWQ | 4-bit AWQ quantization of the instruction-tuned model | https://huggingface.co/Qwen/Qwen2.5-72B-Instruct-AWQ |
| Qwen2.5-72B-Instruct-GPTQ | GPTQ quantizations of the instruction-tuned model, at several quantization levels | Int8: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct-GPTQ-Int8 Int4: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4 |
| Qwen2.5-72B-Instruct-GGUF | GGUF quantized format | https://huggingface.co/Qwen/Qwen2.5-72B-Instruct-GGUF |
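The quantized variants above differ mainly in weight memory. The sketch below picks the least-quantized variant that fits a given memory budget; the per-variant figures are rough assumptions derived from bytes-per-parameter (2 for bf16, ~1 for Int8, ~0.5 plus overhead for 4-bit), not official numbers, and the helper name is illustrative:

```python
# Approximate weight memory (GB) per variant. These are rough estimates
# from 72.7B params x bytes-per-parameter, not official figures.
VARIANT_MEM_GB = {
    "Qwen2.5-72B-Instruct": 146,           # bf16, ~2 bytes/param
    "Qwen2.5-72B-Instruct-GPTQ-Int8": 73,  # ~1 byte/param
    "Qwen2.5-72B-Instruct-GPTQ-Int4": 41,  # ~0.5 byte/param plus overhead
    "Qwen2.5-72B-Instruct-AWQ": 41,        # 4-bit AWQ, similar footprint to GPTQ-Int4
}

def pick_variant(available_gb: float):
    """Return the highest-memory (least-quantized) variant that fits, else None."""
    fitting = [(mem, name) for name, mem in VARIANT_MEM_GB.items() if mem <= available_gb]
    return max(fitting)[1] if fitting else None

print(pick_variant(80))  # -> "Qwen2.5-72B-Instruct-GPTQ-Int8"
print(pick_variant(24))  # -> None: even 4-bit weights exceed a single 24 GB GPU
```

Note that these figures cover weights only; KV-cache and activations require additional memory at inference time.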
