Qwen2.5-32B
Qwen2.5-32B is an AI model published by Alibaba, released on 2024-09-18, in the foundation large-language-model category, with 32.5B parameters and a 128K-token context length, requiring about 64GB of storage, under the Apache 2.0 license.
Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators.
Qwen2.5-32B's strongest benchmark results are GSM8K (ranked 5 of 26, score 95.90), MATH (ranked 10 of 42, score 83.10), and MBPP (ranked 8 of 28, score 84). This page also consolidates core specs, context limits, and API pricing so you can evaluate the model on benchmark results and deployment constraints together.
Qwen2.5-32B is Alibaba's open-source Qwen model: a generation-2.5 large language model with 32 billion parameters. It is released under the Apache 2.0 license, which permits fully free commercial use and is very permissive.

Qwen2.5-32B is released in several variants, including the base model and instruction-tuned versions:
| Qwen2.5-32B variant | Description | Hugging Face link |
|---|---|---|
| Qwen2.5-32B | 32B-parameter base model | https://huggingface.co/Qwen/Qwen2.5-32B |
| Qwen2.5-32B-Instruct | Instruction-tuned version | https://huggingface.co/Qwen/Qwen2.5-32B-Instruct |
| Qwen2.5-32B-Instruct-AWQ | AWQ 4-bit quantized version of the instruction-tuned model | https://huggingface.co/Qwen/Qwen2.5-32B-Instruct-AWQ |
| Qwen2.5-32B-Instruct-GPTQ | GPTQ quantized versions of the instruction-tuned model, at several precision levels | Int8: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct-GPTQ-Int8 Int4: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct-GPTQ-Int4 |
| Qwen2.5-32B-Instruct-GGUF | GGUF quantized format | https://huggingface.co/Qwen/Qwen2.5-32B-Instruct-GGUF |
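To see why the quantized variants above matter for deployment, a rough back-of-the-envelope calculation relates parameter count and weight precision to storage size. This is a minimal sketch, assuming ~32.5B total parameters (per the model card) and ignoring quantization metadata such as scales and zero-points, as well as runtime activation and KV-cache memory:

```python
def approx_weight_size_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate size of model weights in GB (decimal).

    Rule of thumb only: ignores quantization metadata
    (scales/zero-points) and runtime activation memory.
    """
    return n_params * bits_per_param / 8 / 1e9

N = 32.5e9  # Qwen2.5-32B total parameter count (~32.5B per the model card)

for name, bits in [("BF16 (full)", 16), ("GPTQ-Int8", 8), ("AWQ/GPTQ-Int4", 4)]:
    print(f"{name}: ~{approx_weight_size_gb(N, bits):.0f} GB")
```

The 16-bit estimate (~65 GB) is consistent with the ~64GB storage figure quoted above; the Int4 variants bring the weights down to roughly a quarter of that, which is what makes single-GPU deployment of a 32B model feasible.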
