Qwen2.5-7B
Qwen2.5-7B is an AI model published by Alibaba, released on 2024-09-18, as a foundation large language model with 7B parameters and a 128K-token context length, requiring about 14GB of storage, under the Apache 2.0 license.
Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators.
Qwen2.5-7B's strongest benchmark rankings currently include ARC (2nd of 4, score 63.70), MBPP (14th of 28, score 74.90), and GSM8K (15th of 26, score 85.40). This page also consolidates core specs, context limits, and API pricing so you can evaluate the model on benchmark results and deployment constraints together.
Qwen2.5-7B is Alibaba's open-source Qwen model: a generation-2.5 large language model with 7 billion parameters. It is released under the Apache 2.0 license, which permits fully free commercial use on very friendly terms.

Qwen2.5-7B is open-sourced in several variants, including the base model and instruction-tuned versions:
| Qwen2.5-7B variant | Description | Hugging Face link |
|---|---|---|
| Qwen2.5-7B | 7-billion-parameter base model | https://huggingface.co/Qwen/Qwen2.5-7B |
| Qwen2.5-7B-Instruct | Instruction-tuned version | https://huggingface.co/Qwen/Qwen2.5-7B-Instruct |
| Qwen2.5-7B-Instruct-AWQ | 4-bit AWQ-quantized version of the instruction-tuned Qwen2.5 | https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-AWQ |
| Qwen2.5-7B-Instruct-GPTQ | GPTQ-quantized version of the instruction-tuned Qwen2.5, available at multiple quantization levels | Int8: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-GPTQ-Int8 Int4: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4 |
| Qwen2.5-7B-Instruct-GGUF | GGUF quantized format | https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-GGUF |
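As a rough guide to choosing among the variants above, the sketch below maps an available GPU-memory budget to one of the repository IDs in the table. The memory thresholds are assumptions for illustration (FP16 weights for a ~7B model take roughly 15 GB, 8-bit about half that, 4-bit about a quarter), not official requirements, and the `pick_variant` helper is hypothetical.

```python
# Hypothetical helper: choose a Qwen2.5-7B-Instruct variant from the table
# above based on available GPU memory. The GB thresholds are rough
# assumptions (FP16 ~15 GB, Int8 ~8 GB, 4-bit ~5 GB), not official figures.

def pick_variant(vram_gb: float) -> str:
    """Return a Hugging Face repo ID suitable for the given VRAM budget."""
    if vram_gb >= 16:
        return "Qwen/Qwen2.5-7B-Instruct"            # full-precision weights
    if vram_gb >= 9:
        return "Qwen/Qwen2.5-7B-Instruct-GPTQ-Int8"  # 8-bit GPTQ quantization
    if vram_gb >= 6:
        return "Qwen/Qwen2.5-7B-Instruct-AWQ"        # 4-bit AWQ quantization
    # Below ~6 GB of VRAM, a GGUF file run on CPU (e.g. via llama.cpp)
    # is usually the practical fallback.
    return "Qwen/Qwen2.5-7B-Instruct-GGUF"

print(pick_variant(24))  # → Qwen/Qwen2.5-7B-Instruct
print(pick_variant(8))   # → Qwen/Qwen2.5-7B-Instruct-AWQ
```

The quantized variants trade some accuracy for a smaller memory footprint, so it is worth benchmarking the chosen variant on your own workload before committing to it.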
