
Qwen2.5-7B

Base large model


Release date: 2024-09-18 · Updated: 2024-09-21 11:11:05
Live demoGitHubHugging FaceCompare
Parameters
7.0B
Context length
128K
Chinese support
Supported
Reasoning ability

Qwen2.5-7B is an AI model published by Alibaba, released on 2024-09-18. It is a base large language model with 7.0B parameters and a 128K-token context length, requires about 14GB of storage, and is distributed under the Apache 2.0 license.
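The ~14GB storage figure follows directly from the parameter count. A quick sanity check, assuming the weights are stored in bfloat16 (2 bytes per parameter):

```python
# Estimate on-disk size of Qwen2.5-7B weights, assuming bfloat16 storage.
PARAMS = 7.0e9          # 7.0B parameters (from the spec above)
BYTES_PER_PARAM = 2     # bfloat16 / float16 uses 2 bytes per weight

size_gb = PARAMS * BYTES_PER_PARAM / 1e9  # decimal gigabytes
print(f"estimated weight size: {size_gb:.1f} GB")  # → 14.0 GB
```

This matches the listed 14GB model file size, which suggests the published checkpoint is a half-precision export.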

Data sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators. Learn about our data methodology

Qwen2.5-7B

Model basics

Reasoning traces
Not supported
Thinking modes
Thinking modes not supported
Context length
128K tokens
Max output length
No data
Model type
Base large model
Release date
2024-09-18
Model file size
14GB
MoE architecture
No
Total params / Active params
7.0B / N/A
Knowledge cutoff
No data
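One deployment-relevant consequence of the 128K context is that the KV cache can approach the weights in size. A rough sketch, assuming the commonly reported Qwen2.5-7B attention layout (28 layers, 4 KV heads via grouped-query attention, head dimension 128, bfloat16 cache); these configuration values are assumptions, not figures from this page:

```python
# Rough KV-cache size at the full 128K-token context.
# Config values below are assumptions (typical Qwen2.5-7B reports), not from this page.
LAYERS, KV_HEADS, HEAD_DIM = 28, 4, 128
BYTES = 2                      # bfloat16
CONTEXT = 128 * 1024           # 131,072 tokens

per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES   # K and V tensors per token
total_gib = per_token * CONTEXT / 2**30
print(f"{per_token} bytes/token, ~{total_gib:.1f} GiB at full context")
```

Under these assumptions a full-context KV cache is on the order of 7 GiB, so serving at 128K needs noticeably more memory than the weights alone.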
Qwen2.5-7B

Open source & experience

Code license
Apache 2.0
Weights license
Apache 2.0 (free for commercial use)
GitHub repo
https://github.com/QwenLM/Qwen2.5
Hugging Face
https://huggingface.co/Qwen/Qwen2.5-7B
Live demo
https://huggingface.co/spaces/Qwen/Qwen2.5
Qwen2.5-7B

Official resources

Paper
Qwen2.5-LLM: Extending the boundary of LLMs
DataLearnerAI blog
No blog post yet
Qwen2.5-7B

API details

API speed
No data
No public API pricing yet.
Qwen2.5-7B

Benchmark Results

Qwen2.5-7B's strongest benchmark placements are currently ARC (rank 2 of 4, score 63.70), MBPP (rank 14 of 28, score 74.90), and GSM8K (rank 15 of 26, score 85.40). This page also consolidates core specs, context limits, and API pricing so you can weigh benchmark results and deployment constraints together.


OpenClaw Comprehensive Agent Capability Evaluation (1 evaluation)

Benchmark / mode          Score    Rank / total
Pinch Bench (tools on)    40.30    37 / 37
Qwen2.5-7B

Publisher

Alibaba
View publisher details
Qwen2.5-7B

Model Overview

        

Qwen2.5-7B is the 7-billion-parameter model in the 2.5 generation of Alibaba's open-source Qwen family of large language models. It is released under the Apache 2.0 license, which permits fully free commercial use, making the license terms very friendly.


Qwen2.5-7B is released in several variants, including the base model and instruction-tuned versions:

Qwen2.5-7B version          Description                                        Hugging Face repo
Qwen2.5-7B                  Base model with 7B parameters                      https://huggingface.co/Qwen/Qwen2.5-7B
Qwen2.5-7B-Instruct         Instruction-tuned version                          https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
Qwen2.5-7B-Instruct-AWQ     AWQ 4-bit quantized instruction-tuned version      https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-AWQ
Qwen2.5-7B-Instruct-GPTQ    GPTQ-quantized instruction-tuned version,          Int8: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-GPTQ-Int8
                            available at several quantization levels           Int4: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4
Qwen2.5-7B-Instruct-GGUF    GGUF quantized format                              https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-GGUF
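The variants above trade memory for fidelity: the bf16 checkpoint needs roughly 14GB for weights alone, while the Int8 and 4-bit builds fit in correspondingly less. A minimal sketch of picking a repo ID from this table by available GPU memory; the thresholds are illustrative assumptions, not official guidance:

```python
# Map the variant table to Hugging Face repo IDs and pick one by VRAM budget.
VARIANTS = {
    "bf16":      "Qwen/Qwen2.5-7B-Instruct",            # ~14 GB of weights
    "gptq-int8": "Qwen/Qwen2.5-7B-Instruct-GPTQ-Int8",  # roughly half that
    "awq-int4":  "Qwen/Qwen2.5-7B-Instruct-AWQ",        # roughly a quarter
}

def choose_variant(vram_gb: float) -> str:
    """Pick a checkpoint that plausibly fits (illustrative thresholds)."""
    if vram_gb >= 18:      # weights plus some KV-cache headroom
        return VARIANTS["bf16"]
    if vram_gb >= 10:
        return VARIANTS["gptq-int8"]
    return VARIANTS["awq-int4"]

print(choose_variant(24))  # Qwen/Qwen2.5-7B-Instruct
```

The GGUF build follows the same logic but targets llama.cpp-style runtimes rather than GPU serving stacks.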

