DataLearner AI

© 2026 DataLearner AI. DataLearner curates industry data and case studies so researchers, enterprises, and developers can rely on trustworthy intelligence.


Qwen2-1.5B


Release date: 2024-06-07 · Updated: 2024-06-09 21:31:23
Parameters: 1.5B
Context length: 32K
Chinese support: Supported
Reasoning ability: No data
Data sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators. Learn about our data methodology


Model basics

Reasoning traces: Not supported
Context length: 32K tokens
Max output length: No data
Model type: Base LLM (pre-trained, not instruction-tuned)
Release date: 2024-06-07
Model file size: 3.09GB
MoE architecture: No
Total params / Active params: 1.5B / N/A
Knowledge cutoff: No data
Inference modes: No mode data
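The listed file size and parameter count are easy to sanity-check against each other: a bf16/fp16 checkpoint needs about two bytes per parameter. The sketch below assumes a total parameter count of ~1.54B, a figure chosen to be consistent with the listed 3.09GB file (the "1.5B" in the model name rounds it down; the 1.3B figure in the benchmark table counts non-embedding parameters only).

```python
# Back-of-the-envelope check: checkpoint size ~= parameter count x bytes/param.
def weight_file_size_gb(n_params: float, bytes_per_param: float = 2.0) -> float:
    """Estimate weight-file size in GB; 2 bytes/param assumes bf16/fp16 weights."""
    return n_params * bytes_per_param / 1e9

# ~1.54e9 is an assumed total-parameter count, picked to match
# the listed 3.09GB file size for Qwen2-1.5B.
print(f"{weight_file_size_gb(1.54e9):.2f} GB")  # -> 3.08 GB
```

An int4-quantized copy of the same weights (~0.5 bytes per parameter) would land under 1GB, which is why small models like this one are popular targets for on-device inference.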

Open source & experience

Code license: Apache 2.0
Weights license: Apache 2.0 (free for commercial use)
GitHub repo: https://github.com/QwenLM/Qwen2
Hugging Face: https://huggingface.co/Qwen/Qwen2-1.5B
Live demo: https://huggingface.co/spaces/Qwen/Qwen2-1.5b-instruct-demo

Official resources

Paper: Hello Qwen2
DataLearnerAI blog: Alibaba has open-sourced its second-generation large language model series, Qwen2, with up to 70 billion parameters; its evaluation results rank first among open-source models, surpassing Meta's open-source Llama3-70B.

API details

API speed: No data
No public API pricing yet.

Benchmark Results

No benchmark data to show.

Publisher

Alibaba
View publisher details

Model Overview

An open-source large language model from Alibaba with 1.5 billion parameters, one of the strongest performers among small-parameter language models. Compared with other small-scale models, it achieves excellent results across a range of evaluations. The table below compares it with other models:


| Datasets        | Phi-2 | Gemma-2B | MiniCPM | Qwen1.5-1.8B | Qwen2-0.5B | Qwen2-1.5B |
|-----------------|-------|----------|---------|--------------|------------|------------|
| #Non-Emb Params | 2.5B  | 2.0B     | 2.4B    | 1.3B         | 0.35B      | 1.3B       |
| MMLU            | 52.7  | 42.3     | 53.5    | 46.8         | 45.4       | 56.5       |
| MMLU-Pro        | -     | 15.9     | -       | -            | 14.7       | 21.8       |
| Theorem QA      | -     | -        | -       | -            | 8.9        | 15.0       |
| HumanEval       | 47.6  | 22.0     | 50.0    | 20.1         | 22.0       | 31.1       |
| MBPP            | 55.0  | 29.2     | 47.3    | 18.0         | 22.0       | 37.4       |
| GSM8K           | 57.2  | 17.7     | 53.8    | 38.4         | 36.5       | 58.5       |
| MATH            | 3.5   | 11.8     | 10.2    | 10.1         | 10.7       | 21.7       |
| BBH             | 43.4  | 35.2     | 36.9    | 24.2         | 28.4       | 37.2       |
| HellaSwag       | 73.1  | 71.4     | 68.3    | 61.4         | 49.3       | 66.6       |
| Winogrande      | 74.4  | 66.8     | -       | 60.3         | 56.8       | 66.2       |
| ARC-C           | 61.1  | 48.5     | -       | 37.9         | 31.5       | 43.9       |
| TruthfulQA      | 44.5  | 33.1     | -       | 39.4         | 39.7       | 45.9       |
| C-Eval          | 23.4  | 28.0     | 51.1    | 59.7         | 58.2       | 70.6       |
| CMMLU           | 24.2  | -        | 51.1    | 57.8         | 55.1       | 70.3       |
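To make the generational improvement concrete, the sketch below copies the (Qwen1.5-1.8B, Qwen2-1.5B) score pairs from the comparison table above, restricted to benchmarks where both models report a number, and counts head-to-head wins for the newer model:

```python
# (Qwen1.5-1.8B, Qwen2-1.5B) score pairs taken from the comparison table,
# keeping only benchmarks where both models have a reported score.
scores = {
    "MMLU":       (46.8, 56.5),
    "HumanEval":  (20.1, 31.1),
    "MBPP":       (18.0, 37.4),
    "GSM8K":      (38.4, 58.5),
    "MATH":       (10.1, 21.7),
    "BBH":        (24.2, 37.2),
    "HellaSwag":  (61.4, 66.6),
    "Winogrande": (60.3, 66.2),
    "ARC-C":      (37.9, 43.9),
    "TruthfulQA": (39.4, 45.9),
    "C-Eval":     (59.7, 70.6),
    "CMMLU":      (57.8, 70.3),
}
wins = sum(qwen2 > qwen15 for qwen15, qwen2 in scores.values())
print(f"Qwen2-1.5B wins {wins}/{len(scores)} head-to-head benchmarks")
# -> Qwen2-1.5B wins 12/12 head-to-head benchmarks
```

Despite having slightly fewer non-embedding parameters (1.3B in both cases per the table), Qwen2-1.5B improves on its predecessor on every benchmark they share.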

