
Mistral-7B-Instruct-v0.3

Release date: 2024-05-22 (updated 2024-05-23 08:19:44)
  • Parameters: 7.0B
  • Context length: 4K
  • Chinese support: Not supported
  • Reasoning ability: Not supported

Data sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators. Learn about our data methodology


Model basics

  • Reasoning traces: Not supported
  • Thinking modes: Not supported
  • Context length: 4K tokens
  • Max output length: No data
  • Model type: Chat LLM
  • Release date: 2024-05-22
  • Model file size: 14GB
  • MoE architecture: No
  • Total params / Active params: 7.0B / N/A
  • Knowledge cutoff: No data
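The 14GB file size is consistent with a dense 7.0B-parameter model stored in 16-bit floats; a quick back-of-the-envelope check:

```python
# Rough arithmetic: 16-bit weights take 2 bytes per parameter, so a
# dense 7.0B-parameter checkpoint should weigh in around 14 GB.
params = 7.0e9          # total parameters
bytes_per_param = 2     # fp16 / bf16 weights
size_gb = params * bytes_per_param / 1e9
print(f"{size_gb:.1f} GB")  # → 14.0 GB
```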

Open source & experience

  • Code license: Apache 2.0
  • Weights license: Apache 2.0 (free for commercial use)
  • GitHub repo: GitHub link unavailable
  • Hugging Face: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3
  • Live demo: No live demo
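The instruct versions of Mistral-7B expect user turns wrapped in `[INST] ... [/INST]` markers. As a minimal sketch, the prompt string can be assembled by hand (in practice, `tokenizer.apply_chat_template` from the Hugging Face transformers library should be used so the special tokens match the model's tokenizer exactly):

```python
# Sketch of the Mistral instruct prompt layout: each user turn is wrapped
# in [INST] ... [/INST]; assistant replies follow and are closed with </s>.
def build_prompt(messages):
    parts = ["<s>"]
    for msg in messages:
        if msg["role"] == "user":
            parts.append(f"[INST] {msg['content']} [/INST]")
        elif msg["role"] == "assistant":
            parts.append(f" {msg['content']}</s>")
    return "".join(parts)

prompt = build_prompt([
    {"role": "user", "content": "What is the capital of France?"},
])
print(prompt)  # <s>[INST] What is the capital of France? [/INST]
```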

Official resources

  • Paper: No paper available
  • DataLearnerAI blog: No blog post yet

API details

  • API speed: No data
  • API pricing: No public API pricing yet

Benchmark Results

Mistral-7B-Instruct-v0.3's strongest relative rankings come on ARC (rank 3 / 4, score 60), BBH (rank 15 / 18, score 56.10), and GSM8K (rank 20 / 24, score 36.20); all scores below were recorded with thinking mode off. This page also consolidates core specs, context limits, and API pricing so the model can be evaluated on benchmark results and deployment constraints together.


Comprehensive evaluation (4 evaluations)

  • MMLU: 64.20 (rank 62 / 63)
  • BBH: 56.10 (rank 15 / 18)
  • MMLU Pro: 30.90 (rank 112 / 114)
  • GPQA Diamond: 24.70 (rank 157 / 161)

Mathematical reasoning (2 evaluations)

  • GSM8K: 36.20 (rank 20 / 24)
  • MATH: 10.20 (rank 40 / 41)

Programming & software engineering (2 evaluations)

  • MBPP: 51.10 (rank 25 / 27)
  • HumanEval: 29.30 (rank 36 / 38)

Commonsense reasoning (1 evaluation)

  • ARC: 60 (rank 3 / 4)

Publisher

MistralAI

Model Overview

Mistral-7B-Instruct-v0.3 is the v0.3 release of MistralAI's open-source 7-billion-parameter large language model Mistral-7B, obtained by instruction fine-tuning the base model. Compared with v0.2, it brings three main improvements:

  • The vocabulary is extended from 32000 to 32768 tokens
  • Support for the v3 tokenizer
  • Support for function calling

Of these, the most significant is the support for function calling, which suggests that function-calling corpora were included during pre-training, or that related datasets were used in the fine-tuning stage. Mistral-7B has long been one of the strongest models in the 7-billion-parameter class, and v0.3's function-calling support pushes 7B-scale models a notable step forward.
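As a rough illustration of what function calling looks like on the consuming side: the v3 tokenizer adds control tokens such as `[TOOL_CALLS]`, and when the model decides to call a function it emits that token followed by a JSON list of calls. The exact token layout here is an assumption for illustration; the official mistral-inference / mistral-common tooling defines the authoritative format.

```python
import json

# Hedged sketch: split a raw completion on the assumed [TOOL_CALLS]
# control token and parse the JSON list of function calls that follows.
def parse_tool_calls(model_output: str):
    """Extract function calls from a raw model completion, if any."""
    marker = "[TOOL_CALLS]"
    if marker not in model_output:
        return []  # plain text answer, no function call
    payload = model_output.split(marker, 1)[1].strip()
    return json.loads(payload)

# Hypothetical completion for a weather question (get_weather is a
# made-up tool name, not part of the model):
out = '[TOOL_CALLS] [{"name": "get_weather", "arguments": {"city": "Paris"}}]'
calls = parse_tool_calls(out)
print(calls[0]["name"])  # get_weather
```

The caller would then execute the named function and feed the result back to the model in a follow-up turn.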
