Mistral-7B-Instruct-v0.3
Mistral-7B-Instruct-v0.3 is an instruction-tuned language model published by MistralAI on 2024-05-22, with 7.25B parameters and a 32K-token context length, requiring about 14GB of storage, released under the Apache 2.0 license.
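The ~14GB storage figure follows directly from the parameter count at half precision (2 bytes per parameter). A minimal sketch of that arithmetic, assuming Mistral-7B's published 7.25B parameter count; the exact on-disk size varies with file format and metadata:

```python
# Rough weight-storage estimate for a 7.25B-parameter model in fp16/bf16.
# 7.25B is an assumption based on Mistral-7B's published parameter count;
# quantized formats (e.g. 4-bit) would shrink this roughly 4x.
PARAMS = 7.25e9
BYTES_PER_PARAM = 2  # fp16/bf16

size_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"{size_gb:.1f} GB")  # ≈ 14.5 GB, consistent with the ~14GB figure above
```

The same calculation gives a quick lower bound on GPU memory for inference, before accounting for activations and the KV cache.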
Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators.
Mistral-7B-Instruct-v0.3's strongest reported benchmark rankings are ARC (rank 3 of 4, score 60), GSM8K (rank 22 of 26, score 36.20), and BBH (rank 17 of 20, score 56.10). This page also consolidates core specs, context limits, and API pricing, so you can evaluate the model on benchmark results and deployment constraints together.
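One practical deployment detail for the Instruct variants is the chat markup they were fine-tuned on. A minimal sketch, assuming Mistral's documented `[INST] ... [/INST]` instruction format; the canonical template (including BOS/EOS handling for multi-turn chats) should be taken from the model's tokenizer chat template on Hugging Face:

```python
# Sketch of the Mistral-Instruct single-turn prompt format (assumed:
# "<s>[INST] ... [/INST]"). Verify against the tokenizer's chat template
# before relying on it in production.
def format_prompt(user_message: str) -> str:
    return f"<s>[INST] {user_message} [/INST]"

print(format_prompt("Summarize this paragraph in one sentence."))
```

In practice, prefer `tokenizer.apply_chat_template(...)` from the `transformers` library, which applies the model's own template rather than a hand-rolled string.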
