Codestral
Codestral is a code-generation AI model published by Mistral AI, released on 2024-05-29, with 22.0B parameters and a 32K-token context length. It requires about 44GB of storage and is distributed under the Mistral AI Non-Production License.
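As a quick sanity check on the spec figures, parameter count times bytes per weight gives the approximate download size. A minimal sketch, assuming 16-bit (2-byte) weights, which is a common distribution format; note that the stated ~44GB of storage corresponds to roughly 22 billion parameters at this precision:

```python
# Rough storage estimate: parameters x bytes-per-weight.
# Assumption: weights stored in fp16/bf16 (2 bytes each).
params = 22.0e9          # ~22B parameters
bytes_per_param = 2      # 16-bit weights
storage_gb = params * bytes_per_param / 1e9
print(round(storage_gb))  # ≈ 44 GB, matching the listed storage requirement
```

Quantized variants (e.g. 8-bit or 4-bit) would shrink this proportionally, which is why on-disk sizes for the same model often vary between distributions.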
Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators.
Codestral's current benchmark results include MBPP (ranked 12 of 28, score 78.20), HumanEval (ranked 22 of 39, score 81.10), and LiveCodeBench (ranked 111 of 118, score 31.50). This page also consolidates core specs, context limits, and API pricing so you can evaluate the model on benchmark results and deployment constraints together.
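To illustrate what evaluating the model over an API involves in practice, the sketch below builds a chat-completion request body. The endpoint URL and the model identifier `codestral-latest` follow Mistral's public API conventions but are assumptions here, not taken from this page; the actual HTTP call is left commented out since it requires an API key.

```python
import json

# Assumed endpoint and model name -- verify against Mistral's API docs.
API_URL = "https://codestral.mistral.ai/v1/chat/completions"

# Build the JSON request body for a chat-completion call.
payload = {
    "model": "codestral-latest",  # assumed model identifier
    "messages": [
        {"role": "user",
         "content": "Write a Python function that reverses a string."}
    ],
    "max_tokens": 256,
}
body = json.dumps(payload)
print(body[:40])

# To actually send the request (hypothetical usage, needs a real key):
# import urllib.request
# req = urllib.request.Request(
#     API_URL, data=body.encode(),
#     headers={"Authorization": "Bearer <KEY>",
#              "Content-Type": "application/json"})
# resp = urllib.request.urlopen(req)
```

Keeping prompts within the 32K-token context window matters for coding workloads, where large files or multi-file context can exhaust the limit quickly.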
