
Codestral

Coding model

Release date: 2024-05-29 · Updated: 2024-05-30 15:12:07
  • Parameters: 22B
  • Context length: 32K
  • Chinese support: Not supported
  • Reasoning ability: No data

Codestral is a coding model published by MistralAI on 2024-05-29. It has 22B parameters and a 32K-token context length, requires about 44GB of storage, and is distributed under the Mistral AI Non-Production License.
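The ~44GB figure is consistent with storing 22B parameters at 16-bit precision; a quick back-of-the-envelope check (the 2-bytes-per-parameter assumption corresponds to fp16/bf16 weights, not a statement about the published checkpoint format):

```python
# Rough weight-storage estimate: parameters × bytes per parameter.
# Assumes fp16/bf16 weights (2 bytes each); quantized formats are smaller.
def weight_size_gb(n_params: float, bytes_per_param: int = 2) -> float:
    return n_params * bytes_per_param / 1e9

print(weight_size_gb(22e9))  # 22B params at 2 bytes/param ≈ 44.0 GB
```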

Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then from benchmark leaderboards, then from third-party evaluators. Learn about our data methodology

Model basics

  • Reasoning traces: Not supported
  • Thinking modes: Not supported
  • Context length: 32K tokens
  • Max output length: No data
  • Model type: Coding model
  • Release date: 2024-05-29
  • Model file size: 44GB
  • MoE architecture: No
  • Total params / Active params: 22B / N/A
  • Knowledge cutoff: No data

Open source & experience

  • Code license: Mistral AI Non-Production License
  • Weights license: Mistral AI Non-Production License (commercial use not permitted)
  • GitHub repo: link unavailable
  • Hugging Face: https://huggingface.co/mistralai/Codestral-22B-v0.1
  • Live demo: none

Official resources

  • Paper: Codestral: Hello, World! Empowering developers and democratising coding with Mistral AI.
  • DataLearnerAI blog: no blog post yet

API details

  • API speed: No data
  • Pricing: no public API pricing yet

Benchmark Results

Codestral's strongest benchmark results are on MBPP (score 78.20, rank 12/28) and HumanEval (score 81.10, rank 22/39), with a weaker showing on LiveCodeBench (score 31.50, rank 111/118). This page also consolidates core specs, context limits, and API details so you can weigh benchmark results against deployment constraints.

Coding and Software Engineering (3 evaluations)

  Benchmark       Mode            Score    Rank / total
  HumanEval       Standard Mode   81.10    22 / 39
  MBPP            Standard Mode   78.20    12 / 28
  LiveCodeBench   Standard Mode   31.50    111 / 118
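Scores such as the HumanEval 81.10 above are produced by executing model completions against each task's unit tests (pass@1). A minimal sketch of that check, with a hypothetical hand-written completion standing in for real model output (this is not the official evaluation harness, which also sandboxes execution):

```python
# Minimal pass/fail check in the spirit of HumanEval: run a candidate
# completion against the task's unit tests. Task and completion here are
# hypothetical stand-ins for a benchmark item and a model's output.
def passes_tests(candidate_src: str, test_src: str) -> bool:
    env: dict = {}
    try:
        exec(candidate_src, env)  # define the candidate function
        exec(test_src, env)       # run the task's assertions
        return True
    except Exception:
        return False

candidate = "def add(a, b):\n    return a + b\n"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"
print(passes_tests(candidate, tests))  # True for this correct candidate
```

The benchmark score is then the fraction of tasks whose sampled completion passes all of its tests.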

Compare with other models

No curated comparisons for this model yet.

Want a custom combination? Open the compare tool

Publisher

MistralAI