© 2026 DataLearner AI. DataLearner curates industry data and case studies so researchers, enterprises, and developers can rely on trustworthy intelligence.

AI Model Leaderboards

Live rankings across ARC-AGI-2, HLE, AIME 2025, SWE-bench Verified, and more — browse composite scores or drill into math, coding, and agent categories.

View benchmark details · Updated on 2026-05-02 07:14:49

As of May 2026, the leaders on the AA Intelligence Index are GPT-5.5 (xhigh), GPT-5.5 (high), and Opus 4.7 (max), based on a composite of 10 standardized capability benchmarks.

On the user-preference side, LMArena Text Generation currently ranks Opus 4.7 (thinking), Claude Opus 4.6 (thinking), and Claude Opus 4.6 near the top, based on anonymous A/B voting.

Scroll down for per-benchmark breakdowns in math, coding, and agent categories. See Data Methodology for scoring details, or browse LLM Blogs for in-depth commentary.

Composite Rankings

There is no single, universally agreed-upon comprehensive AI model ranking, so we selected two representative leaderboards that approach the question from different angles. Artificial Analysis Intelligence Index aggregates scores from 10 standardized benchmarks (coding, math, reasoning, etc.) to measure objective capability. LMArena (formerly Chatbot Arena) ranks models by Elo ratings derived from anonymous crowd-sourced A/B voting, reflecting real-world user preference. Together they offer both an objective and a subjective perspective.
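To make the voting side concrete, here is a minimal sketch of the classic Elo update applied to one anonymous A/B vote. The K-factor of 32 is an assumption for illustration; LMArena's production method fits ratings over all votes at once (a Bradley-Terry style model), so this shows the idea rather than their exact code.

```python
def expected(r_a: float, r_b: float) -> float:
    """Expected win probability of model A over model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return new (r_a, r_b) ratings after one anonymous A/B vote."""
    e_a = expected(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# Two evenly matched models: the winner gains 16 points, the loser drops 16.
ra, rb = update(1500.0, 1500.0, a_won=True)  # → (1516.0, 1484.0)
```

Because the expected score depends on the rating gap, an upset win against a much higher-rated model moves both ratings far more than a win between equals.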

AA Intelligence Index

Full ranking

Composite of 10 standardized benchmarks across coding, math, science, reasoning, and agentic tasks.

Updated 2026-05-10

#   Model                     Organization          Score
1   GPT-5.5 (xhigh)           OpenAI                60
2   GPT-5.5 (high)            OpenAI                59
3   Opus 4.7 (max)            Anthropic             57
4   Gemini 3.1 Pro Preview    Google DeepMind       57
5   GPT-5.5 (medium)          OpenAI                57
6   Kimi K2.6                 Moonshot AI           54
7   MiMo-V2.5-Pro             Xiaomi                54
8   GPT-5.3 Codex (xhigh)     OpenAI                54
9   Grok 4.3                  xAI                   53
10  Muse Spark                Facebook AI Research  52
Source: Artificial Analysis

LMArena Text Generation

Full ranking

Elo ratings from anonymous crowdsourced A/B voting, reflecting real user preference for response quality.

Updated 2026-05-07

#   Model                              Organization          Elo
1   Opus 4.7 (thinking)                Anthropic             1503
2   Claude Opus 4.6 (thinking)         Anthropic             1502
3   Claude Opus 4.6                    Anthropic             1498
4   Gemini 3.1 Pro Preview             Google DeepMind       1492
5   Opus 4.7                           Anthropic             1491
6   Muse Spark                         Facebook AI Research  1490
7   Gemini 3.0 Pro (Preview 11-2025)   Google DeepMind       1486
8   gpt-5.5-high                       OpenAI                1484
9   grok-4.20-beta1                    xAI                   1480
10  gpt-5.2-chat-latest-20260210       OpenAI                1477
Source: LMArena

Per-Benchmark Rankings

Filter by math, coding, agent, and more. Switch benchmarks below or jump into a category leaderboard for the full ranking. View all benchmarks.

Benchmark Tracks
  Overall: ARC-AGI-2 · HLE · MMLU Pro · Open Benchmark Directory
  Math: AIME 2025 · FrontierMath · MATH-500 · Open Math Leaderboard
  Coding: SWE-bench Verified · LiveCodeBench · SWE-Bench Pro · Open Coding Leaderboard
  Agent: τ²-Bench · Terminal Bench 2.0 · Aider-Polyglot · Open Agent Leaderboard

Filters
  Model Size: All · 3B and below · 7B · 13B · 34B · 65B · 100B and above
  Model Type: All · Reasoning Models · Foundation Models · Instruction/Chat Models · Coding Models
  Source: All · Open Source · Closed Source
  Origin: All · China

LLM Performance Results

Data source: DataLearnerAI
Scores shown are the best result across all evaluation modes. Click a model name for the full breakdown.
Rank  Model                         Organization          HLE    ARC-2  FM-T4  SWE-V  τ²-B   License
(ARC-2 = ARC-AGI-2 · FM-T4 = FrontierMath Tier 4 · SWE-V = SWE-bench Verified · τ²-B = τ²-Bench · — = not reported)
1     Qwen 3.6 Plus Preview         Alibaba               50.60  —      —      78.80  —      Proprietary
2     Claude Sonnet 4.5             Anthropic             33.60  13.60  4.20   82.00  84.70  Proprietary
3     M2.1                          MiniMaxAI             22.00  —      —      74.80  —      Free commercial
4     GPT-4.5                       OpenAI                —      —      —      38.00  —      Proprietary
5     Gemma 4 31B                   Google DeepMind       26.50  —      —      —      76.90  Free commercial
6     DeepSeek-V3.1 Terminus        DeepSeek-AI           21.70  —      —      68.40  37.00  Free commercial
7     DeepSeek-V3.1                 DeepSeek-AI           15.90  —      —      66.00  —      Free commercial
8     GLM-4.7                       Zhipu AI              42.80  —      2.10   73.80  87.40  Free commercial
9     Qwen3 Max (Preview)           Alibaba               11.10  —      —      69.60  74.00  Proprietary
10    GLM-4.6                       Zhipu AI              30.40  —      2.10   68.00  75.90  Free commercial
11    Qwen3-235B-A22B-2507          Alibaba               —      1.30   —      —      —      Free commercial
12    Gemma 4 26B A4B               Google DeepMind       17.20  —      —      —      68.20  Free commercial
13    Pangu Pro MoE                 Huawei                —      —      —      —      —      Free commercial
14    MiniMax M2                    MiniMaxAI             12.50  —      —      69.40  77.20  Free commercial
15    DeepSeek-V3-0324              DeepSeek-AI           5.20   —      —      38.80  38.80  Free commercial
16    Kimi K2                       Moonshot AI           4.70   —      0.01   51.80  64.30  Free commercial
17    GPT-4.1                       OpenAI                3.70   —      —      54.60  54.70  Proprietary
18    GPT-4o (2025-03-27)           OpenAI                —      —      —      —      —      Proprietary
19    Gemini 2.0 Pro Experimental   Google DeepMind       —      —      —      —      —      Proprietary
20    Pangu Embedded                Huawei                —      —      —      —      —      Free commercial
21    Qwen3-30B-A3B-2507            Alibaba               9.80   —      —      22.00  49.00  Free commercial
22    ERNIE-4.5-300B-A47B           Baidu                 —      —      —      —      —      Free commercial
23    Claude 3.5 Sonnet New         Anthropic             —      —      —      49.00  —      Proprietary
24    GPT-4o (2024-11-20)           OpenAI                —      —      —      31.00  —      Proprietary
25    Qwen2.5-Max                   Alibaba               —      —      —      —      —      Proprietary
26    DeepSeek-V3                   DeepSeek-AI           —      —      —      —      —      Free commercial
27    Grok 2                        xAI                   —      —      —      —      —      Free commercial
28    GLM-4-9B-Chat                 Zhipu AI              —      —      —      —      —      Free commercial
29    Gemini 2.0 Flash-Lite         Google DeepMind       —      —      —      —      —      Proprietary
30    Mistral-Small-3.2             MistralAI             —      —      —      —      —      Free commercial
31    Llama3.3-70B-Instruct         Facebook AI Research  —      —      —      —      —      Free commercial
32    Gemma 3 - 27B (IT)            Google DeepMind       —      —      —      —      —      Free commercial
33    Qwen3-Next                    Alibaba               —      —      —      —      —      Free commercial
34    Mixtral-8x22B-Instruct-v0.1   MistralAI             —      —      —      —      —      Free commercial
35    Llama3-70B-Instruct           Facebook AI Research  —      —      —      —      —      Free commercial
36    Phi-4-mini-instruct (3.8B)    Microsoft Azure       —      —      —      —      —      Free commercial
37    Llama3-70B                    Facebook AI Research  —      —      —      —      —      Free commercial
38    Grok-1.5                      xAI                   —      —      —      —      —      Proprietary
39    Llama3.1-8B-Instruct          Facebook AI Research  —      —      —      —      —      Free commercial
40    Moonlight-16B-A3B-Instruct    Moonshot AI           —      —      —      —      —      Free commercial
41    Mistral-7B-Instruct-v0.3      MistralAI             —      —      —      —      —      Free commercial
42    Claude Mythos Preview         Anthropic             64.70  —      —      93.90  —      Proprietary
43    GLM-5                         Zhipu AI              50.40  4.90   2.10   77.80  89.70  Free commercial
44    Claude Sonnet 4.6             Anthropic             49.00  58.30  8.30   79.60  —      Proprietary
45    GPT-5.2                       OpenAI                45.50  54.20  18.80  80.00  82.00  Proprietary
46    Grok 4 Heavy                  xAI                   44.40  —      2.10   73.50  —      Proprietary
47    Gemini 3.0 Flash              Google DeepMind       43.50  33.60  4.20   68.70  90.20  Proprietary
48    Gemini 2.5 Deep Think         Google DeepMind       34.80  —      10.40  —      —      Proprietary
49    Kimi K2 0905                  Moonshot AI           21.70  —      —      69.20  —      Free commercial
50    Grok 4 Fast                   xAI                   20.00  —      —      —      —      Proprietary
Showing 50 of 60 models. View MMLU Pro benchmark page.
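The "best result across all evaluation modes" rule above can be sketched as a max over runs. The run list below is illustrative (the "default"-mode score is invented; only the "thinking" scores echo the table), and the tuple schema is an assumption, not DataLearner's actual data model:

```python
from collections import defaultdict

# Hypothetical evaluation runs: (model, benchmark, mode, score).
runs = [
    ("GLM-5", "SWE-bench Verified", "default", 74.10),   # invented score
    ("GLM-5", "SWE-bench Verified", "thinking", 77.80),  # matches the table
    ("GLM-5", "HLE", "thinking", 50.40),                 # matches the table
]

# Keep the maximum score per (model, benchmark) pair across all modes.
best = defaultdict(lambda: float("-inf"))
for model, bench, mode, score in runs:
    best[(model, bench)] = max(best[(model, bench)], score)

# best[("GLM-5", "SWE-bench Verified")] → 77.8 (the thinking-mode run wins)
```

This is why a single leaderboard cell can come from different modes for different models; click through to the model page to see which mode produced it.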

Leaderboard FAQ

01

Where does the leaderboard data come from?

Scores are aggregated from primary sources: official model cards, technical reports, papers, vendor blog posts, and reproducible third-party evaluations. Each row links back to the underlying model detail page where the source is cited.

02

Why do scores for the same model differ across benchmarks?

Each benchmark measures a different capability — reasoning (HLE, ARC-AGI-2), math (AIME, FrontierMath), coding (SWE-bench Verified), agent use (τ²-Bench), and so on. A model tuned for one capability may perform very differently on another, which is exactly why we surface per-benchmark scores rather than a single number.

03

How often is the leaderboard updated?

Data is revalidated every 5 minutes, and new models or evaluation results are added as soon as they are published. The "Updated on" indicator at the top of the page reflects the most recent data refresh.

04

How should I read the composite ranking?

The composite view aggregates a model's standing across multiple core benchmarks. It is a useful first filter, but for production decisions you should drill into the specific benchmark closest to your workload — for example, SWE-bench Verified for coding agents, or τ²-Bench for tool-use scenarios.
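As a toy illustration of what "aggregating a model's standing" means, the sketch below averages the non-missing scores from the table above (the real AA Intelligence Index uses its own benchmark set and weighting, so this is the idea, not their formula):

```python
def composite(scores: dict) -> float:
    """Mean of the non-missing benchmark scores (all on a 0-100 scale)."""
    present = [v for v in scores.values() if v is not None]
    return sum(present) / len(present)

# GPT-5.2's five per-benchmark scores from the table above.
gpt_5_2 = {
    "HLE": 45.50,
    "ARC-AGI-2": 54.20,
    "FrontierMath Tier 4": 18.80,
    "SWE-bench Verified": 80.00,
    "τ²-Bench": 82.00,
}
score = composite(gpt_5_2)  # (45.5 + 54.2 + 18.8 + 80.0 + 82.0) / 5 ≈ 56.1
```

Note how the low FrontierMath score drags the average down even though the model is strong on coding; this is exactly the distortion that makes per-benchmark drill-downs worth checking.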

05

How do I compare an open-source model with a closed API model?

Use the license filter at the top to mix open and closed models in the same view, then look at the same benchmark column for both. Beyond raw scores, consider total cost of ownership: API pricing for closed models vs. self-hosting cost for open weights.
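A back-of-envelope version of that cost comparison might look like the sketch below. Every price here is a hypothetical placeholder, not a real vendor rate; substitute your own token volume, API pricing, and GPU rental costs:

```python
def api_cost(tokens_per_month: float, usd_per_mtok: float) -> float:
    """Monthly API spend at a blended per-million-token price."""
    return tokens_per_month / 1_000_000 * usd_per_mtok

def selfhost_cost(gpu_hours_per_month: float, usd_per_gpu_hour: float) -> float:
    """Monthly GPU rental for serving an open-weights model."""
    return gpu_hours_per_month * usd_per_gpu_hour

api = api_cost(2_000_000_000, 5.00)    # 2B tokens at $5/Mtok  → $10,000/month
hosted = selfhost_cost(720 * 4, 2.00)  # 4 GPUs, 720 h at $2/h → $5,760/month
cheaper = "self-host" if hosted < api else "closed API"
```

The crossover point depends heavily on utilization: self-hosting wins at sustained high volume, while pay-per-token APIs win for bursty or low-volume workloads, and neither figure includes engineering time for serving infrastructure.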