AI Model Leaderboards

Live rankings across ARC-AGI-2, HLE, AIME 2025, SWE-bench Verified, and more — browse composite scores or drill into math, coding, and agent categories.

Updated on 2026-05-02 07:14:49

As of May 2026, the Artificial Analysis (AA) Intelligence Index is led by GPT-5.5 (xhigh), GPT-5.5 (high), and Opus 4.7 (max), based on 10 standardized capability benchmarks.

On the user-preference side, LMArena Text Generation currently ranks Opus 4.7 (thinking), Claude Opus 4.6 (thinking), and Claude Opus 4.6 near the top via anonymous A/B voting.

Scroll down for per-benchmark breakdowns in math, coding, and agent categories. See Data Methodology for scoring details, or browse LLM Blogs for in-depth commentary.

Composite Rankings

There is no single, universally agreed-upon comprehensive AI model ranking, so we selected two representative leaderboards that approach the question from different angles. Artificial Analysis Intelligence Index aggregates scores from 10 standardized benchmarks (coding, math, reasoning, etc.) to measure objective capability. LMArena (formerly Chatbot Arena) ranks models by Elo ratings derived from anonymous crowd-sourced A/B voting, reflecting real-world user preference. Together they offer both an objective and a subjective perspective.
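For intuition about the Elo side, here is a minimal sketch of a classic online Elo update after a single head-to-head vote. The starting ratings and K-factor below are illustrative assumptions; LMArena's production rankings are fit statistically over the full vote history rather than updated one vote at a time.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Win probability for model A against model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, outcome: float, k: float = 32.0) -> tuple[float, float]:
    """Return updated (rating_a, rating_b) after one vote.

    outcome: 1.0 if A wins, 0.0 if B wins, 0.5 for a tie.
    k (the K-factor) is an illustrative choice, not LMArena's parameter.
    """
    e_a = expected_score(r_a, r_b)
    delta = k * (outcome - e_a)
    return r_a + delta, r_b - delta

# Example: a 1500-rated model beats a 1490-rated one in a single A/B vote.
new_a, new_b = elo_update(1500.0, 1490.0, outcome=1.0)
print(round(new_a, 1), round(new_b, 1))  # 1515.5 1474.5
```

Because the expected score depends only on the rating gap, an upset win against a much stronger model moves both ratings far more than a win over a near-peer.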

AA Intelligence Index

Full ranking

Composite of 10 standardized benchmarks across coding, math, science, reasoning, and agentic tasks.

Updated 2026-05-10

#   Model                   Organization              Score
1   GPT-5.5 (xhigh)         OpenAI                    60
2   GPT-5.5 (high)          OpenAI                    59
3   Opus 4.7 (max)          Anthropic                 57
4   Gemini 3.1 Pro Preview  Google DeepMind           57
5   GPT-5.5 (medium)        OpenAI                    57
6   Kimi K2.6               Moonshot AI               54
7   MiMo-V2.5-Pro           Xiaomi                    54
8   GPT-5.3 Codex (xhigh)   OpenAI                    54
9   Grok 4.3                xAI                       53
10  Muse Spark              Facebook AI Research Lab  52
Source: Artificial Analysis

LMArena Text Generation

Full ranking

Elo ratings from anonymous crowdsourced A/B voting, reflecting real user preference for response quality.

Updated 2026-05-07

#   Model                             Organization              Elo
1   Opus 4.7 (thinking)               Anthropic                 1503
2   Claude Opus 4.6 (thinking)        Anthropic                 1502
3   Claude Opus 4.6                   Anthropic                 1498
4   Gemini 3.1 Pro Preview            Google DeepMind           1492
5   Opus 4.7                          Anthropic                 1491
6   Muse Spark                        Facebook AI Research Lab  1490
7   Gemini 3.0 Pro (Preview 11-2025)  Google DeepMind           1486
8   gpt-5.5-high                      OpenAI                    1484
9   grok-4.20-beta1                   xAI                       1480
10  gpt-5.2-chat-latest-20260210      OpenAI                    1477
Source: LMArena

Per-Benchmark Rankings

Filter by math, coding, agent, and more. Switch benchmarks below or jump into a category leaderboard for the full ranking. View all benchmarks.

Benchmark Tracks

  • Overall: ARC-AGI-2, HLE, MMLU Pro (Open Benchmark Directory)
  • Math: AIME 2025, FrontierMath, MATH-500 (Open Math Leaderboard)
  • Coding: SWE-bench Verified, LiveCodeBench, SWE-Bench Pro (Open Coding Leaderboard)
  • Agent: τ²-Bench, Terminal Bench 2.0, Aider-Polyglot (Open Agent Leaderboard)

Filters

  • Model Size: All, 3B and below, 7B, 13B, 34B, 65B, 100B and above
  • Model Type: All, Reasoning Models, Foundation Models, Instruction/Chat Models, Coding Models
  • Source: All, Open Source, Closed Source
  • Origin: All, China

LLM Performance Results

Data source: DataLearnerAI
Scores shown are the best result across all evaluation modes; a dash (—) indicates no published result. A short sketch of this best-across-modes aggregation follows the table. Click a model name for the full breakdown.
Rank  Model                             Organization              HLE    ARC-AGI-2  FrontierMath Tier 4  SWE-bench Verified  τ²-Bench  License
1     OpenAI o1                         OpenAI                    9.10   —          —                    48.90               —         Proprietary
2     Gemini 3.0 Pro (Preview 11-2025)  Google DeepMind           45.80  45.10      18.80                76.20               85.40     Proprietary
3     Opus 4.5                          Anthropic                 43.20  37.60      4.20                 80.90               81.99     Proprietary
4     Qwen 3.6 Plus Preview             Alibaba                   50.60  —          —                    78.80               —         Proprietary
5     Claude Sonnet 4.5                 Anthropic                 33.60  13.60      4.20                 82.00               84.70     Proprietary
6     Opus 4.1                          Anthropic                 —      —          4.20                 74.50               —         Proprietary
7     Hunyuan-T1                        Tencent AI Lab            —      —          —                    —                   —         Proprietary
8     Grok 4                            xAI                       38.60  15.90      2.10                 58.60               —         Proprietary
9     GPT-4.5                           OpenAI                    —      —          —                    38.00               —         Proprietary
10    Gemini 2.5 Pro                    Google DeepMind           21.60  4.90       2.10                 67.20               —         Proprietary
11    Qwen3-Max-Thinking                Alibaba                   49.80  —          —                    75.30               82.10     Proprietary
12    OpenAI o3                         OpenAI                    20.32  6.50       2.10                 69.10               —         Proprietary
13    Grok 4.1 Fast                     xAI                       17.60  —          —                    —                   82.71     Proprietary
14    Claude Opus 4                     Anthropic                 10.70  8.60       4.20                 72.50               72.50     Proprietary
15    Claude Sonnet 4                   Anthropic                 9.60   5.90       —                    80.20               52.00     Proprietary
16    Qwen3 Max (Preview)               Alibaba                   11.10  —          —                    69.60               74.00     Proprietary
17    OpenAI o4-mini                    OpenAI                    17.70  —          6.30                 68.10               56.90     Proprietary
18    GPT-4.1                           OpenAI                    3.70   —          —                    54.60               54.70     Proprietary
19    OpenAI o1-mini                    OpenAI                    —      —          —                    —                   —         Proprietary
20    Haiku 4.5                         Anthropic                 9.70   4.50       2.10                 73.30               33.00     Proprietary
21    GPT-4o (2025-03-27)               OpenAI                    —      —          —                    —                   —         Proprietary
22    Gemini 2.0 Pro Experimental       Google DeepMind           —      —          —                    —                   —         Proprietary
23    Hunyuan-TurboS                    Tencent AI Lab            —      —          —                    —                   —         Proprietary
24    GPT-5-mini                        OpenAI                    5.00   —          6.30                 —                   —         Proprietary
25    Claude 3.5 Sonnet New             Anthropic                 —      —          —                    49.00               —         Proprietary
26    GPT-4o                            OpenAI                    5.30   —          —                    31.00               —         Proprietary
27    GPT-4o (2024-11-20)               OpenAI                    —      —          —                    31.00               —         Proprietary
28    Claude 3.5 Sonnet                 Anthropic                 —      —          —                    —                   —         Proprietary
29    Gemini 2.0 Flash Experimental     Google DeepMind           5.10   —          —                    21.40               —         Proprietary
30    Gemini 1.5 Pro                    Google DeepMind           —      —          —                    —                   —         Proprietary
31    Qwen2.5-Max                       Alibaba                   —      —          —                    —                   —         Proprietary
32    Gemini 2.0 Flash-Lite             Google DeepMind           —      —          —                    —                   —         Proprietary
33    Claude 3 Opus                     Anthropic                 —      —          —                    —                   —         Proprietary
34    Claude 3.5 Haiku                  Anthropic                 —      —          —                    —                   —         Proprietary
35    GPT-4o mini                       OpenAI                    —      —          —                    —                   —         Proprietary
36    Claude 3 Sonnet                   Anthropic                 —      —          —                    —                   —         Proprietary
37    Grok-1.5                          xAI                       —      —          —                    —                   —         Proprietary
38    Gemini 2.5 Flash                  Google DeepMind           11.00  —          4.20                 50.00               —         Proprietary
39    Claude Mythos Preview             Anthropic                 64.70  —          —                    93.90               —         Proprietary
40    Gemini 2.5 Flash-Lite             Google DeepMind           6.90   —          —                    27.60               —         Proprietary
41    GPT-5                             OpenAI                    35.20  9.90       12.50                72.80               80.00     Proprietary
42    GPT-5.4 Pro                       OpenAI                    58.70  83.30      38.00                —                   —         Proprietary
43    Muse Spark                        Facebook AI Research Lab  58.00  42.50      14.60                77.40               —         Proprietary
44    GPT-5.5 Pro                       OpenAI                    57.20  84.60      39.60                —                   —         Proprietary
45    Opus 4.7                          Anthropic                 54.70  75.80      22.90                87.60               —         Proprietary
46    Claude Opus 4.6                   Anthropic                 53.00  66.30      22.90                80.84               91.89     Proprietary
47    GPT-5.5                           OpenAI                    52.20  85.00      35.40                —                   —         Proprietary
48    GPT-5.4                           OpenAI                    52.10  77.10      27.10                —                   —         Proprietary
49    Gemini 3.1 Pro Preview            Google DeepMind           51.40  77.10      16.70                80.60               90.80     Proprietary
50    GPT-5.2 Pro                       OpenAI                    50.00  54.20      31.30                —                   —         Proprietary
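As promised above, here is a minimal sketch of the best-across-modes rule the table applies. The published bests (87.60 and 52.20) match the table; the other per-mode scores, and which mode produced each best, are hypothetical illustrations.

```python
# Hypothetical raw records: one row per (model, benchmark, evaluation mode).
raw_results = [
    {"model": "Opus 4.7", "benchmark": "SWE-bench Verified", "mode": "standard", "score": 84.1},
    {"model": "Opus 4.7", "benchmark": "SWE-bench Verified", "mode": "max",      "score": 87.6},
    {"model": "GPT-5.5",  "benchmark": "HLE",                "mode": "high",     "score": 50.4},
    {"model": "GPT-5.5",  "benchmark": "HLE",                "mode": "xhigh",    "score": 52.2},
]

# Keep only the best score per (model, benchmark) pair, as the table does.
best: dict[tuple[str, str], float] = {}
for row in raw_results:
    key = (row["model"], row["benchmark"])
    best[key] = max(best.get(key, float("-inf")), row["score"])

print(best[("Opus 4.7", "SWE-bench Verified")])  # 87.6
```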
Showing 50 of 101 models · View MMLU Pro benchmark page

Leaderboard FAQ

01

Where does the leaderboard data come from?

Scores are aggregated from primary sources: official model cards, technical reports, papers, vendor blog posts, and reproducible third-party evaluations. Each row links back to the underlying model detail page where the source is cited.

02

Why do scores for the same model differ across benchmarks?

Each benchmark measures a different capability — reasoning (HLE, ARC-AGI-2), math (AIME, FrontierMath), coding (SWE-bench Verified), agent use (τ²-Bench), and so on. A model tuned for one capability may perform very differently on another, which is exactly why we surface per-benchmark scores rather than a single number.

03

How often is the leaderboard updated?

Data is revalidated every 5 minutes, and new models or evaluation results are added as soon as they are published. The "Updated on" indicator at the top of the page reflects the most recent data refresh.

04

How should I read the composite ranking?

The composite view aggregates a model's standing across multiple core benchmarks. It is a useful first filter, but for production decisions you should drill into the specific benchmark closest to your workload — for example, SWE-bench Verified for coding agents, or τ²-Bench for tool-use scenarios.
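To make "aggregating a model's standing" concrete, here is a toy sketch: min-max normalize each benchmark across models, then take an equal-weight mean over the benchmarks a model was actually evaluated on. The equal weighting and the normalization scheme are assumptions for illustration, not the Artificial Analysis methodology, and the scores below are placeholders.

```python
# Hypothetical per-benchmark scores for two models; None = not evaluated.
scores = {
    "model_a": {"HLE": 52.2, "SWE-bench Verified": None, "ARC-AGI-2": 85.0},
    "model_b": {"HLE": 54.7, "SWE-bench Verified": 87.6, "ARC-AGI-2": 75.8},
}
benchmarks = ["HLE", "SWE-bench Verified", "ARC-AGI-2"]

def composite(model: str) -> float:
    """Equal-weight mean of min-max-normalized benchmark scores (toy scheme)."""
    parts = []
    for b in benchmarks:
        vals = [s[b] for s in scores.values() if s[b] is not None]
        lo, hi = min(vals), max(vals)
        v = scores[model][b]
        if v is None:
            continue  # skip benchmarks the model was not run on
        parts.append(0.5 if hi == lo else (v - lo) / (hi - lo))
    return sum(parts) / len(parts)

for m in scores:
    print(m, round(composite(m), 3))
```

Note how missing results change the denominator: a model evaluated on fewer benchmarks is averaged over only those, which is one reason composite indices can flatter narrowly evaluated models.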

05

How do I compare an open-source model with a closed API model?

Use the license filter at the top to mix open and closed models in the same view, then look at the same benchmark column for both. Beyond raw scores, consider total cost of ownership: API pricing for closed models vs. self-hosting cost for open weights.
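For the cost side of that comparison, a back-of-the-envelope sketch can help frame the decision. Every number below (traffic volume, per-token price, GPU cost, replica count) is a placeholder assumption to be replaced with current quotes, and real TCO also includes ops and engineering overhead.

```python
# Back-of-the-envelope monthly cost comparison; all figures are placeholders.
monthly_tokens = 2_000_000_000           # assumed 2B tokens/month of traffic

# Closed API: pay per token (blended input/output price, USD per 1M tokens).
api_price_per_m = 5.00                   # hypothetical blended price
api_monthly = monthly_tokens / 1_000_000 * api_price_per_m

# Open weights, self-hosted: pay for GPU hours regardless of utilization.
gpu_hourly = 3.50                        # hypothetical cost per GPU-hour
gpus = 4                                 # hypothetical replica count
hours_per_month = 730
selfhost_monthly = gpu_hourly * gpus * hours_per_month

print(f"API:       ${api_monthly:,.0f}/month")       # $10,000
print(f"Self-host: ${selfhost_monthly:,.0f}/month")  # $10,220
# Near the crossover point under these assumptions: below this traffic the
# API is cheaper; above it, self-hosting wins on raw compute cost.
```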