
LLM Coding Benchmark Leaderboard

This page hosts the LLM coding benchmark leaderboard, covering the SWE-bench Verified, SWE-Bench Pro, LiveCodeBench, and SWE-bench Multilingual datasets and comparing models from the GPT, Claude, Qwen, and DeepSeek families.

Updated on 2026-05-02 07:10:24

As of 2026-05, the leaderboard covers SWE-bench Verified, LiveCodeBench, SWE-Bench Pro - Public, SWE-bench Multilingual, and related coding benchmarks, making it straightforward to compare models within the same task family.

Click any model name to check context length, licensing, and pricing on its detail page. See Data Methodology for scoring details.
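
Most benchmarks on this page report a single percentage. For the SWE-bench family it is the resolved rate: the share of real repository issues for which the model's patch passes all tests (SWE-bench Verified is a 500-instance, human-validated subset). LiveCodeBench is typically reported as pass@1 over recent contest problems. Here is a minimal sketch of both conventions, with illustrative function names that are not DataLearner's API:

```python
from math import comb

def resolved_rate(resolved: int, total: int) -> float:
    """SWE-bench-style score: percent of issues whose patch passes all tests."""
    return 100.0 * resolved / total

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): probability that at
    least one of k samples, drawn from n generations with c correct, passes."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# 70.60 on SWE-bench Verified (500 instances) corresponds to ~353 resolved issues.
print(resolved_rate(353, 500))    # 70.6
print(pass_at_k(n=10, c=3, k=1))  # 0.3 (pass@1 reduces to c/n)
```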

Reference: Composite Coding Rankings

There is no single, universally accepted coding leaderboard. Static benchmarks like SWE-bench and HumanEval measure specific skills but can be gamed through targeted fine-tuning. We selected two complementary human-preference leaderboards: LMArena Coding Arena ranks models on general programming tasks (debugging, algorithms, code generation) via anonymous crowd-sourced voting; DesignArena Code Category focuses specifically on visual, front-end code generation (websites, UI components, games) using the same blind-voting methodology. Reading both together gives a fuller picture of coding capability.
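
For context, arena Elo works like chess Elo: each anonymous vote is treated as a game, and ratings drift until they match observed win rates. Below is a minimal sketch of the pairwise update (LMArena itself fits a Bradley-Terry model over all votes, which this online rule approximates; the K-factor and 400-point scale here are conventional choices, not LMArena's published parameters):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Predicted win probability of A over B under the Elo/logistic model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, score_a: float,
               k: float = 32.0) -> tuple[float, float]:
    """Apply one vote: score_a is 1.0 if A wins, 0.0 if B wins, 0.5 for a tie."""
    delta = k * (score_a - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta

# Ratings from the table below: 1569 (rank 1) vs. 1524 (rank 10)
print(round(expected_score(1569, 1524), 3))  # 0.564, a ~56% expected win rate
```

In other words, the 45-point spread across this top 10 corresponds to only a modest head-to-head preference, so small Elo gaps should not be over-read.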

LMArena Coding Arena

Elo ratings from anonymous A/B voting on real general coding tasks (debugging, algorithms, code generation) submitted by developers.

Updated 2026-05-07

# | Model | Organization | Elo
1 | Opus 4.7 (thinking) | Anthropic | 1569
2 | Claude Opus 4.6 (thinking) | Anthropic | 1553
3 | Opus 4.7 | Anthropic | 1550
4 | Claude Opus 4.6 | Anthropic | 1550
5 | Claude Opus 4 (thinking-32k) | Anthropic | 1531
6 | Muse Spark | Facebook AI Research | 1530
7 | Gemini 3.1 Pro Preview | Google DeepMind | 1529
8 | gpt-5.4-high | OpenAI | 1528
9 | GLM 5.1 | Zhipu AI | 1525
10 | gpt-5.5-high | OpenAI | 1524

Source: LMArena

LLM Performance Results

Data source: DataLearnerAI
Rank | Model | Organization | SWE-bench Verified | LiveCodeBench | SWE-Bench Pro - Public | SWE-bench Multilingual | License
1 | Qwen3-Coder-Next | Alibaba | 70.60 | — | 44.30 | — | Free commercial
2 | Pangu Embedded | Huawei | — | 67.10 | — | — | Free commercial
3 | Qwen3-8B | Alibaba | — | 61.80 | — | — | Free commercial
4 | Hunyuan-7B | Tencent ARC | — | 57.00 | — | — | Free commercial
5 | Qwen3-4B-Thinking-2507 | Alibaba | — | 55.20 | — | — | Free commercial
6 | GLM-4-9B-Chat | Zhipu AI | — | 51.80 | — | — | Free commercial
7 | Qwen3-4B-2507 | Alibaba | — | 35.10 | — | — | Free commercial

Scores are percentages; — indicates no reported result on that benchmark.

DesignArena Code Category

Elo ratings from anonymous voting on visual front-end code tasks (websites, UI components, games, data viz), run by Arcada Labs.

Updated 2026-05-10

# | Model | Organization | Elo
1 | Claude Opus 4.7 (Thinking) | Anthropic | 1350
2 | Claude Opus 4.6 | Anthropic | 1346
3 | Claude Opus 4.6 (Thinking) | Anthropic | 1344
4 | Kimi K2.6 | Moonshot AI | 1343
5 | GLM 5.1 | Zhipu AI | 1341
6 | Opus 4.7 | Anthropic | 1338
7 | GLM 5 Turbo | Zhipu AI | 1336
8 | Claude Sonnet 4.6 | Anthropic | 1331
9 | GPT-5.5 | OpenAI | 1314
10 | DeepSeek-V4-Pro | DeepSeek-AI | 1313

Source: DesignArena