AI Model Leaderboards
Live rankings across ARC-AGI-2, HLE, AIME 2025, SWE-bench Verified, and more — browse composite scores or drill into math, coding, and agent categories.
As of 2026-05, the AA Intelligence Index leaders include GPT-5.5 (xhigh), GPT-5.5 (high), and Opus 4.7 (max), based on 10 standardized capability benchmarks.
On the user-preference side, LMArena Text Generation currently ranks Opus 4.7 (thinking), Claude Opus 4.6 (thinking), and Claude Opus 4.6 near the top via anonymous A/B voting.
Scroll down for per-benchmark breakdowns in math, coding, and agent categories. See Data Methodology for scoring details, or browse LLM Blogs for in-depth commentary.
Composite Rankings
There is no single, universally agreed-upon comprehensive AI model ranking, so we selected two representative leaderboards that approach the question from different angles. Artificial Analysis Intelligence Index aggregates scores from 10 standardized benchmarks (coding, math, reasoning, etc.) to measure objective capability. LMArena (formerly Chatbot Arena) ranks models by Elo ratings derived from anonymous crowd-sourced A/B voting, reflecting real-world user preference. Together they offer both an objective and a subjective perspective.
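For intuition, here is a minimal sketch of how Elo-style ratings can be derived from pairwise A/B votes. This is only the textbook Elo update; LMArena's production pipeline uses its own statistical model, so the K-factor, starting rating, and update rule below are illustrative assumptions, not its actual method.

```python
# Sketch: Elo-style ratings from anonymous head-to-head votes.
# Parameters (K=32, start at 1000) are illustrative, not LMArena's.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after one A/B vote."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# Example: two models start at 1000; model A wins one anonymous vote.
a, b = update_elo(1000.0, 1000.0, a_won=True)
print(round(a, 1), round(b, 1))  # 1016.0 984.0
```

Run over many thousands of votes, updates like this converge toward a stable ordering, which is why crowd-sourced preference rankings become more reliable as vote counts grow.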
AA Intelligence Index
Full ranking
Composite of 10 standardized benchmarks across coding, math, science, reasoning, and agentic tasks.
Updated 2026-05-10
LMArena Text Generation
Full ranking
Elo ratings from anonymous crowdsourced A/B voting, reflecting real user preference for response quality.
Updated 2026-05-07


Per-Benchmark Rankings
Filter by math, coding, agent, and more. Switch benchmarks below or jump into a category leaderboard for the full ranking. View all benchmarks.
LLM Performance Results
Data source: DataLearnerAI
Leaderboard FAQ
Where does the leaderboard data come from?
Scores are aggregated from primary sources: official model cards, technical reports, papers, vendor blog posts, and reproducible third-party evaluations. Each row links back to the underlying model detail page where the source is cited.
Why do scores for the same model differ across benchmarks?
Each benchmark measures a different capability — reasoning (HLE, ARC-AGI-2), math (AIME, FrontierMath), coding (SWE-bench Verified), agent use (τ²-Bench), and so on. A model tuned for one capability may perform very differently on another, which is exactly why we surface per-benchmark scores rather than a single number.
How often is the leaderboard updated?
Data is revalidated every 5 minutes, and new models or evaluation results are added as soon as they are published. The "Updated on" indicator at the top of the page reflects the most recent data refresh.
How should I read the composite ranking?
The composite view aggregates a model's standing across multiple core benchmarks. It is a useful first filter, but for production decisions you should drill into the specific benchmark closest to your workload — for example, SWE-bench Verified for coding agents, or τ²-Bench for tool-use scenarios.
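As a rough illustration, a composite can be read as a normalized average over benchmarks. The benchmark names, score ceilings, and equal weighting in the sketch below are placeholder assumptions for explanation only, not the actual AA Intelligence Index formula.

```python
# Sketch: reading a composite as a normalized average of per-benchmark scores.
# Benchmarks, ceilings, and equal weights are illustrative placeholders.

def composite_score(scores: dict[str, float], max_scores: dict[str, float]) -> float:
    """Average of per-benchmark scores, each normalized by its maximum."""
    normalized = [scores[name] / max_scores[name] for name in scores]
    return sum(normalized) / len(normalized)

model = {"SWE-bench Verified": 72.0, "AIME 2025": 92.0, "HLE": 31.0}
ceilings = {"SWE-bench Verified": 100.0, "AIME 2025": 100.0, "HLE": 100.0}
print(f"{composite_score(model, ceilings):.3f}")  # 0.650
```

The takeaway: a strong composite can hide a weak score on the one benchmark that matters for your workload, which is why the per-benchmark drill-down exists.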
How do I compare an open-source model with a closed API model?
Use the license filter at the top to mix open and closed models in the same view, then look at the same benchmark column for both. Beyond raw scores, consider total cost of ownership: API pricing for closed models vs. self-hosting cost for open weights.
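For a back-of-the-envelope comparison, the sketch below contrasts per-token API billing with monthly GPU rental. Every price, token volume, and GPU count is a hypothetical placeholder, not real vendor pricing; substitute your own figures.

```python
# Sketch: rough total-cost-of-ownership comparison, API vs. self-hosting.
# All numbers below are hypothetical placeholders, not real vendor pricing.

def api_monthly_cost(tokens_per_month: float, price_per_million: float) -> float:
    """Cost of a closed API model billed per million tokens."""
    return tokens_per_month / 1_000_000 * price_per_million

def self_host_monthly_cost(gpu_hourly_rate: float, gpus: int, hours: float = 730) -> float:
    """Cost of running open weights on rented GPUs for a month."""
    return gpu_hourly_rate * gpus * hours

api = api_monthly_cost(tokens_per_month=2e9, price_per_million=10.0)  # $20,000
hosted = self_host_monthly_cost(gpu_hourly_rate=4.0, gpus=8)          # $23,360
print(f"API: ${api:,.0f}  Self-host: ${hosted:,.0f}")
```

At low volumes the API usually wins on cost and operational simplicity; self-hosting tends to pay off only once utilization is high and the engineering overhead is already budgeted.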