GPT-5 vs Gemini 2.5-Pro
Across 19 shared benchmarks, GPT-5 leads overall: GPT-5 wins 12, Gemini 2.5-Pro wins 7, with 0 ties and an average score difference of +4.93.
GPT-5 · OpenAI · 2025-08-07 · Foundation model
Gemini 2.5-Pro · Google DeepMind · 2025-06-05 · Reasoning model
Benchmarks are grouped by capability and sorted by largest gap within each group; 19 benchmarks are shared by both models.
| Benchmark | GPT-5 | Gemini 2.5-Pro | Diff |
|---|---|---|---|
| AIME2025 | 61.90 | 88.00 (thinking) | -26.10 |
| FrontierMath | 26.30 (Thinking High, with tools) | 11.00 | +15.30 |
| IMO 2025 | 29.00 (thinking) | 15.20 (thinking) | +13.80 |
| FrontierMath - Tier 4 | 12.50 (Thinking High, no tools) | 2.10 (Normal, no tools) | +10.40 |
| IMO 2024 | 11.00 (thinking) | 19.00 (thinking) | -8.00 |
| Simple Bench | 56.70 (high) | 62.40 (thinking) | -5.70 |
| IMO-ProofBench | 59.00 (thinking) | 55.20 (thinking) | +3.80 |
| IMO-ProofBench Advanced | 20.00 (thinking) | 17.60 (thinking) | +2.40 |
| Benchmark | GPT-5 | Gemini 2.5-Pro | Diff |
|---|---|---|---|
| ARC-AGI | 6.00 | 37.00 (thinking) | -31.00 |
| ARC-AGI-2 | 0.00 | 4.90 (thinking) | -4.90 |
| HLE | 6.30 | 21.60 (thinking) | -15.30 |
| GPQA Diamond | 77.80 | 86.40 (thinking) | -8.60 |
| LiveBench | 79.33 (high) | 71.92 | +7.41 |
| Benchmark | GPT-5 | Gemini 2.5-Pro | Diff |
|---|---|---|---|
| τ²-Bench - Telecom | 96.70 (Thinking High, with tools) | 54.00 (thinking, with tools) | +42.70 |
| Benchmark | GPT-5 | Gemini 2.5-Pro | Diff |
|---|---|---|---|
| BrowseComp | 54.90 (thinking, with tools) | 7.80 (thinking, with tools) | +47.10 |
| Benchmark | GPT-5 | Gemini 2.5-Pro | Diff |
|---|---|---|---|
| Terminal-Bench | 43.80 (thinking, with tools) | 25.30 (thinking) | +18.50 |
| Benchmark | GPT-5 | Gemini 2.5-Pro | Diff |
|---|---|---|---|
| SWE-bench Verified | 72.80 (high) | 67.20 (thinking) | +5.60 |
| Benchmark | GPT-5 | Gemini 2.5-Pro | Diff |
|---|---|---|---|
| IF Bench | 73.10 (high) | 49.00 (thinking, with tools) | +24.10 |
| Benchmark | GPT-5 | Gemini 2.5-Pro | Diff |
|---|---|---|---|
| MMMU | 84.20 (high) | 82.00 (thinking) | +2.20 |
| Field | GPT-5 | Gemini 2.5-Pro |
|---|---|---|
| Publisher | OpenAI | Google DeepMind |
| Release date | 2025-08-07 | 2025-06-05 |
| Model type | Foundation model | Reasoning model |
| Architecture | Dense | Dense |
| Parameters | Not disclosed | Not disclosed |
| Context length | 400K | 1M |
| Max output | 131072 | 65536 |
Prices use DataLearner records when available; missing fields are not inferred.
| Item | GPT-5 | Gemini 2.5-Pro |
|---|---|---|
| Text input | $1.25 / 1M tokens | $1.25 / 1M tokens |
| Text output | $10.00 / 1M tokens | $10.00 / 1M tokens |
| Cache read | Not public | $0.125 / 1M tokens |
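As a minimal sketch of how these per-million-token rates translate into a per-request bill: the rates below are copied from the pricing table, while the token counts and the `request_cost` helper are hypothetical examples, not part of any official API.

```python
# Rates from the pricing table above (USD per token).
INPUT_RATE = 1.25 / 1_000_000
OUTPUT_RATE = 10.0 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request at the listed text rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Made-up example: 50K input tokens, 2K output tokens.
print(round(request_cost(50_000, 2_000), 4))  # → 0.0825
```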
On average across the 19 shared benchmarks, GPT-5 scores 4.93 higher.
Largest single-benchmark gap: BrowseComp — GPT-5 54.90 vs Gemini 2.5-Pro 7.80 (+47.10).
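The headline numbers (12 wins, 7 losses, 0 ties, mean gap +4.93) follow directly from the per-benchmark differences in the tables above. A minimal sketch of that aggregation, with the diff values copied from the tables:

```python
# Per-benchmark score differences (GPT-5 minus Gemini 2.5-Pro),
# copied from the comparison tables above.
diffs = {
    "AIME2025": -26.10, "FrontierMath": 15.30, "IMO 2025": 13.80,
    "FrontierMath - Tier 4": 10.40, "IMO 2024": -8.00,
    "Simple Bench": -5.70, "IMO-ProofBench": 3.80,
    "IMO-ProofBench Advanced": 2.40, "ARC-AGI": -31.00,
    "ARC-AGI-2": -4.90, "HLE": -15.30, "GPQA Diamond": -8.60,
    "LiveBench": 7.41, "τ²-Bench - Telecom": 42.70,
    "BrowseComp": 47.10, "Terminal-Bench": 18.50,
    "SWE-bench Verified": 5.60, "IF Bench": 24.10, "MMMU": 2.20,
}

wins = sum(d > 0 for d in diffs.values())    # benchmarks GPT-5 leads
losses = sum(d < 0 for d in diffs.values())  # benchmarks Gemini leads
ties = sum(d == 0 for d in diffs.values())
avg_gap = round(sum(diffs.values()) / len(diffs), 2)

print(wins, losses, ties, avg_gap)  # → 12 7 0 4.93
```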
Page generated from structured model, pricing and benchmark records. No real-time LLM is used to write the prose.