Gemini 2.0 Pro Experimental vs GPT-4o (2024-11-20)
Across the 4 shared benchmarks, Gemini 2.0 Pro Experimental leads overall: it wins 4, GPT-4o (2024-11-20) wins 0, with 0 ties and an average score difference of +7.70 points.
Gemini 2.0 Pro Experimental
DeepMind · 2025-02-05 · AI model
GPT-4o (2024-11-20)
OpenAI · 2024-11-20 · AI model
Gemini 2.0 Pro Experimental: 4 wins (100%) · GPT-4o (2024-11-20): 0 wins (0%)
Benchmark scores
Grouped by capability, sorted by largest gap within each. 4 shared benchmarks.
General Knowledge
Gemini 2.0 Pro Experimental leads 2/2

| Benchmark | Gemini 2.0 Pro Experimental | GPT-4o (2024-11-20) | Diff |
|---|---|---|---|
| MMLU Pro | 79.10 (60 / 124) | 77.90 (70 / 124) | +1.20 |
| MMLU | 86.50 (28 / 65) | 85.70 (37 / 65) | +0.80 |
Common Sense
Gemini 2.0 Pro Experimental leads 1/1

| Benchmark | Gemini 2.0 Pro Experimental | GPT-4o (2024-11-20) | Diff |
|---|---|---|---|
| SimpleQA | | | |
Specs
| Field | Gemini 2.0 Pro Experimental | GPT-4o (2024-11-20) |
|---|---|---|
| Publisher | DeepMind | OpenAI |
| Release date | 2025-02-05 | 2024-11-20 |
| Model type | AI model | AI model |
| Architecture | Dense | Dense |
| Parameters | Not available | Not available |
| Context length (tokens) | 2M | 128K |
| Max output | 8192 | Not available |
Summary
- Gemini 2.0 Pro Experimental leads in: General Knowledge (2/2), Common Sense (1/1), Math and Reasoning (1/1)
- On average across the 4 shared benchmarks, Gemini 2.0 Pro Experimental scores 7.70 points higher.
- Largest single-benchmark gap is on MATH: Gemini 2.0 Pro Experimental 91.80 vs GPT-4o (2024-11-20) 68.50 (+23.30).
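
As a rough illustration of how these headline numbers fall out of the benchmark records, the Python sketch below recomputes win counts, the average score difference, and the largest single-benchmark gap from the scores shown on this page. The record layout is an assumption, not the site's actual schema, and SimpleQA is omitted because its scores are not listed above, so the average printed here will not match the reported +7.70 exactly.

```python
# Illustrative sketch: derive the page's aggregate stats from benchmark records.
# The `records` layout is assumed; scores are the ones shown in the tables above.
records = [
    # (benchmark, capability, gemini_score, gpt4o_score)
    ("MMLU Pro", "General Knowledge", 79.10, 77.90),
    ("MMLU", "General Knowledge", 86.50, 85.70),
    ("MATH", "Math and Reasoning", 91.80, 68.50),
    # SimpleQA is also shared, but its scores are not listed in this section.
]

diffs = [gemini - gpt4o for _, _, gemini, gpt4o in records]

wins_gemini = sum(d > 0 for d in diffs)   # benchmarks where Gemini scores higher
wins_gpt4o = sum(d < 0 for d in diffs)    # benchmarks where GPT-4o scores higher
ties = sum(d == 0 for d in diffs)

avg_diff = sum(diffs) / len(diffs)        # average score difference across records
name, _, g, o = max(records, key=lambda r: abs(r[2] - r[3]))  # largest gap

print(f"Wins: Gemini {wins_gemini}, GPT-4o {wins_gpt4o}, ties {ties}")
print(f"Average diff: {avg_diff:+.2f}")
print(f"Largest gap: {name}: {g:.2f} vs {o:.2f} ({g - o:+.2f})")
```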
Page generated from structured model, pricing and benchmark records. No real-time LLM is used to write the prose.