GPT-5.5 vs Gemini 3.1 Pro Preview

GPT-5.5 — OpenAI · 2026-04-23 · Reasoning model
Gemini 3.1 Pro Preview — Google DeepMind · 2026-02-20 · Multimodal model

Across 10 shared benchmarks, GPT-5.5 leads overall: GPT-5.5 wins 6, Gemini 3.1 Pro Preview wins 3, with 1 tie and an average score difference of +5.73.

Grouped by capability, sorted by largest gap within each group. Each cell shows score, leaderboard rank, and evaluation configuration. 10 shared benchmarks.
| Benchmark | GPT-5.5 | Gemini 3.1 Pro Preview | Diff |
|---|---|---|---|
| ARC-AGI-2 | 85.00 (#1/58, Thinking High, No Tools) | 77.10 (#7/58, Thinking High, No Tools) | +7.90 |
| HLE | 52.20 (#10/149, Thinking High, With Tools) | 51.40 (#12/149, Thinking High, With Tools) | +0.80 |
| GPQA Diamond | 93.60 (#6/175, Thinking High, No Tools) | 94.30 (#3/175, Thinking High, No Tools) | -0.70 |
| ARC-AGI-3 | 0 (#2/6, Thinking High, No Tools) | 0 (#3/6, Thinking High, No Tools) | — |
| Benchmark | GPT-5.5 | Gemini 3.1 Pro Preview | Diff |
|---|---|---|---|
| FrontierMath - Tier 4 | 35.40 (#7/80, Thinking High, With Tools) | 16.70 (#20/80, Normal, No Tools) | +18.70 |
| FrontierMath | 51.70 (#2/60, Thinking High, With Tools) | 36.90 (#11/60, Thinking High, No Tools) | +14.80 |
| Benchmark | GPT-5.5 | Gemini 3.1 Pro Preview | Diff |
|---|---|---|---|
| τ²-Bench - Telecom | 98.00 (#5/35, Thinking High, With Tools) | 99.30 (#1/35, Thinking High, With Tools) | -1.30 |
| Benchmark | GPT-5.5 | Gemini 3.1 Pro Preview | Diff |
|---|---|---|---|
| BrowseComp | 84.40 (#5/43, Thinking High, With Tools + Internet) | 85.90 (#3/43, Thinking High, With Tools + Internet) | -1.50 |
| Benchmark | GPT-5.5 | Gemini 3.1 Pro Preview | Diff |
|---|---|---|---|
| Terminal Bench 2.0 | 82.70 (#1/43, Thinking High, With Tools) | 68.50 (#6/43, Thinking High, With Tools) | +14.20 |
| Benchmark | GPT-5.5 | Gemini 3.1 Pro Preview | Diff |
|---|---|---|---|
| SWE-Bench Pro - Public | 58.60 (#3/36, Thinking High, With Tools) | 54.20 (#17/36, Thinking High, With Tools) | +4.40 |
| Field | GPT-5.5 | Gemini 3.1 Pro Preview |
|---|---|---|
| Publisher | OpenAI | Google DeepMind |
| Release date | 2026-04-23 | 2026-02-20 |
| Model type | Reasoning model | Multimodal model |
| Architecture | Dense | Dense |
| Parameters | Not public | Not public |
| Context length | 1M tokens | 1M tokens |
| Max output | 131,072 tokens | 32,768 tokens |
Prices use DataLearner records when available; missing fields are not inferred.
| Item | GPT-5.5 | Gemini 3.1 Pro Preview |
|---|---|---|
| Text input | $0.50 / 1M tokens | $2.00 / 1M tokens |
| Text output | $30.00 / 1M tokens | $12.00 / 1M tokens |
| Cache read | $0.50 / 1M tokens | Not public |
| Cache write | $6.25 / 1M tokens | Not public |
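As a rough illustration of how the per-1M-token rates above translate into request cost, the sketch below prices a hypothetical call of 10,000 input tokens and 2,000 output tokens at the listed text rates. The token counts are arbitrary assumptions for illustration, and cached-token discounts are ignored.

```python
# Published per-1M-token text rates from the pricing table above (USD).
RATES = {
    "GPT-5.5": {"input": 0.50, "output": 30.00},
    "Gemini 3.1 Pro Preview": {"input": 2.00, "output": 12.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed text rates (no caching)."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# Hypothetical request: 10,000 input tokens, 2,000 output tokens.
print(f"{request_cost('GPT-5.5', 10_000, 2_000):.4f}")                 # 0.0650
print(f"{request_cost('Gemini 3.1 Pro Preview', 10_000, 2_000):.4f}")  # 0.0440
```

Note the asymmetry: GPT-5.5 is 4x cheaper on input but 2.5x more expensive on output, so which model costs less per request depends on the input/output token mix.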
On average across the 10 shared benchmarks, GPT-5.5 scores 5.73 points higher.
Largest single-benchmark gap: FrontierMath - Tier 4 — GPT-5.5 35.40 vs Gemini 3.1 Pro Preview 16.70 (+18.70).
Page generated from structured model, pricing and benchmark records. No real-time LLM is used to write the prose.