Benchmarks are grouped by capability and sorted by the largest score gap within each group. 11 benchmarks are shared by both models.
| Benchmark | GPT-5.5 | Opus 4.7 | Diff |
|---|---|---|---|
| ARC-AGI-2 | 85 · rank 1/58 · Thinking High (no tools) | 75.80 · rank 9/58 · Highest (no tools) | +9.20 |
| HLE | 52.20 · rank 10/149 · Thinking High (with tools) | 54.70 · rank 6/149 · Extended (with tools) | -2.50 |
| ARC-AGI | 95 · rank 3/65 · Thinking High (no tools) | 93.50 · rank 9/65 · Thinking High (no tools) | +1.50 |
| GPQA Diamond | 93.60 · rank 6/175 · Thinking High (no tools) | 94.20 · rank 4/175 · Extended (no tools) | -0.60 |
| ARC-AGI-3 | 0 · rank 2/6 · Thinking High (no tools) | 0 · rank 5/6 · Thinking High (no tools) | — |
| Benchmark | GPT-5.5 | Opus 4.7 | Diff |
|---|---|---|---|
| Terminal Bench 2.0 | 82.70 · rank 1/43 · Thinking High (with tools) | 69.40 · rank 5/43 · Extended (with tools) | +13.30 |
| OSWorld-Verified | 78.70 · rank 2/14 · Thinking High (with tools) | 78 · rank 3/14 · Extended (with tools) | +0.70 |
| Benchmark | GPT-5.5 | Opus 4.7 | Diff |
|---|---|---|---|
| FrontierMath - Tier 4 | 35.40 · rank 7/80 · Thinking High (with tools) | 22.90 · rank 12/80 · Thinking High (no tools) | +12.50 |
| FrontierMath | 51.70 · rank 2/60 · Thinking High (with tools) | 43.80 · rank 6/60 · Thinking High (no tools) | +7.90 |
| Benchmark | GPT-5.5 | Opus 4.7 | Diff |
|---|---|---|---|
| BrowseComp | 84.40 · rank 5/43 · Thinking High (with tools + internet) | 79.30 · rank 11/43 · Extended (with tools) | +5.10 |
| Benchmark | GPT-5.5 | Opus 4.7 | Diff |
|---|---|---|---|
| SWE-Bench Pro - Public | 58.60 · rank 3/36 · Thinking High (with tools) | 64.30 · rank 2/36 · Extended (with tools) | -5.70 |
| Field | GPT-5.5 | Opus 4.7 |
|---|---|---|
| Publisher | OpenAI | Anthropic |
| Release date | 2026-04-23 | 2026-04-16 |
| Model type | Reasoning model | Reasoning model |
| Architecture | Dense | Dense |
| Parameters | Not disclosed | Not disclosed |
| Context length | 1M tokens | 1M tokens |
| Max output | 131,072 tokens | 131,072 tokens |
Prices use DataLearner records when available; missing fields are not inferred.
| Item | GPT-5.5 | Opus 4.7 |
|---|---|---|
| Text input | $0.5 / 1M tokens | $5 / 1M tokens |
| Text output | $30 / 1M tokens | $25 / 1M tokens |
| Cache read | $0.5 / 1M tokens | $0.5 / 1M tokens |
| Cache write | $6.25 / 1M tokens | $6.25 / 1M tokens |
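The per-token prices above translate into per-call costs as follows. A minimal sketch, assuming the listed input/output rates and ignoring cache read/write discounts; the token counts in the example are illustrative, not from the source.

```python
# USD per 1M tokens, taken from the pricing table above.
PRICES = {
    "GPT-5.5": {"input": 0.5, "output": 30.0},
    "Opus 4.7": {"input": 5.0, "output": 25.0},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single call, ignoring cache discounts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Illustrative workload: 10K input tokens, 2K output tokens.
for model in PRICES:
    print(f"{model}: ${call_cost(model, 10_000, 2_000):.4f}")
```

Note that GPT-5.5's lower input price is offset by its higher output price, so the cheaper model depends on the input/output mix of the workload.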
On average across the 11 shared benchmarks, GPT-5.5 scores 3.76 points higher.
Largest single-benchmark gap: Terminal Bench 2.0 — GPT-5.5 82.70 vs Opus 4.7 69.40 (+13.30).
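The two summary figures above can be reproduced from the per-benchmark diffs (GPT-5.5 score minus Opus 4.7 score) listed in the tables. A small sketch; the diff values are copied from the source tables:

```python
# Per-benchmark score differences (GPT-5.5 minus Opus 4.7), from the tables above.
DIFFS = {
    "ARC-AGI-2": 9.20, "HLE": -2.50, "ARC-AGI": 1.50,
    "GPQA Diamond": -0.60, "ARC-AGI-3": 0.0,
    "Terminal Bench 2.0": 13.30, "OSWorld-Verified": 0.70,
    "FrontierMath - Tier 4": 12.50, "FrontierMath": 7.90,
    "BrowseComp": 5.10, "SWE-Bench Pro - Public": -5.70,
}

# Mean gap across all shared benchmarks, and the benchmark with the
# largest gap by absolute value.
mean_diff = sum(DIFFS.values()) / len(DIFFS)
largest = max(DIFFS, key=lambda k: abs(DIFFS[k]))

print(f"mean diff over {len(DIFFS)} benchmarks: {mean_diff:+.2f}")  # +3.76
print(f"largest gap: {largest} ({DIFFS[largest]:+.2f})")
```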
Page generated from structured model, pricing and benchmark records. No real-time LLM is used to write the prose.