Benchmarks are grouped by capability and sorted by the largest gap within each group; the two models share 11 benchmarks. Each score cell lists the score, the leaderboard rank, and the reasoning mode used; Diff is Opus 4.7 minus GPT-5.4.
| Benchmark | Opus 4.7 | GPT-5.4 | Diff |
|---|---|---|---|
| HLE | 54.70 (6/149, Extended, with tools) | 52.10 (11/149, extra-high reasoning, with tools) | +2.60 |
| GPQA Diamond | 94.20 (4/175, Extended, no tools) | 92.80 (9/175, extra-high reasoning, no tools) | +1.40 |
| ARC-AGI-2 | 75.80 (9/58, Highest, no tools) | 77.10 (7/58, Normal, no tools) | -1.30 |
| ARC-AGI | 93.50 (9/65, Thinking High, no tools) | 93.70 (7/65, Normal, no tools) | -0.20 |
| ARC-AGI-3 | 0 (5/6, Thinking High, no tools) | 0 (4/6, Thinking High, no tools) | — |
| Benchmark | Opus 4.7 | GPT-5.4 | Diff |
|---|---|---|---|
| Terminal Bench 2.0 | 69.40 (5/43, Extended, with tools) | 75.10 (4/43, extra-high reasoning, with tools) | -5.70 |
| OSWorld-Verified | 78 (3/14, Extended, with tools) | 75 (4/14, extra-high reasoning, with tools) | +3 |
| Benchmark | Opus 4.7 | GPT-5.4 | Diff |
|---|---|---|---|
| FrontierMath - Tier 4 | 22.90 (12/80, extra-high reasoning, no tools) | 27.10 (11/80, extra-high reasoning, no tools) | -4.20 |
| FrontierMath | 43.80 (6/60, extra-high reasoning, no tools) | 47.60 (5/60, extra-high reasoning, no tools) | -3.80 |
| Benchmark | Opus 4.7 | GPT-5.4 | Diff |
|---|---|---|---|
| BrowseComp | 79.30 (11/43, Extended, with tools) | 82.70 (9/43, extra-high reasoning, with tools) | -3.40 |
| Benchmark | Opus 4.7 | GPT-5.4 | Diff |
|---|---|---|---|
| SWE-Bench Pro - Public | 64.30 (2/36, Extended, with tools) | 57.70 (6/36, extra-high reasoning, no tools) | +6.60 |
| Field | Opus 4.7 | GPT-5.4 |
|---|---|---|
| Publisher | Anthropic | OpenAI |
| Release date | 2026-04-16 | 2026-03-05 |
| Model type | Reasoning model | Multimodal model |
| Architecture | Dense | Dense |
| Parameters | Not public | Not public |
| Context length | 1M tokens | 1M tokens |
| Max output | 131,072 tokens | 128,000 tokens |
Prices use DataLearner records when available; missing fields are not inferred.
| Item | Opus 4.7 | GPT-5.4 |
|---|---|---|
| Text input | $5 / 1M tokens | $2.5 / 1M tokens |
| Text output | $25 / 1M tokens | $15 / 1M tokens |
| Cache read | $0.5 / 1M tokens | Not public |
| Cache write | $6.25 / 1M tokens | $0.25 / 1M tokens |
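As a rough illustration of how the per-token list prices compare in practice, the sketch below prices a hypothetical workload of 1M input tokens and 200K output tokens using the figures from the table. The workload sizes are arbitrary assumptions, and cache pricing is ignored since those records are incomplete.

```python
# Illustrative cost comparison using the list prices above (USD per 1M tokens).
PRICES = {
    "Opus 4.7": {"input": 5.0, "output": 25.0},
    "GPT-5.4": {"input": 2.5, "output": 15.0},
}

def workload_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Text-only cost for one workload; cache reads/writes are not modeled."""
    p = PRICES[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

for model in PRICES:
    # Hypothetical workload: 1M input tokens, 200K output tokens.
    cost = workload_cost(model, input_tokens=1_000_000, output_tokens=200_000)
    print(f"{model}: ${cost:.2f}")
```

At these sizes the output side dominates less than the headline output price suggests: the input gap ($5 vs $2.5) and the output gap ($25 vs $15) both roughly double the cost for Opus 4.7 on this particular mix.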
On average across the 11 shared benchmarks, GPT-5.4 scores 0.45 higher.
Largest single-benchmark gap: SWE-Bench Pro - Public — Opus 4.7 64.30 vs GPT-5.4 57.70 (+6.60).
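The 0.45 average can be reproduced directly from the Diff column. A minimal sketch, assuming the unscored ARC-AGI-3 row counts as a zero difference (the only reading under which the stated 11-benchmark average comes out):

```python
# Diff values (Opus 4.7 minus GPT-5.4) taken from the tables above.
# ARC-AGI-3 has no score differential, so it is assumed to contribute 0.
diffs = {
    "HLE": 2.60, "GPQA Diamond": 1.40, "ARC-AGI-2": -1.30,
    "ARC-AGI": -0.20, "ARC-AGI-3": 0.0,
    "Terminal Bench 2.0": -5.70, "OSWorld-Verified": 3.0,
    "FrontierMath - Tier 4": -4.20, "FrontierMath": -3.80,
    "BrowseComp": -3.40, "SWE-Bench Pro - Public": 6.60,
}

mean_diff = sum(diffs.values()) / len(diffs)  # negative means GPT-5.4 ahead
print(f"GPT-5.4 scores {-mean_diff:.2f} higher on average")

largest = max(diffs, key=lambda name: abs(diffs[name]))
print(f"Largest gap: {largest} ({diffs[largest]:+.2f})")
```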
Page generated from structured model, pricing and benchmark records. No real-time LLM is used to write the prose.