GPT-5.4 mini vs GPT-5-mini
Across 3 shared benchmarks, GPT-5.4 mini leads overall: 2 wins to 1, with no ties and an average score difference of +41.77.
GPT-5.4 mini
OpenAI · 2026-03-17 · Reasoning model
GPT-5-mini
OpenAI · 2025-08-07 · Foundation model
GPT-5.4 mini: 2 wins (67%) · GPT-5-mini: 1 win (33%)
Benchmark scores
Grouped by capability, sorted by largest gap within each. 3 shared benchmarks.
General Knowledge
GPT-5.4 mini leads 2/2.
| Benchmark | GPT-5.4 mini | GPT-5-mini | Diff |
|---|---|---|---|
| GPQA Diamond | 88 (rank 29 / 175, very high reasoning effort, no tools) | 0 (rank 173 / 175) | +88 |
| HLE | 41.50 (rank 41 / 149, very high reasoning effort, with tools) | 0 (rank 149 / 149) | +41.50 |
Math and Reasoning
GPT-5-mini leads 1/1.
| Benchmark | GPT-5.4 mini | GPT-5-mini | Diff |
|---|---|---|---|
Specs
| Field | GPT-5.4 mini | GPT-5-mini |
|---|---|---|
| Publisher | OpenAI | OpenAI |
| Release date | 2026-03-17 | 2025-08-07 |
| Model type | Reasoning model | Foundation model |
| Architecture | Dense | Dense |
| Parameters | Not public | Not public |
| Context length | 400K | 400K |
| Max output | 131072 | 131072 |
API pricing
Prices use DataLearner records when available; missing fields are not inferred.
| Item | GPT-5.4 mini | GPT-5-mini |
|---|---|---|
| Text input | $0.75 / 1M tokens | $0.25 / 1M tokens |
| Text output | $4.50 / 1M tokens | $2.00 / 1M tokens |
| Cache read | $4.50 / 1M tokens | $0.025 / 1M tokens |
| Cache write | $0.075 / 1M tokens | Not public |
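As an illustration of how the per-1M-token rates above translate into request cost, here is a minimal sketch. Only the listed text input/output rates are used (cache pricing is ignored), and the token counts in the usage example are hypothetical:

```python
# Estimate the cost of a single request from per-1M-token rates.
# Rates are taken from the pricing table above (USD per 1M tokens).
RATES = {
    "GPT-5.4 mini": {"input": 0.75, "output": 4.50},
    "GPT-5-mini": {"input": 0.25, "output": 2.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request, ignoring cache read/write pricing."""
    r = RATES[model]
    return (input_tokens / 1_000_000) * r["input"] \
         + (output_tokens / 1_000_000) * r["output"]

# Hypothetical request: 100K input tokens, 20K output tokens.
print(round(request_cost("GPT-5.4 mini", 100_000, 20_000), 3))  # 0.165
print(round(request_cost("GPT-5-mini", 100_000, 20_000), 3))    # 0.065
```

At these rates, GPT-5.4 mini costs roughly 2.5x more than GPT-5-mini for the same input/output mix.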
Summary
- GPT-5.4 mini leads in: General Knowledge (2/2)
- GPT-5-mini leads in: Math and Reasoning (1/1)
On average across the 3 shared benchmarks, GPT-5.4 mini scores 41.77 points higher.
Largest single-benchmark gap: GPQA Diamond — GPT-5.4 mini 88 vs GPT-5-mini 0 (+88).
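The +41.77 average can be cross-checked from the per-benchmark diffs. Only the two General Knowledge diffs (+88 and +41.50) are listed above; assuming the average is a simple arithmetic mean over the 3 shared benchmarks, the unlisted Math and Reasoning diff is implied:

```python
# Back out the unlisted third diff from the reported mean of +41.77
# over 3 shared benchmarks (assumes a simple arithmetic mean).
mean_diff = 41.77
known_diffs = [88.0, 41.50]  # GPQA Diamond, HLE
implied_third = 3 * mean_diff - sum(known_diffs)
print(round(implied_third, 2))  # -4.19
```

The implied diff is negative, which is consistent with GPT-5-mini taking the single Math and Reasoning win.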
Page generated from structured model, pricing and benchmark records. No real-time LLM is used to write the prose.