GPT-5.4 mini vs Haiku 4.5
Across 5 shared benchmarks, GPT-5.4 mini leads overall: GPT-5.4 mini wins 3, Haiku 4.5 wins 1, with 1 tie and an average score difference of +13.11.
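The headline +13.11 figure is the mean of the per-benchmark score differences (GPT-5.4 mini minus Haiku 4.5), with a tie contributing zero. A minimal sketch of that computation, using hypothetical benchmark names and scores rather than this page's actual records:

```python
# Sketch: averaging per-benchmark score differences between two models.
# The benchmark names and scores below are hypothetical placeholders,
# not the records behind this page.
pairs = {
    "bench_a": (80.0, 70.0),   # (model_a score, model_b score)
    "bench_b": (60.0, 65.0),
    "bench_c": (50.0, 50.0),   # a tie contributes a diff of 0
}

diffs = [a - b for a, b in pairs.values()]
avg_diff = sum(diffs) / len(diffs)
print(f"{avg_diff:+.2f}")  # mean of +10.00, -5.00, +0.00
```

The same averaging over this page's five shared benchmarks yields the reported +13.11.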
GPT-5.4 mini
OpenAI · 2026-03-17 · Reasoning model
Haiku 4.5
Anthropic · 2025-10-15 · Multimodal model
GPT-5.4 mini: 3 wins (60%) · Ties: 1 (20%) · Haiku 4.5: 1 win (20%)
Benchmark scores
Grouped by capability, sorted by largest gap within each. 5 shared benchmarks.
General Knowledge
GPT-5.4 mini 2/2

| Benchmark | GPT-5.4 mini | Haiku 4.5 | Diff |
|---|---|---|---|
| HLE | 41.50 (rank 41 / 149, very high reasoning effort, with tools) | 4.30 (rank 147 / 149, normal, no tools) | +37.20 |
| GPQA Diamond | 88 (rank 29 / 175, very high reasoning effort, no tools) | 60.50 (rank 135 / 175, normal, no tools) | +27.50 |
Claw-style Agent Evaluation
Haiku 4.5 1/1
Specs
| Field | GPT-5.4 mini | Haiku 4.5 |
|---|---|---|
| Publisher | OpenAI | Anthropic |
| Release date | 2026-03-17 | 2025-10-15 |
| Model type | Reasoning model | Multimodal model |
| Architecture | Dense | Dense |
| Parameters | Undisclosed | Undisclosed |
| Context length | 400K | 200K |
| Max output | 131,072 tokens | 65,536 tokens |
API pricing
Prices use DataLearner records when available; missing fields are not inferred.
| Item | GPT-5.4 mini | Haiku 4.5 |
|---|---|---|
| Text input | $0.75 / 1M tokens | $1.00 / 1M tokens |
| Text output | $4.50 / 1M tokens | $5.00 / 1M tokens |
| Cache read | $4.50 / 1M tokens | $1.25 / 1M tokens |
| Cache write | $0.075 / 1M tokens | $0.10 / 1M tokens |
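The input/output rates above translate directly into a per-request cost estimate. A small sketch, where the 100K-input / 20K-output workload is an arbitrary example and cache read/write pricing is ignored:

```python
# Estimate text-generation cost from the per-1M-token rates listed above.
# Workload size is an arbitrary example; cache read/write is ignored.
RATES = {  # (input $/1M tokens, output $/1M tokens)
    "GPT-5.4 mini": (0.75, 4.50),
    "Haiku 4.5": (1.00, 5.00),
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost for one request at the listed text rates."""
    inp_rate, out_rate = RATES[model]
    return (input_tokens * inp_rate + output_tokens * out_rate) / 1_000_000

for model in RATES:
    print(model, round(cost(model, 100_000, 20_000), 4))
```

At this example workload the two models land close together despite the different per-token rates, since output tokens dominate the cost.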
Summary
- GPT-5.4 mini leads in: General Knowledge (2/2), Coding and Software Engineering (1/1)
- Haiku 4.5 leads in: Claw-style Agent Evaluation (1/1)
- Tied in: Math and Reasoning
On average across the 5 shared benchmarks, GPT-5.4 mini scores 13.11 higher.
Largest single-benchmark gap: HLE — GPT-5.4 mini 41.50 vs Haiku 4.5 4.30 (+37.20).
Page generated from structured model, pricing, and benchmark records. No real-time LLM is used to write the prose.