GPT-5.3 Codex
GPT-5.3 Codex is a coding-focused AI model published by OpenAI on 2026-02-05. It supports a 400K-token context length and is not released under an open-source license.
Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then from benchmark leaderboards, then from third-party evaluators.
| Modality | Input | Output | Cached input | Cached output |
|---|---|---|---|---|
| Text | $1.75 | $14 | $0.175 | -- |
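To make the pricing concrete, here is a minimal sketch of a per-request cost estimate. It assumes the listed prices are USD per 1M tokens (OpenAI's customary billing unit, not stated on this page) and that cached input tokens bill at the cache rate in place of the normal input rate; the function and variable names are illustrative, not part of any API.

```python
# Illustrative cost estimate for a GPT-5.3 Codex request.
# Assumption: listed prices are USD per 1M tokens.
INPUT_PER_M = 1.75          # $ per 1M uncached input tokens
CACHED_INPUT_PER_M = 0.175  # $ per 1M cached input tokens
OUTPUT_PER_M = 14.0         # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimated USD cost of one request, cached tokens billed at the cache rate."""
    uncached = input_tokens - cached_tokens
    cost = (
        uncached * INPUT_PER_M
        + cached_tokens * CACHED_INPUT_PER_M
        + output_tokens * OUTPUT_PER_M
    ) / 1_000_000
    return round(cost, 6)

# Example: a 100K-token prompt with 80K tokens served from cache,
# producing a 2K-token completion.
print(request_cost(100_000, 2_000, cached_tokens=80_000))  # → 0.077
```

Under these assumptions, prompt caching cuts the input cost of the cached portion by 10x, which matters for long agentic coding sessions that resend the same context repeatedly.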
GPT-5.3 Codex's strongest benchmark placements are Terminal Bench 2.0 (rank 3 of 43, score 77.30), IC SWE-Lancer (Diamond) (rank 1 of 8, score 81.40), and SWE-Bench Pro - Public (rank 8 of 36, score 56.80). This page also consolidates core specs, context limits, and API pricing so you can evaluate the model's benchmark results and deployment constraints together.
