


GPT-5.3 Codex

Coding model

Release date: 2026-02-05 · Updated: 2026-03-08
Parameters: Not disclosed
Context length: 400K
Chinese support: Supported
Reasoning ability: No data

GPT-5.3 Codex is a coding model published by OpenAI on 2026-02-05. It offers a 400K-token context length and is not released under an open-source license.

Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then from benchmark leaderboards, then from third-party evaluators. Learn about our data methodology.


Model basics

Reasoning traces: Supported
Thinking modes: Low · Medium (default) · High · Extra-High
Context length: 400K tokens
Max output length: 128,000 tokens
Model type: Coding model
Release date: 2026-02-05
Model file size: No data
MoE architecture: No
Total params / Active params: No data / N/A
Knowledge cutoff: No data
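The context figures above imply a simple prompt budget. As a rough sketch, assuming the prompt and completion share one 400K-token window (the common convention; this helper is illustrative, not an official API):

```python
# Rough prompt-budget arithmetic from the specs above:
# a 400K-token context window and a 128K-token max output.
CONTEXT_LIMIT = 400_000   # total tokens the window can hold
MAX_OUTPUT = 128_000      # maximum completion length

def max_prompt_tokens(reserved_output: int = MAX_OUTPUT) -> int:
    """Largest prompt that still leaves room for the reserved completion,
    assuming prompt and completion share the context window."""
    return CONTEXT_LIMIT - reserved_output

print(max_prompt_tokens())        # 272000 with the full 128K output reserved
print(max_prompt_tokens(16_000))  # 384000 when only 16K output is reserved
```

Reserving less output headroom frees proportionally more of the window for the prompt.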

Open source & experience

Code license: Not open source
Weights license: Not open source
GitHub repo: Link unavailable
Hugging Face: Link unavailable
Live demo: https://chatgpt.com

Official resources

Paper: Introducing GPT-5.3-Codex
DataLearnerAI blog: No blog post yet

API details

API speed: 4/5

💡 Default unit: $/1M tokens. If a vendor uses other units, follow its published pricing.

Standard pricing:
Modality | Input | Output
Text | $1.75 | $14.00

Cached pricing:
Modality | Input cache | Output cache
Text | $0.175 | --
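As an illustration of the rates above (listed in $/1M tokens), the cost of a single request can be estimated. The helper below is a hypothetical sketch built from the table's figures, not part of any DataLearner or OpenAI API:

```python
# Estimate the dollar cost of one request at the listed GPT-5.3 Codex rates.
# All rates are $/1M tokens, taken from the pricing table above.
INPUT_RATE = 1.75          # standard input
OUTPUT_RATE = 14.00        # standard output
CACHED_INPUT_RATE = 0.175  # cached input (10% of the standard input rate)

def request_cost(input_tokens, output_tokens, cached_input_tokens=0):
    """Estimated cost in dollars, billing cache hits at the cached rate."""
    billable_input = input_tokens - cached_input_tokens
    return (billable_input * INPUT_RATE
            + cached_input_tokens * CACHED_INPUT_RATE
            + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: a 50K-token prompt of which 40K are cache hits, plus 2K output.
cost = request_cost(50_000, 2_000, cached_input_tokens=40_000)
print(f"${cost:.4f}")  # $0.0525
```

Because cached input is billed at a tenth of the standard rate, repeated long prompts (e.g. a fixed system prompt plus codebase context) become much cheaper once cached.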

Benchmark Results

GPT-5.3 Codex's strongest current benchmark results are on Terminal Bench 2.0 (rank 3 / 43, score 77.30), IC SWE-Lancer (Diamond) (rank 1 / 8, score 81.40), and SWE-Bench Pro - Public (rank 8 / 36, score 56.80). This page also consolidates core specs, context limits, and API pricing so you can evaluate the model from benchmark results and deployment constraints together.


Coding and Software Engineering (2 evaluations)

Benchmark / mode | Score | Rank / total
IC SWE-Lancer (Diamond) · Thinking Mode | 81.40 | 1 / 8
SWE-Bench Pro - Public · Thinking Mode | 56.80 | 8 / 36

AI Agent - Tool Usage (1 evaluation)

Benchmark / mode | Score | Rank / total
Terminal Bench 2.0 | 77.30 | 3 / 43

Compare with other models

No curated comparisons for this model yet.

Want a custom combination? Open the compare tool


Publisher

OpenAI
