DeepSeek-R1-0528
DeepSeek-R1-0528 is a reasoning model published by DeepSeek-AI on 2025-05-28, with 671B parameters and a 64K-token context length, requiring about 685GB of storage, and released under the MIT License.
Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then from benchmark leaderboards, then from third-party evaluators.
DeepSeek-R1-0528's strongest benchmark rankings are currently MATH-500 (rank 7 of 44, score 98), Creative Writing (rank 4 of 23, score 86.25), and MMLU Pro (rank 23 of 124, score 85). This page also consolidates core specs, context limits, and API pricing so you can evaluate the model on benchmark results and deployment constraints together.

| Modality | Input (USD per 1M tokens) | Output (USD per 1M tokens) |
|---|---|---|
| Text | $0.55 | $2.19 |
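As a quick way to read the table, here is a minimal cost-estimation sketch. It assumes the listed prices are USD per million tokens (a common API billing convention; the unit is an assumption, not stated on this page), and the token counts in the example are hypothetical.

```python
# Hypothetical cost estimate for DeepSeek-R1-0528 API usage,
# assuming the listed prices are USD per 1M tokens (an assumption;
# the pricing table does not state the unit explicitly).
INPUT_PRICE_PER_M = 0.55   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 2.19  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 4,000-token prompt with 16,000 reasoning/output tokens.
# Reasoning models tend to emit long outputs, so output pricing
# usually dominates the bill.
print(round(estimate_cost(4_000, 16_000), 4))  # → 0.0372
```

Note that for reasoning models the intermediate chain-of-thought tokens are typically billed as output, which is why the output rate matters most when budgeting.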