© 2026 DataLearner AI. DataLearner curates industry data and case studies so researchers, enterprises, and developers can rely on trustworthy intelligence.


Qwen3-235B-A22B-Thinking

Reasoning model

Qwen3-235B-A22B-Thinking-2507

Release date: 2025-07-30 · Updated: 2025-08-11 11:16:18
Parameters: 235B total (22B active)
Context length: 256K tokens
Chinese support: Supported

Qwen3-235B-A22B-Thinking-2507 is a reasoning model published by Alibaba on 2025-07-30. It uses a mixture-of-experts architecture with 235B total parameters (about 22B active per token), supports a 256K-token context length, and is released under the Apache 2.0 license; the listed model file size is about 31.17GB.
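The "A22B" suffix reflects the MoE split: 235B total parameters, of which roughly 22B are active per token. A back-of-the-envelope sketch of what that split implies for weight storage and per-token compute (illustrative only; real deployment footprints depend on quantization and runtime overhead):

```python
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params_billion * bytes_per_param  # 1e9 params * bytes, over 1e9 bytes/GB

TOTAL_B, ACTIVE_B = 235.0, 22.0

# All experts must be resident in memory to serve arbitrary tokens...
print(f"bf16, all weights resident: ~{weights_gb(TOTAL_B, 2):.0f} GB")   # ~470 GB
# ...but each token only touches the active subset, which bounds per-token compute.
print(f"bf16, active per token:     ~{weights_gb(ACTIVE_B, 2):.0f} GB")  # ~44 GB
```

This is why MoE models can match much larger dense models in quality while keeping inference cost closer to that of their active-parameter count.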

Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then from benchmark leaderboards, then from third-party evaluators. Learn about our data methodology.


Model basics

Reasoning traces: Supported
Thinking modes: Not supported (this release runs in thinking mode only; no hybrid mode switching)
Context length: 256K tokens
Max output length: 16,384 tokens
Model type: Reasoning model
Release date: 2025-07-30
Model file size: 31.17GB
MoE architecture: Yes
Total params / Active params: 235B / 22B
Knowledge cutoff: No data
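Because the model emits reasoning traces, client code usually needs to separate the trace from the final answer. A minimal sketch, assuming the Qwen3-style convention where the trace ends at a closing </think> tag (the helper name is ours, not an official API):

```python
def split_reasoning(text: str) -> tuple[str, str]:
    """Split a completion into (reasoning, answer).

    Assumes the Qwen3 thinking convention: reasoning precedes a closing
    </think> tag; an opening <think> tag may or may not be present.
    """
    marker = "</think>"
    if marker not in text:
        return "", text.strip()  # no trace emitted
    reasoning, answer = text.split(marker, 1)
    reasoning = reasoning.replace("<think>", "", 1)
    return reasoning.strip(), answer.strip()

print(split_reasoning("<think>4 is even.</think>Yes, 4 is even."))
# → ('4 is even.', 'Yes, 4 is even.')
```

Handling the tag-absent case matters because some serving stacks strip the trace server-side before returning the completion.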

Open source & experience

Code license: Apache 2.0
Weights license: Apache 2.0 (free for commercial use)
GitHub repo: https://github.com/QwenLM/Qwen3
Hugging Face: https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507
Live demo: https://chat.qwen.ai/

Official resources

Paper: Qwen3: Think Deeper, Act Faster
DataLearnerAI blog: No blog post yet

API details

API speed: 3/5

💡 Default unit: $/1M tokens. If a vendor uses other units, follow their published pricing.

Standard pricing
Modality | Input | Output
Text | $0.2 | $2.4
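At these rates, per-request cost is dominated by output tokens. A quick sketch of the arithmetic (prices taken from the table above; token counts are illustrative):

```python
INPUT_PER_M, OUTPUT_PER_M = 0.20, 2.40  # USD per 1M tokens (text modality)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at the listed standard pricing."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# 10K prompt tokens plus a 2K-token completion:
print(f"${request_cost(10_000, 2_000):.4f}")  # → $0.0068
```

Note that for thinking models, the reasoning trace is typically billed as output, so long traces can make output spend several times larger than the visible answer would suggest.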

Benchmark Results

Qwen3-235B-A22B-Thinking's strongest current benchmark results are Creative Writing (rank 5 of 23, score 86.10), MMLU Pro (32 of 124, 84.40), and AIME2025 (33 of 106, 92.30). This page also consolidates core specs, context limits, and API pricing so you can evaluate the model on benchmark results and deployment constraints together.
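Raw ranks are hard to compare across leaderboards of different sizes; normalizing to the share of entries a model outranks puts them on one scale. A small helper using the ranks quoted above (our own convention, not DataLearner's):

```python
def pct_outranked(rank: int, total: int) -> float:
    """Percentage of leaderboard entries this model ranks above (higher is better)."""
    return 100.0 * (total - rank) / (total - 1) if total > 1 else 100.0

for name, (rank, total) in {
    "Creative Writing": (5, 23),
    "MMLU Pro": (32, 124),
    "AIME2025": (33, 106),
}.items():
    print(f"{name}: outranks {pct_outranked(rank, total):.1f}% of entries")
```

On this scale, 5/23 on Creative Writing (~82% outranked) is a stronger relative showing than 33/106 on AIME2025 (~70%), despite the lower absolute score.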

General Knowledge (4 evaluations)

Benchmark | Mode | Score | Rank/total
MMLU Pro | Thinking Mode | 84.40 | 32 / 124
GPQA Diamond | Thinking Mode | 81.10 | 64 / 175
LiveBench | Thinking Mode | 63.42 | 39 / 52
HLE | Thinking Mode | 18.20 | 101 / 149

Coding and Software Engineering (1 evaluation)

Benchmark | Mode | Score | Rank/total
LiveCodeBench | Thinking Mode | 74.10 | 39 / 118

Math and Reasoning (3 evaluations)

Benchmark | Mode | Score | Rank/total
AIME2025 | Thinking Mode | 92.30 | 33 / 106
IMO-ProofBench | Thinking Mode | 33.30 | 6 / 16
IMO-ProofBench Advanced | Thinking Mode | 5.20 | 5 / 8

Writing and Creative Capabilities (1 evaluation)

Benchmark | Mode | Score | Rank/total
Creative Writing | Thinking Mode | 86.10 | 5 / 23

Publisher

Alibaba (阿里巴巴)


Compare with other models

No curated comparisons for this model yet.
