Qwen3-235B-A22B-Thinking-2507
Qwen3-235B-A22B-Thinking-2507 is an advanced thinking-mode model built on a Mixture-of-Experts (MoE) architecture with 235B total parameters, of which 22B are active per token. It has 94 layers and 128 experts per MoE layer (8 active), and supports a native context length of 262K tokens. This version brings significantly improved reasoning, achieving leading results among open thinking-mode models on logic, math, science, coding, and academic benchmarks.
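The "128 experts (8 active)" design means a router picks the top 8 of 128 expert networks for each token, so only a fraction of the total parameters run per forward pass. The following is a minimal illustrative sketch of top-k expert routing; it is not the actual Qwen implementation, just a toy demonstration of the selection-and-renormalization step under the assumption of a softmax router:

```python
import math
import random

NUM_EXPERTS = 128   # experts per MoE layer (from the model card)
TOP_K = 8           # active experts per token (from the model card)

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route(router_logits, k=TOP_K):
    """Pick the top-k experts for one token and renormalize their weights.

    Returns a list of (expert_index, weight) pairs whose weights sum to 1;
    the token's output would be the weighted sum of these experts' outputs.
    """
    probs = softmax(router_logits)
    topk = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in topk)
    return [(i, probs[i] / total) for i in topk]

random.seed(0)
logits = [random.gauss(0.0, 1.0) for _ in range(NUM_EXPERTS)]
assignment = route(logits)
print(len(assignment))                             # 8 experts selected
print(round(sum(w for _, w in assignment), 6))     # weights sum to 1.0
```

Because only 8 of 128 experts fire per token, compute scales with the ~22B active parameters rather than the full 235B, which is what makes a model of this total size practical to serve.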