DeepSeek-V3.2 (Thinking) vs MiniMax M2.1: Specs & Benchmark Comparison

| Characteristic | DeepSeek-V3.2 (Thinking) | MiniMax M2.1 |
| --- | --- | --- |
| Company | DeepSeek | MiniMax |
| Release Date | November 30, 2025 | December 22, 2025 |
| Parameters | 685B | — |
| Multimodal | No | No |
| Context (input) | 131K | 1.0M |
| Context (output) | 66K | 100K |
| Input Price / 1M | $0.28 | $0.30 |
| Output Price / 1M | $0.42 | $1.20 |
| Average Score | 0.9 | 0.9 |
| **Benchmarks** | | |
| MMLU-Pro | 0.8 | 0.9 |
| GPQA | 0.8 | 0.8 |

Visual Benchmark Comparison

DeepSeek-V3.2 (Thinking) vs MiniMax M2.1:
MMLU-Pro: 0.8 vs 0.9
GPQA: 0.8 vs 0.8

Verdict

MiniMax M2.1 leads in 2 of the 4 comparison categories below (context window and recency); DeepSeek-V3.2 (Thinking) leads on API cost, and overall performance is a tie.

Overall Performance

Both models show comparable average scores: DeepSeek-V3.2 (Thinking) — 0.9, MiniMax M2.1 — 0.9.

API Cost

DeepSeek-V3.2 (Thinking) is about 2.1x cheaper on a blended basis (average of input and output rates): input $0.28 vs $0.30 and output $0.42 vs $1.20 per 1M tokens.
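The per-request cost follows directly from the listed rates. A minimal sketch, using the prices from the table above; the token counts in the example are hypothetical, not from the source:

```python
# USD per 1M tokens: (input rate, output rate), taken from the comparison table.
PRICES = {
    "DeepSeek-V3.2 (Thinking)": (0.28, 0.42),
    "MiniMax M2.1": (0.30, 1.20),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed per-1M-token rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Hypothetical example: 100K input tokens, 10K output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 100_000, 10_000):.4f}")
```

For this example workload the run prints $0.0322 for DeepSeek-V3.2 (Thinking) and $0.0420 for MiniMax M2.1; the gap widens as the output share of a request grows, since that is where the prices diverge most.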

Context Window

MiniMax M2.1 supports a larger context: 1M vs 131K tokens.

Recency

MiniMax M2.1 is newer: released December 22, 2025 vs November 30, 2025.

Related Comparisons

The DeepSeek-V3.2 (Thinking) and MiniMax M2.1 comparison is updated for 2026. Data includes benchmark results, API pricing, context window size, and other specifications. For more detailed information, visit the DeepSeek-V3.2 (Thinking) or MiniMax M2.1 page. See also the complete list of AI model comparisons.