DeepSeek-V3.2 (Thinking) vs GLM-4.7-Flash: Specs & Benchmark Comparison

| Characteristic | DeepSeek-V3.2 (Thinking) | GLM-4.7-Flash |
|---|---|---|
| Company | DeepSeek | Zhipu AI |
| Release Date | November 30, 2025 | January 18, 2026 |
| Parameters | 685B | 30B |
| Multimodal | No | No |
| Context (input) | 131K | 128K |
| Context (output) | 66K | 16K |
| Input Price / 1M | $0.28 | $0.07 |
| Output Price / 1M | $0.42 | $0.40 |
| Average Score | 0.9 | 0.6 |
| **Benchmarks** | | |
| GPQA | 0.8 | 0.8 |
| AIME 2025 | 0.9 | 0.9 |

Visual Benchmark Comparison

[Chart: GPQA — 0.8 vs 0.8; AIME 2025 — 0.9 vs 0.9]

Verdict

GLM-4.7-Flash leads in 2 out of 4 comparison categories.

Overall Performance

The average scores differ noticeably: DeepSeek-V3.2 (Thinking) — 0.9, GLM-4.7-Flash — 0.6, giving DeepSeek-V3.2 (Thinking) the edge in overall performance.

API Cost

GLM-4.7-Flash is 4x cheaper on input: $0.07/1M vs $0.28/1M tokens. Output prices are nearly equal: $0.40/1M vs $0.42/1M.
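A quick sketch of how the listed prices translate into per-request cost. The token volumes below are hypothetical; the per-1M rates come from the comparison table above.

```python
# USD per 1M tokens (input, output), taken from the spec table above
PRICES = {
    "DeepSeek-V3.2 (Thinking)": (0.28, 0.42),
    "GLM-4.7-Flash": (0.07, 0.40),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed per-1M-token rates."""
    inp_rate, out_rate = PRICES[model]
    return input_tokens / 1e6 * inp_rate + output_tokens / 1e6 * out_rate

# Hypothetical workload: 100K input tokens, 10K output tokens per request
for model in PRICES:
    print(f"{model}: ${request_cost(model, 100_000, 10_000):.4f}")
```

On this input-heavy workload the gap narrows to roughly 3x ($0.0322 vs $0.0110 per request), because the output prices are almost identical; output-heavy workloads narrow it further.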

Context Window

DeepSeek-V3.2 (Thinking) supports a larger context: 131K vs 128K tokens.

Recency

GLM-4.7-Flash is newer: released January 18, 2026 vs November 30, 2025.

The DeepSeek-V3.2 (Thinking) and GLM-4.7-Flash comparison is updated for 2026. Data includes benchmark results, API pricing, context window size, and other specifications. For more detail, visit the DeepSeek-V3.2 (Thinking) or GLM-4.7-Flash page, or see the complete list of AI model comparisons.