Claude 3.5 Sonnet vs GLM-4.5: Specs & Benchmark Comparison

| Characteristic | Claude 3.5 Sonnet | GLM-4.5 |
| --- | --- | --- |
| Company | Anthropic | Zhipu AI |
| Release Date | June 21, 2024 | July 27, 2025 |
| Parameters | Undisclosed | 355B |
| Multimodal | Yes | No |
| Context (input) | 200K | 131K |
| Context (output) | 200K | 98K |
| Input Price / 1M tokens | $3.00 | $0.60 |
| Output Price / 1M tokens | $15.00 | $2.20 |
| Average Score | 0.8 | 0.8 |

Benchmarks

| Benchmark | Claude 3.5 Sonnet | GLM-4.5 |
| --- | --- | --- |
| GPQA | 0.6 | 0.8 |
| MMLU-Pro | 0.8 | 0.8 |

Visual Benchmark Comparison

[Bar chart: GPQA — Claude 3.5 Sonnet 0.6 vs GLM-4.5 0.8; MMLU-Pro — 0.8 vs 0.8]

Verdict

GLM-4.5 leads in 2 of the 4 comparison categories below: API cost and recency. Claude 3.5 Sonnet leads in context window size, and the average benchmark scores are tied.

Overall Performance

Both models post an identical average score of 0.8.

API Cost

GLM-4.5 is about 6.4x cheaper on blended pricing: input $0.60 vs $3.00 and output $2.20 vs $15.00 per 1M tokens (5x cheaper on input, roughly 6.8x on output).
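To make the pricing concrete, here is a minimal sketch that computes per-request cost from the listed prices. The 2,000-input / 500-output token workload is an illustrative assumption, not from the source:

```python
# USD per 1M tokens (input, output), taken from the comparison table above
PRICES = {
    "Claude 3.5 Sonnet": (3.00, 15.00),
    "GLM-4.5": (0.60, 2.20),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request with the given token counts."""
    inp, out = PRICES[model]
    return input_tokens / 1e6 * inp + output_tokens / 1e6 * out

# Example workload (assumed): a 2,000-token prompt with a 500-token reply
for model in PRICES:
    print(f"{model}: ${request_cost(model, 2000, 500):.4f}")

# Blended (input + output) price ratio behind the ~6.4x figure
blended_ratio = sum(PRICES["Claude 3.5 Sonnet"]) / sum(PRICES["GLM-4.5"])
print(f"Blended ratio: {blended_ratio:.1f}x")
```

For this sample workload the gap is closer to 5.9x, since short outputs weight the comparison toward input pricing.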

Context Window

Claude 3.5 Sonnet supports a larger context: 200K vs 131K tokens.

Recency

GLM-4.5 is newer: released July 27, 2025 vs June 21, 2024.


Frequently Asked Questions

Which is better for coding — Claude 3.5 Sonnet or GLM-4.5?
Direct comparison on the SWE-Bench benchmark is not available. We recommend reviewing other metrics on the comparison page.
Which model is cheaper — Claude 3.5 Sonnet or GLM-4.5?
GLM-4.5 is cheaper for input: $0.60 per 1M tokens vs $3.00.
Which has a larger context window — Claude 3.5 Sonnet or GLM-4.5?
Claude 3.5 Sonnet supports a larger context: 200,000 tokens vs 131,072.

The Claude 3.5 Sonnet and GLM-4.5 comparison is updated for 2026. Data includes benchmark results, API pricing, context window size, and other specifications. For more detailed information, visit the Claude 3.5 Sonnet or GLM-4.5 page. See also the complete list of AI model comparisons.