# GLM-4.7-Flash vs GPT-5.2 Codex: Specs & Benchmark Comparison
| Characteristic | GLM-4.7-Flash | GPT-5.2 Codex |
|---|---|---|
| Company | Zhipu AI | OpenAI |
| Release Date | January 18, 2026 | January 13, 2026 |
| Parameters | 30B | — |
| Multimodal | No | Yes |
| Context (input) | 128K | 400K |
| Context (output) | 16K | 128K |
| Input Price / 1M | $0.07 | $1.75 |
| Output Price / 1M | $0.40 | $14.00 |
| Average Score | 0.6 | 0.6 |
## Verdict
Both models post the same average benchmark score, so the choice comes down to what your use case weighs most: price, context length, or multimodality.
## Overall Performance
Both models land on the same average score: GLM-4.7-Flash 0.6, GPT-5.2 Codex 0.6.
## API Cost
GLM-4.7-Flash is substantially cheaper: 25x on input ($0.07 vs $1.75 per 1M tokens) and 35x on output ($0.40 vs $14.00 per 1M tokens).
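To see what the pricing gap means for a real workload, here is a minimal cost sketch using the list prices from the table above (the prices are taken from this comparison as-is; check the providers' pricing pages before relying on them):

```python
# USD per 1M tokens, (input, output), as listed in the comparison table.
PRICES = {
    "GLM-4.7-Flash": (0.07, 0.40),
    "GPT-5.2 Codex": (1.75, 14.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one API request at list prices."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example workload: a 10K-token prompt with a 2K-token completion.
glm = request_cost("GLM-4.7-Flash", 10_000, 2_000)    # $0.0015
codex = request_cost("GPT-5.2 Codex", 10_000, 2_000)  # $0.0455
```

For this prompt-heavy example the effective gap is about 30x; workloads with longer completions skew further toward the 35x output-price ratio.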
## Context Window
GPT-5.2 Codex supports a much larger context: 400K input / 128K output tokens, vs 128K / 16K for GLM-4.7-Flash.
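The context gap matters most when deciding whether a document fits in a single request. A rough pre-check can be sketched as follows, using the common ~4-characters-per-token heuristic for English text (an assumption; actual token counts vary by tokenizer and language):

```python
# Input context limits (tokens) from the comparison table.
CONTEXT_LIMITS = {"GLM-4.7-Flash": 128_000, "GPT-5.2 Codex": 400_000}

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits(model: str, text: str, reserved_output: int = 0) -> bool:
    """Check whether a prompt (plus reserved completion room) fits the window."""
    return estimate_tokens(text) + reserved_output <= CONTEXT_LIMITS[model]

# A 600,000-character document (~150K tokens) fits GPT-5.2 Codex's
# window but exceeds GLM-4.7-Flash's 128K limit.
doc = "x" * 600_000
```

For anything beyond a rough check, count tokens with the model's own tokenizer rather than a character heuristic.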
## Recency
Both models were released within a week of each other: GLM-4.7-Flash on January 18, 2026, and GPT-5.2 Codex on January 13, 2026.
This GLM-4.7-Flash vs GPT-5.2 Codex comparison is updated for 2026 and covers benchmark results, API pricing, context window size, and other specifications.