GLM-4.7-Flash vs GPT-5.2 Codex: Specs & Benchmark Comparison

| Characteristic | GLM-4.7-Flash | GPT-5.2 Codex |
|---|---|---|
| Company | Zhipu AI | OpenAI |
| Release Date | January 18, 2026 | January 13, 2026 |
| Parameters | 30B | — |
| Multimodal | No | Yes |
| Context (input) | 128K | 400K |
| Context (output) | 16K | 128K |
| Input Price / 1M tokens | $0.07 | $1.75 |
| Output Price / 1M tokens | $0.40 | $14.00 |
| Average Score | 0.6 | 0.6 |

Verdict

Both models post the same average benchmark score, so the choice depends on your use case: GLM-4.7-Flash is far cheaper, while GPT-5.2 Codex offers multimodal input and a much larger context window.

Overall Performance

Both models post an identical average score of 0.6.

API Cost

GLM-4.7-Flash is dramatically cheaper: input is $0.07 vs $1.75 per 1M tokens (25x less), and output is $0.40 vs $14.00 per 1M tokens (35x less).
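To see what these per-token prices mean in practice, here is a minimal sketch that estimates the cost of a single request under an assumed workload (the 10K-input / 2K-output token counts are hypothetical, chosen only for illustration):

```python
# Per-1M-token prices taken from the comparison table above.
PRICES = {
    "GLM-4.7-Flash": {"input": 0.07, "output": 0.40},
    "GPT-5.2 Codex": {"input": 1.75, "output": 14.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request, given token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical workload: 10K input tokens, 2K output tokens per request.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.4f}")
```

At this workload the gap works out to roughly 30x per request ($0.0015 vs $0.0455); the exact ratio shifts with your input/output mix, since the two models discount input and output differently.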

Context Window

GPT-5.2 Codex supports a larger context: 400K vs 128K tokens.
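The difference matters when prompts get large. A quick sketch of a pre-flight check, using the input-context limits from the table (token counts here are illustrative; real usage should count tokens with each provider's tokenizer):

```python
# Input-context limits (tokens) from the comparison table above.
CONTEXT = {"GLM-4.7-Flash": 128_000, "GPT-5.2 Codex": 400_000}

def fits(model: str, prompt_tokens: int) -> bool:
    """True if a prompt of the given token length fits the model's input window."""
    return prompt_tokens <= CONTEXT[model]

# A 200K-token prompt (e.g. a large codebase dump) exceeds one window but not the other.
print(fits("GLM-4.7-Flash", 200_000))   # False
print(fits("GPT-5.2 Codex", 200_000))   # True
```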

Recency

Both models were released five days apart: GLM-4.7-Flash on January 18, 2026 and GPT-5.2 Codex on January 13, 2026.


Related Comparisons

The GLM-4.7-Flash vs GPT-5.2 Codex comparison is updated for 2026. Data includes benchmark results, API pricing, context window sizes, and other specifications. For more detail, visit the GLM-4.7-Flash or GPT-5.2 Codex page, or see the complete list of AI model comparisons.