
Kimi K2.5

Multimodal
Moonshot AI

Kimi K2.5 is the latest Mixture-of-Experts (MoE) language model from Moonshot AI, with 1 trillion total parameters, of which 32 billion are active per forward pass. Built on the Kimi K2 architecture, it offers significant improvements in reasoning, coding, agentic capabilities, and multimodal understanding, and supports context lengths of up to 256K tokens.
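The gap between total and active parameters comes from MoE routing: a router scores all experts per token, but only a few experts actually run, so compute scales with the active subset rather than the full parameter count. Below is a minimal, illustrative sketch of top-k expert routing; the expert count, layer shapes, and `top_k` value here are hypothetical toy choices, not K2.5's actual configuration.

```python
import math

def moe_forward(x, experts, router, top_k=2):
    # Score every expert for this token, keep only the top_k,
    # softmax their scores, and mix just those experts' outputs.
    # Experts outside the top_k are never evaluated, which is why
    # only a fraction of total parameters is "active" per token.
    scores = [router(x, i) for i in range(len(experts))]
    top = sorted(range(len(experts)), key=lambda i: scores[i])[-top_k:]
    exps = [math.exp(scores[i]) for i in top]
    total = sum(exps)
    weights = [e / total for e in exps]
    out = [0.0] * len(x)
    for w, i in zip(weights, top):
        y = experts[i](x)
        out = [o + w * v for o, v in zip(out, y)]
    return out

# Toy usage: 8 experts; only the 2 highest-scoring experts run per token.
experts = [lambda x, s=s: [v * s for v in x] for s in range(1, 9)]
router = lambda x, i: float(i)  # toy router that prefers higher-index experts
y = moe_forward([1.0, 2.0], experts, router)
```

In a real MoE transformer the router is a learned linear layer and each expert is a feed-forward block, but the routing principle is the same: per-token compute is bounded by `top_k` experts, not by the total expert count.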

Key Specifications

Parameters
1.0T
Context
256K tokens
Release Date
January 26, 2026
Average Score
91.2%

Timeline

Key dates in the model's history
Announcement
January 26, 2026
Last Update
January 29, 2026

Technical Specifications

Parameters
1.0T
Training Tokens
-
Knowledge Cutoff
-
Family
-
Capabilities
Multimodal, ZeroEval

Benchmark Results

Model performance metrics across various tests and benchmarks

Reasoning

Logical reasoning and analysis
GPQA
Accuracy (self-reported)
87.6%

Other Tests

Specialized benchmarks
AIME 2025
Standard evaluation (self-reported)
96.0%
HMMT 2025
Standard evaluation (self-reported)
95.0%
InfoVQA (test)
Images (self-reported)
93.0%
OCRBench
Images (self-reported)
92.0%
MathVista-Mini
Images (self-reported)
90.0%
OmniDocBench 1.5
Images (self-reported)
89.0%
MMLU-Pro
Accuracy (self-reported)
87.1%
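The "Average Score" of 91.2% shown above appears to be the unweighted mean of the eight self-reported benchmark scores listed on this page; that is an assumption about how the catalog computes it, but the arithmetic checks out:

```python
# Hypothetical reconstruction: average the eight self-reported
# benchmark scores listed on this page (values in percent).
scores = {
    "GPQA": 87.6,
    "AIME 2025": 96.0,
    "HMMT 2025": 95.0,
    "InfoVQA (test)": 93.0,
    "OCRBench": 92.0,
    "MathVista-Mini": 90.0,
    "OmniDocBench 1.5": 89.0,
    "MMLU-Pro": 87.1,
}
average = sum(scores.values()) / len(scores)
print(round(average, 1))  # → 91.2
```

Note that such an unweighted mean mixes benchmarks of very different difficulty and modality, so it is best read as a rough summary rather than a rigorous capability measure.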

License & Metadata

License
MIT
Announcement Date
January 26, 2026
Last Updated
January 29, 2026
