Key Specifications
Parameters
-
Context
-
Release Date
December 17, 2024
Average Score
82.5%
Timeline
Key dates in the model's history
Announcement
December 17, 2024
Last Update
July 19, 2025
Technical Specifications
Parameters
-
Training Tokens
-
Knowledge Cutoff
September 30, 2023
Family
-
Capabilities
Multimodal, ZeroEval
Benchmark Results
Model performance metrics across various tests and benchmarks
Reasoning
Logical reasoning and analysis
GPQA
Diamond, accuracy, Pass@1 • Self-reported
Other Tests
Specialized benchmarks
AIME 2024
Pass@1 accuracy. This metric is used to evaluate models on mathematical tasks: it is the probability that the model solves a task on its first attempt. In each attempt, the model is given the task and generates a solution, which is then automatically checked by a verifier to determine whether the answer is correct. • Self-reported
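The Pass@1 estimate described above is usually computed from many samples per task. A minimal sketch of the standard unbiased pass@k estimator (of which pass@1 is the special case k=1), assuming n samples per task of which c pass the verifier:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one
    of k solutions sampled (without replacement) from n total samples
    is correct, given that c of the n samples passed the verifier."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # samples must contain at least one correct solution.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k=1 this reduces to the fraction of correct samples:
print(pass_at_k(10, 4, 1))  # 0.4
```

Per-task pass@1 values are then averaged over the benchmark to produce the reported score.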
License & Metadata
License
proprietary
Announcement Date
December 17, 2024
Last Updated
July 19, 2025
Similar Models
Model       Developer  Modality    Best score       Released  Price
o3          OpenAI     Multimodal  0.8 (GPQA)       Apr 2025  $2.00/1M tokens
GPT-4.5     OpenAI     Multimodal  0.9 (MMLU)       Feb 2025  $75.00/1M tokens
GPT-5 nano  OpenAI     Multimodal  0.7 (GPQA)       Aug 2025  $0.05/1M tokens
GPT-4       OpenAI     Multimodal  1.0 (ARC)        Jun 2023  $30.00/1M tokens
GPT-4o      OpenAI     Multimodal  0.9 (HumanEval)  May 2024  $2.50/1M tokens
GPT-5 mini  OpenAI     Multimodal  0.8 (GPQA)       Aug 2025  $0.25/1M tokens
GPT-5 High  OpenAI     Multimodal  0.9 (GPQA)       Aug 2025  $2.00/1M tokens
GPT-5       OpenAI     Multimodal  0.9 (HumanEval)  Aug 2025  $1.25/1M tokens
Recommendations are based on similarity of characteristics: developer organization, multimodality, parameter size, and benchmark performance.
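A recommendation scheme like the one described could be sketched as a weighted match over those characteristics. This is a toy illustration under assumed field names and weights, not the site's actual algorithm:

```python
def similarity(a: dict, b: dict) -> float:
    """Toy similarity score over the characteristics the card lists.
    The weights (0.3 / 0.2 / 0.5) and field names are illustrative
    assumptions; the real recommender's formula is not published here."""
    score = 0.0
    score += 0.3 * (a["developer"] == b["developer"])    # same organization
    score += 0.2 * (a["multimodal"] == b["multimodal"])  # matching modality
    # Benchmark closeness: 1.0 when scores are equal, falling off linearly.
    score += 0.5 * max(0.0, 1.0 - abs(a["best_score"] - b["best_score"]))
    return score

# Hypothetical profiles built from the card's own fields:
model_a = {"developer": "OpenAI", "multimodal": True, "best_score": 0.825}
model_b = {"developer": "OpenAI", "multimodal": True, "best_score": 0.8}
print(similarity(model_a, model_b))
```

Candidates would then be ranked by this score and the top few shown as similar models.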