
Gemini 1.0 Pro

Google

Gemini 1.0 Pro is a natural language processing (NLP) model from Google designed for multi-turn text and code chat, as well as code generation. It supports text input and output, making it well suited to natural-language tasks such as complex dialogue and code-snippet generation. The model offers configurable safety settings and supports function calling, but it does not support JSON mode, JSON schema, or system instructions. The latest stable version is gemini-1.0-pro-001, last updated in February 2024.
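As a sketch of what a chat request to this model might look like, the payload below follows the general shape of the public Generative Language REST API (`contents`, `safetySettings`, `generationConfig`); the specific values and the example conversation are illustrative assumptions, not taken from this page:

```python
# Illustrative generateContent-style request payload for gemini-1.0-pro.
# Field names follow the public Generative Language REST API shape;
# the values shown here are assumptions for illustration only.
request = {
    "contents": [
        # Multi-turn chat: alternating user/model turns.
        {"role": "user", "parts": [{"text": "Write a Python function that reverses a string."}]},
        {"role": "model", "parts": [{"text": "def reverse(s):\n    return s[::-1]"}]},
        {"role": "user", "parts": [{"text": "Now make it handle None input."}]},
    ],
    # Configurable safety settings are supported by this model.
    "safetySettings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    ],
    "generationConfig": {
        "maxOutputTokens": 8192,  # the model's output cap is ~8.2K tokens
        "temperature": 0.7,
    },
    # Note: no "systemInstruction" key — gemini-1.0-pro does not support
    # system instructions, JSON mode, or JSON schema.
}
print(sorted(request.keys()))
```

The absence of a system-instruction field is deliberate: prompts that would normally go into a system message have to be folded into the first user turn with this model.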

Key Specifications

Parameters
-
Context
32.8K
Release Date
February 15, 2024
Average Score
48.4%

Timeline

Key dates in the model's history
Announcement
February 15, 2024
Last Update
July 19, 2025
Today
March 25, 2026

Technical Specifications

Parameters
-
Training Tokens
-
Knowledge Cutoff
February 1, 2024
Family
-
Capabilities
Multimodal · ZeroEval

Pricing & Availability

Input (per 1M tokens)
$0.50
Output (per 1M tokens)
$1.50
Max Input Tokens
32.8K
Max Output Tokens
8.2K
Supported Features
Function Calling, Structured Output, Code Execution, Web Search, Batch Inference, Fine-tuning
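At the listed rates ($0.50 per 1M input tokens, $1.50 per 1M output tokens), per-request cost is easy to estimate. The helper below is a hypothetical sketch, assuming "32.8K" and "8.2K" denote 32,768 and 8,192 tokens:

```python
INPUT_PRICE_PER_M = 0.50   # USD per 1M input tokens (from the pricing table)
OUTPUT_PRICE_PER_M = 1.50  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A worst-case request: full 32,768-token input and full 8,192-token output.
cost = estimate_cost(32_768, 8_192)
print(f"${cost:.4f}")  # → $0.0287
```

Even a maximal request costs under three cents at these rates, which is consistent with the model's positioning as a low-cost option.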

Benchmark Results

Model performance metrics across various tests and benchmarks

General Knowledge

Tests on general knowledge and understanding
MMLU
Accuracy (Self-reported)
71.8%

Mathematics

Mathematical problems and computations
MATH
Accuracy (Verified)
32.6%

Reasoning

Logical reasoning and analysis
GPQA
Accuracy (Verified)
27.9%

Multimodal

Working with images and visual data
MathVista
Accuracy (Verified)
46.6%
MMMU
Accuracy (Verified)
47.9%

Other Tests

Specialized benchmarks
BIG-Bench
Accuracy (Verified)
75.0%
EgoSchema
Accuracy (Self-reported)
55.7%
FLEURS
Accuracy (Verified)
6.4%
WMT23
Accuracy (Verified)
71.7%
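The "Average Score" reported in the Key Specifications section (48.4%) appears to be the unweighted mean of the nine benchmark results listed here, which a quick check confirms:

```python
# Benchmark results as listed on this page.
scores = {
    "MMLU": 71.8, "MATH": 32.6, "GPQA": 27.9,
    "MathVista": 46.6, "MMMU": 47.9, "BIG-Bench": 75.0,
    "EgoSchema": 55.7, "FLEURS": 6.4, "WMT23": 71.7,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.1f}%")  # → 48.4%
```

Note that the unweighted mean mixes verified and self-reported numbers and very different task types (the 6.4% FLEURS speech score pulls the average down considerably), so it is a rough summary rather than a like-for-like comparison metric.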

License & Metadata

License
proprietary
Announcement Date
February 15, 2024
Last Updated
July 19, 2025

Similar Models


Recommendations are based on similarity of characteristics: developer organization, multimodality, parameter size, and benchmark performance. Choose a model to compare or go to the full catalog to browse all available AI models.