Gemini 1.5 Pro vs Llama 4 Scout: Specs & Benchmark Comparison

Characteristic       Gemini 1.5 Pro   Llama 4 Scout
Company              Google           Meta
Release Date         May 1, 2024      April 5, 2025
Parameters           n/a              109B
Multimodal           Yes              Yes
Context (input)      2.1M tokens      10.0M tokens
Context (output)     8K tokens        10.0M tokens
Input Price / 1M     $2.50            $0.18
Output Price / 1M    $10.00           $0.59
Average Score        0.7              0.7

Benchmarks
MATH                 0.9              0.5
MMLU                 0.9              0.8
MMMU                 0.7              0.7
MGSM                 0.9              0.9
MathVista            0.7              0.7
GPQA                 0.6              0.6
MMLU-Pro             0.8              0.7


Verdict

Llama 4 Scout leads in 3 of the 4 comparison categories below (API cost, context window, and recency); average benchmark score is a tie.

Overall Performance

Both models show comparable average scores: Gemini 1.5 Pro — 0.7, Llama 4 Scout — 0.7.

API Cost

Llama 4 Scout is roughly 14x cheaper for input ($0.18 vs $2.50 per 1M tokens) and roughly 17x cheaper for output ($0.59 vs $10.00 per 1M tokens).
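To see what these rates mean in practice, here is a minimal sketch that computes per-request cost from the listed prices. The workload (100k input tokens, 2k output tokens) is a hypothetical example, not data from the comparison:

```python
# Published per-1M-token prices from the comparison table above.
PRICES = {
    "Gemini 1.5 Pro": {"input": 2.50, "output": 10.00},
    "Llama 4 Scout": {"input": 0.18, "output": 0.59},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical workload: 100k input tokens, 2k output tokens per request.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 100_000, 2_000):.4f}")
```

For this input-heavy workload the gap works out to about 14x, which is why the input-price ratio dominates for long-prompt use cases.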

Context Window

Llama 4 Scout supports a far larger input context: 10M vs 2.1M tokens.
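For a rough sense of scale, this sketch converts the two input windows into megabytes of plain text. The 4-characters-per-token ratio is an assumed average for English text, not a figure from the comparison:

```python
# Input context sizes in tokens, from the comparison above.
CONTEXT_TOKENS = {"Gemini 1.5 Pro": 2_097_152, "Llama 4 Scout": 10_000_000}
CHARS_PER_TOKEN = 4  # assumed heuristic average for English text

for model, tokens in CONTEXT_TOKENS.items():
    mb = tokens * CHARS_PER_TOKEN / 1_000_000
    print(f"{model}: ~{mb:.0f} MB of plain text per prompt")
```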

Recency

Llama 4 Scout is newer: released April 5, 2025 vs May 1, 2024.

Frequently Asked Questions

Which is better for coding — Gemini 1.5 Pro or Llama 4 Scout?
Direct comparison on the SWE-Bench benchmark is not available. We recommend reviewing other metrics on the comparison page.
Which model is cheaper — Gemini 1.5 Pro or Llama 4 Scout?
Llama 4 Scout is cheaper for input: $0.18 per 1M tokens vs $2.50.
Which has a larger context window — Gemini 1.5 Pro or Llama 4 Scout?
Llama 4 Scout supports a larger context: 10,000,000 tokens vs 2,097,152.

The Gemini 1.5 Pro and Llama 4 Scout comparison is updated for 2026. Data includes benchmark results, API pricing, context window size and other specifications. For more detailed information, visit the Gemini 1.5 Pro or Llama 4 Scout page. See also the complete list of AI model comparisons.