Jamba 1.5 Large
Jamba 1.5 Large is AI21's flagship language model based on the Jamba architecture, which combines Transformer and Mamba layers for efficient long-context processing. It delivers strong performance in reasoning, knowledge, and conversation while supporting extended context windows with high throughput.
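The published Jamba paper describes blocks of eight layers with a 1:7 ratio of attention to Mamba layers; whether Jamba 1.5 Large uses exactly this layout is an assumption here. A minimal sketch of such an interleaving pattern:

```python
def jamba_block_pattern(num_layers: int, attn_every: int = 8, attn_offset: int = 0) -> list:
    """Sketch of a hybrid layer layout: one attention layer per `attn_every`
    layers, the rest Mamba. The 1:7 ratio follows the Jamba paper; the
    offset of the attention layer within each block is an assumption."""
    return [
        "attention" if i % attn_every == attn_offset else "mamba"
        for i in range(num_layers)
    ]

pattern = jamba_block_pattern(8)
print(pattern.count("attention"), pattern.count("mamba"))  # 1 7
```

The point of the hybrid is that Mamba layers carry most of the depth (cheap, linear-time in sequence length), while the sparse attention layers preserve in-context retrieval quality.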
Key Specifications
Parameters
398.0B
Context
256.0K
Release Date
August 22, 2024
Average Score
65.5%
Timeline
Key dates in the model's history
Announcement
August 22, 2024
Last Update
July 19, 2025
Technical Specifications
Parameters
398.0B
Training Tokens
-
Knowledge Cutoff
March 5, 2024
Family
-
Capabilities
Multimodal • ZeroEval
Pricing & Availability
Input (per 1M tokens)
$2.00
Output (per 1M tokens)
$8.00
Max Input Tokens
256.0K
Max Output Tokens
256.0K
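Given the listed per-1M-token rates ($2.00 input, $8.00 output), the cost of a request is straightforward to estimate. A small helper, with the page's prices as defaults:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float = 2.00, output_price: float = 8.00) -> float:
    """Estimate a single request's cost in USD.
    Prices are per 1M tokens, taken from the listing above."""
    return (input_tokens / 1_000_000) * input_price \
         + (output_tokens / 1_000_000) * output_price

# Example: a 100K-token prompt with a 2K-token completion
print(f"${estimate_cost(100_000, 2_000):.3f}")  # $0.216
```

Long-context workloads are dominated by the input side here: at 256K input tokens, the prompt alone costs about $0.51 per request.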
Supported Features
Function Calling • Structured Output • Code Execution • Web Search • Batch Inference • Fine-tuning
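Function calling generally works by having the model emit a structured tool call that client code dispatches to a local function and whose result is sent back to the model. The sketch below mocks the model's output with a hand-written JSON payload rather than calling the real AI21 API; the tool name and schema are illustrative only:

```python
import json

# Hypothetical tool registry; names are illustrative, not AI21's API.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch(tool_call_json: str) -> str:
    """Route a model-emitted tool call to the matching local function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Mocked model output standing in for a real function-calling response:
mock_call = json.dumps({"name": "get_weather", "arguments": {"city": "Tel Aviv"}})
print(dispatch(mock_call))  # Sunny in Tel Aviv
```

In a real integration the tool schemas are passed with the request, and the dispatch result is appended to the conversation so the model can compose its final answer.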
Benchmark Results
Model performance metrics across various tests and benchmarks
General Knowledge
Tests on general knowledge and understanding
MMLU
Accuracy • Self-reported
TruthfulQA
Accuracy • Self-reported
Mathematics
Mathematical problems and computations
GSM8k
Accuracy • Self-reported
Reasoning
Logical reasoning and analysis
GPQA
Accuracy • Self-reported
Other Tests
Specialized benchmarks
ARC-C
Accuracy • Self-reported
Arena Hard
Accuracy • Self-reported
MMLU-Pro
Accuracy • Self-reported
Wild Bench
Accuracy • Self-reported
License & Metadata
License
Jamba Open Model License
Announcement Date
August 22, 2024
Last Updated
July 19, 2025
Similar Models
Kimi K2 Base
Moonshot AI
1.0T
Best score: 0.9 (MMLU)
Released: Jan 2025
MiniMax M2
MiniMax
230.0B
Best score: 0.8 (GPQA)
Released: Oct 2025
Price: $1.00/1M tokens
Command R+
Cohere
104.0B
Best score: 0.8 (MMLU)
Released: Aug 2024
Price: $0.25/1M tokens
Qwen3-Coder 480B A35B Instruct
Alibaba
480.0B
Best score: 0.8 (TAU)
Released: Jan 2025
GLM-4.5-Air
Zhipu AI
106.0B
Best score: 0.8 (TAU)
Released: Jul 2025
Llama 3.1 Nemotron Ultra 253B v1
NVIDIA
253.0B
Best score: 0.8 (GPQA)
Released: Apr 2025
DeepSeek-R1-0528
DeepSeek
671.0B
Best score: 0.8 (GPQA)
Released: May 2025
Price: $0.70/1M tokens
DeepSeek-V3
DeepSeek
671.0B
Best score: 0.9 (MMLU)
Released: Dec 2024
Price: $0.27/1M tokens
Recommendations are based on similarity of characteristics: developer organization, multimodality, parameter size, and benchmark performance.