
Jamba 1.5 Mini

AI21 Labs

Jamba 1.5 Mini is a compact language model from AI21, built on the hybrid Transformer-Mamba Jamba architecture. It processes long contexts efficiently, delivers competitive performance on reasoning and general tasks, and is optimized for fast inference and cost-effective deployment.

Key Specifications

Parameters
52.0B
Context
256.1K
Release Date
August 22, 2024
Average Score
56.1%

Timeline

Key dates in the model's history
Announcement
August 22, 2024
Last Update
July 19, 2025

Technical Specifications

Parameters
52.0B
Training Tokens
-
Knowledge Cutoff
March 5, 2024
Family
-
Capabilities
Multimodal, ZeroEval

Pricing & Availability

Input (per 1M tokens)
$0.20
Output (per 1M tokens)
$0.40
Max Input Tokens
256.1K
Max Output Tokens
256.1K
Supported Features
Function Calling, Structured Output, Code Execution, Web Search, Batch Inference, Fine-tuning
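A quick way to reason about the listed rates ($0.20 per 1M input tokens, $0.40 per 1M output tokens) is a small cost estimate. The helper below is a minimal sketch, not part of any AI21 SDK; the function name and example token counts are illustrative.

```python
# Hypothetical helper: estimate one request's cost from the listed
# Jamba 1.5 Mini rates ($0.20 / $0.40 per 1M input / output tokens).

INPUT_PER_M = 0.20   # USD per 1M input tokens
OUTPUT_PER_M = 0.40  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a long-context call with 200K tokens in and 2K tokens out
print(f"${estimate_cost(200_000, 2_000):.4f}")  # → $0.0408
```

At these rates, even a request near the 256K-token context limit stays well under ten cents of input cost.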

Benchmark Results

Model performance metrics across various tests and benchmarks

General Knowledge

Tests on general knowledge and understanding
MMLU
Accuracy (self-reported)
69.7%
TruthfulQA
Accuracy (self-reported)
54.1%

Mathematics

Mathematical problems and computations
GSM8k
Accuracy (self-reported)
75.8%

Reasoning

Logical reasoning and analysis
GPQA
Accuracy (self-reported)
32.3%

Other Tests

Specialized benchmarks
ARC-C
Accuracy (self-reported)
85.7%
Arena Hard
Accuracy (self-reported)
46.1%
MMLU-Pro
Accuracy (chain-of-thought, self-reported)
42.5%
Wild Bench
Accuracy (self-reported)
42.4%
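The "Average Score" of 56.1% shown in the key specifications appears to be the unweighted mean of the eight benchmark results above; this is an assumption, since the page does not state how the average is computed. A quick check:

```python
# Assumption: "Average Score" is the simple mean of the eight
# self-reported benchmark results listed on this page.
scores = {
    "MMLU": 69.7, "TruthfulQA": 54.1, "GSM8k": 75.8, "GPQA": 32.3,
    "ARC-C": 85.7, "Arena Hard": 46.1, "MMLU-Pro": 42.5, "Wild Bench": 42.4,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}%")  # ≈ 56.1%, matching the listed Average Score
```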

License & Metadata

License
Jamba Open Model License
Announcement Date
August 22, 2024
Last Updated
July 19, 2025
