Key Specifications
Parameters
52.0B
Context
256.1K
Release Date
August 22, 2024
Average Score
56.1%
Timeline
Key dates in the model's history
Announcement
August 22, 2024
Last Update
July 19, 2025
Today
March 25, 2026
Technical Specifications
Parameters
52.0B
Training Tokens
-
Knowledge Cutoff
March 5, 2024
Family
-
Capabilities
Multimodal, ZeroEval
Pricing & Availability
Input (per 1M tokens)
$0.20
Output (per 1M tokens)
$0.40
Max Input Tokens
256.1K
Max Output Tokens
256.1K
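The listed rates make per-request cost straightforward to estimate. The sketch below is illustrative only (the helper name is ours, not part of any vendor API); it uses the card's prices of $0.20 per 1M input tokens and $0.40 per 1M output tokens.

```python
# Estimate the dollar cost of one request at the card's listed rates.
# Rates from this page: $0.20 / 1M input tokens, $0.40 / 1M output tokens.
INPUT_RATE = 0.20 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.40 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10,000-token prompt with a 2,000-token completion.
print(f"${estimate_cost(10_000, 2_000):.4f}")  # prints "$0.0028"
```

Note that output tokens cost twice as much as input tokens here, so completion length dominates cost for generation-heavy workloads.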
Supported Features
Function Calling, Structured Output, Code Execution, Web Search, Batch Inference, Fine-tuning
Benchmark Results
Model performance metrics across various tests and benchmarks
General Knowledge
Tests on general knowledge and understanding
MMLU
Accuracy • Self-reported
TruthfulQA
Accuracy • Self-reported
Mathematics
Mathematical problems and computations
GSM8k
Accuracy • Self-reported
Reasoning
Logical reasoning and analysis
GPQA
Accuracy • Self-reported
Other Tests
Specialized benchmarks
ARC-C
Accuracy • Self-reported
Arena Hard
Accuracy • Self-reported
MMLU-Pro
Accuracy (chain-of-thought) • Self-reported
Wild Bench
Accuracy • Self-reported
License & Metadata
License
Jamba Open Model License
Announcement Date
August 22, 2024
Last Updated
July 19, 2025
Similar Models
DeepSeek R1 Distill Qwen 14B
DeepSeek
14.8B
Best score: 0.6 (GPQA)
Released: Jan 2025
Llama-3.3 Nemotron Super 49B v1
NVIDIA
49.9B
Best score: 0.7 (GPQA)
Released: Mar 2025
DeepSeek R1 Distill Llama 70B
DeepSeek
70.6B
Best score: 0.7 (GPQA)
Released: Jan 2025
Price: $0.10/1M tokens
DeepSeek R1 Distill Qwen 32B
DeepSeek
32.8B
Best score: 0.6 (GPQA)
Released: Jan 2025
Price: $0.12/1M tokens
Gemma 2 27B
Google
27.2B
Best score: 0.8 (MMLU)
Released: Jun 2024
Phi-3.5-MoE-instruct
Microsoft
60.0B
Best score: 0.9 (ARC)
Released: Aug 2024
Magistral Small 2506
Mistral AI
24.0B
Best score: 0.7 (GPQA)
Released: Jun 2025
Qwen3 30B A3B
Alibaba
30.5B
Best score: 0.7 (GPQA)
Released: Apr 2025
Price: $0.10/1M tokens
Recommendations are based on similarity of characteristics: developer organization, multimodality, parameter count, and benchmark performance. Choose a model to compare, or browse all available AI models in the full catalog.