DeepSeek R1 Distill Llama 8B
DeepSeek-R1-Distill-Llama-8B is a distilled variant of DeepSeek-R1: the Llama 3.1 8B base model fine-tuned on reasoning data generated by DeepSeek-R1. DeepSeek-R1 itself is a first-generation reasoning model built on DeepSeek-V3 (671 billion total parameters, 37 billion activated per token). It uses large-scale reinforcement learning (RL) to improve chain-of-thought reasoning and logical thinking, demonstrating strong performance on mathematical tasks, coding, and multi-step reasoning.
Key Specifications
Parameters
8.0B
Context
-
Release Date
January 20, 2025
Average Score
64.4%
Timeline
Key dates in the model's history
Announcement
January 20, 2025
Last Update
July 19, 2025
Technical Specifications
Parameters
8.0B
Training Tokens
14.8T tokens
Knowledge Cutoff
-
Family
-
Capabilities
Multimodal, ZeroEval
Benchmark Results
Model performance metrics across various tests and benchmarks
Reasoning
Logical reasoning and analysis
GPQA
Diamond, Pass@1 — Pass@1 measures the percentage of tasks the model answers correctly on its first attempt, without any additional prompts or iterations. It is a strict score of how well the model can solve a task in a single try, relevant wherever retries are unavailable or the accuracy of the first answer is critical. Diamond refers to the hardest subset of the GPQA evaluation methodology, which tests a model's ability to solve complex, expert-level questions on the first attempt. • Self-reported
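Since the page leans on this metric, here is a minimal Python sketch of how a Pass@1 score is computed from one graded attempt per task. The function name and the 120-correct tally are illustrative, not taken from the benchmark's own tooling (GPQA Diamond does contain 198 questions):

```python
def pass_at_1(first_attempt_correct: list[bool]) -> float:
    """Pass@1: fraction of tasks whose single first attempt was graded correct."""
    if not first_attempt_correct:
        raise ValueError("no tasks graded")
    return sum(first_attempt_correct) / len(first_attempt_correct)

# Hypothetical grading: 198 GPQA Diamond questions, 120 correct on the
# first try -> Pass@1 = 120 / 198 ≈ 0.606.
graded = [True] * 120 + [False] * 78
print(f"Pass@1 = {pass_at_1(graded):.3f}")
```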
Other Tests
Specialized benchmarks
AIME 2024
Cons@64 — the Cons@64 approach applies chain-of-thought (CoT) "thinking aloud" the way people solve mathematical problems, where errors can creep into individual steps. The model (LLM) solves each task 64 times with differently sampled reasoning chains, and the answers are aggregated by majority vote. The reasoning chains are then analyzed to determine whether a consensus answer reflects a genuinely correct solution: each chain can be checked to identify errors and the specific steps where the model goes wrong. On GPQA, Cons@64 results are on par with human experts, reaching accuracies of 0.79 and 0.81 compared with 0.79 for the experts; on MMLU, Cons@64 results even exceed human evaluation. • Self-reported
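A minimal sketch of the Cons@64 aggregation step, assuming the 64 reasoning chains have already been sampled and a final answer extracted from each (sampling, answer extraction, and chain verification are elided; all names and numbers are illustrative):

```python
from collections import Counter

def cons_at_k(final_answers: list[str]) -> str:
    """Cons@k: majority vote over k independently sampled final answers."""
    if not final_answers:
        raise ValueError("no samples")
    answer, _count = Counter(final_answers).most_common(1)[0]
    return answer

# Hypothetical AIME-style task: final answers extracted from 64 sampled
# reasoning chains; the consensus answer "113" wins the vote.
samples = ["113"] * 41 + ["226"] * 15 + ["7"] * 8
assert len(samples) == 64
print(cons_at_k(samples))  # -> 113
```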
LiveCodeBench
Pass@1 — Pass@1 measures the proportion of tasks the model solves correctly on its first attempt. It captures whether the model produces a correct answer immediately, when the user acts on the model's first response rather than allowing several attempts or iterations. The metric is especially useful for:
- evaluating model performance in usage scenarios where the user generally takes the model's first answer
- understanding the model's ability to solve tasks without additional attempts
- easy, like-for-like comparison between models
- establishing a baseline level of performance for methods such as self-consistency or best-of-k (see the sketch below)
Pass@1 is a strict metric, since it requires the model to demonstrate fully correct reasoning and output in a single attempt. It is especially important for understanding model performance under resource or time constraints. • Self-reported
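For context on how Pass@1 relates to multi-attempt metrics such as best-of-k: the standard unbiased pass@k estimator (popularized by the HumanEval paper, Chen et al. 2021) reduces to Pass@1 when k = 1. A minimal sketch, with illustrative counts:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n generations of which c
    are correct, passes."""
    if n - c < k:
        return 1.0  # every k-subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n=16 generations per task and c=4 correct: pass@1 = c/n = 0.25,
# while pass@8 rises to about 0.962.
print(pass_at_k(16, 4, 1))            # 0.25
print(round(pass_at_k(16, 4, 8), 3))  # 0.962
```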
MATH-500
Pass@1 • Self-reported
License & Metadata
License
MIT
Announcement Date
January 20, 2025
Last Updated
July 19, 2025
Similar Models
DeepSeek R1 Distill Qwen 7B
DeepSeek
7.6B
Best score: 0.5 (GPQA)
Released: Jan 2025
DeepSeek R1 Distill Qwen 1.5B
DeepSeek
1.8B
Best score: 0.3 (GPQA)
Released: Jan 2025
Llama 3.1 Nemotron Nano 8B V1
NVIDIA
8.0B
Best score: 0.5 (GPQA)
Released: Mar 2025
Phi 4 Mini Reasoning
Microsoft
3.8B
Best score: 0.5 (GPQA)
Released: Apr 2025
DeepSeek R1 Distill Qwen 14B
DeepSeek
14.8B
Best score: 0.6 (GPQA)
Released: Jan 2025
DeepSeek R1 Distill Llama 70B
DeepSeek
70.6B
Best score: 0.7 (GPQA)
Released: Jan 2025
Price: $0.10/1M tokens
DeepSeek R1 Distill Qwen 32B
DeepSeek
32.8B
Best score: 0.6 (GPQA)
Released: Jan 2025
Price: $0.12/1M tokens
DeepSeek-V3.2 (Non-thinking)
DeepSeek
685.0B
Released: Nov 2025
Price: $0.28/1M tokens
Recommendations are based on similarity of characteristics: developer organization, multimodality, parameter size, and benchmark performance.