DeepSeek R1 Distill Qwen 1.5B

DeepSeek

DeepSeek-R1 is a first-generation reasoning model built on DeepSeek-V3 (671 billion total parameters, 37 billion active parameters per token). The model uses large-scale reinforcement learning (RL) to improve chain-of-thought reasoning and logical thinking, and demonstrates strong performance in math, coding, and multi-step reasoning tasks. DeepSeek-R1-Distill-Qwen-1.5B is a small dense model distilled from DeepSeek-R1's reasoning outputs into a Qwen base model.

Key Specifications

Parameters
1.8B
Context
-
Release Date
January 20, 2025
Average Score
46.8%

Timeline

Key dates in the model's history
Announcement
January 20, 2025
Last Update
July 19, 2025
Today
March 25, 2026

Technical Specifications

Parameters
1.8B
Training Tokens
14.8T tokens
Knowledge Cutoff
-
Family
-
Capabilities
Multimodal, ZeroEval

Benchmark Results

Model performance metrics across various tests and benchmarks

Reasoning

Logical reasoning and analysis
GPQA
Diamond, Pass@1. GPQA Diamond is the hardest subset of the Graduate-Level Google-Proof Q&A benchmark, consisting of expert-written science questions that require multi-step reasoning. Pass@1 measures the probability that the model produces a correct answer on its first attempt. Self-reported
33.8%

Other Tests

Specialized benchmarks
AIME 2024
Cons@64. Consensus (majority-vote) accuracy: 64 independent solutions are sampled for each problem, and the most frequent final answer is taken as the model's answer. Self-reported
52.7%
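The Cons@64 procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the official evaluation harness; the `cons_at_64` helper and the sample answers are hypothetical.

```python
from collections import Counter

def cons_at_64(sampled_answers):
    """Majority vote: given 64 independently sampled final answers
    for one problem, return the most frequent one as the model's
    consensus answer."""
    counts = Counter(sampled_answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical samples: 40 runs say "204", 24 runs say "96".
samples = ["204"] * 40 + ["96"] * 24
print(cons_at_64(samples))  # prints "204"
```

The consensus answer is then compared against the ground truth per problem, and Cons@64 is the fraction of problems where the majority vote is correct.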
LiveCodeBench
Pass@1. A metric for evaluating language models on tasks that require reasoning: the probability that the model produces a correct answer on its first attempt. It is usually estimated by generating several independent solutions for each task (for example, 100 or 200) and computing the fraction that are correct. Because large language models are stochastic, their answers can vary between runs; Pass@1 measures the ability to reach a correct solution without retries, which is especially relevant for complex reasoning tasks such as mathematical puzzles, programming, and logic, where a single error can invalidate the result. Self-reported
16.9%
MATH-500
Pass@1. Measures the probability of obtaining a correct answer on the first attempt, and is the standard evaluation method in mathematical-reasoning research. When the model produces only one answer per task, Pass@1 equals plain accuracy. To compute it: 1) sample each task k times (k independent solutions); 2) report Pass@1 as the proportion of correct answers among the k attempts. Example: 75 correct answers out of 100 attempts gives Pass@1 = 0.75. This metric matters because it reflects real usage, where the user typically sees only one answer. Self-reported
83.9%
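The Pass@1 computation described above can be sketched as a short Python helper. This is an illustrative sketch, not the benchmark's official scoring code; the `pass_at_1` name and the example flags are hypothetical.

```python
def pass_at_1(correct_flags):
    """Estimate Pass@1 as the fraction of sampled attempts that are
    correct. `correct_flags` holds one boolean per independent sample
    of the same task (or pooled across tasks)."""
    return sum(correct_flags) / len(correct_flags)

# Hypothetical run: 75 correct solutions out of 100 attempts.
flags = [True] * 75 + [False] * 25
print(pass_at_1(flags))  # prints 0.75
```

With a single sample per task, this reduces to plain accuracy, matching the description above.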

License & Metadata

License
MIT
Announcement Date
January 20, 2025
Last Updated
July 19, 2025

Similar Models

All Models

Recommendations are based on similarity of characteristics: developer organization, multimodality, parameter size, and benchmark performance. Choose a model to compare or go to the full catalog to browse all available AI models.