
DeepSeek-V3.2-Exp

DeepSeek

DeepSeek-V3.2-Exp is an experimental iteration that introduces DeepSeek Sparse Attention (DSA) to improve training and long-context inference efficiency while maintaining output quality on par with V3.1. The model explores fine-grained sparse attention for processing extended sequences.
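As a rough intuition for fine-grained sparse attention, each query can attend to only a small, per-query subset of keys instead of the full sequence. The sketch below is illustrative only, assuming a simple top-k key selection; the function name and selection rule are this page's invention, not DeepSeek's published DSA implementation.

```python
import math

def topk_sparse_attention(q, keys, values, k=4):
    """One attention step where the query attends to only its top-k keys."""
    d = len(q)
    # score every key against the query (scaled dot product)
    scores = [sum(qi * ki for qi, ki in zip(q, key)) / math.sqrt(d)
              for key in keys]
    # select the k highest-scoring positions; all other keys are skipped
    top = sorted(range(len(scores)), key=scores.__getitem__)[-k:]
    # softmax over the selected scores only
    m = max(scores[i] for i in top)
    w = [math.exp(scores[i] - m) for i in top]
    z = sum(w)
    # weighted sum of the selected value vectors
    dim = len(values[0])
    return [sum((w[j] / z) * values[i][c] for j, i in enumerate(top))
            for c in range(dim)]

# Toy usage: 8 keys/values of dimension 4; the query attends to only 3 keys.
q = [0.5, -0.1, 0.3, 0.0]
keys = [[math.sin(i + c) for c in range(4)] for i in range(8)]
values = [[float(i)] * 4 for i in range(8)]
out = topk_sparse_attention(q, keys, values, k=3)
print(len(out))  # 4
```

Because attention weights are computed over only k keys per query, the per-query cost scales with k rather than with the full sequence length, which is the efficiency argument behind sparse attention for long contexts.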

Key Specifications

Parameters
685.0B
Context
163.8K
Release Date
September 28, 2025
Average Score
85.0%

Timeline

Key dates in the model's history
Announcement / Last Update
September 28, 2025

Technical Specifications

Parameters
685.0B
Training Tokens
-
Knowledge Cutoff
-
Family
-
Capabilities
Multimodal, ZeroEval

Pricing & Availability

Input (per 1M tokens)
$0.27
Output (per 1M tokens)
$0.41
Max Input Tokens
163.8K
Max Output Tokens
65.5K
Supported Features
Function Calling, Structured Output, Code Execution, Web Search, Batch Inference, Fine-tuning
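At the listed rates ($0.27 per 1M input tokens, $0.41 per 1M output tokens), request cost is a simple linear function of token counts. A minimal estimator, with the example token counts chosen purely for illustration:

```python
# Listed per-token rates, derived from the per-1M-token prices above.
INPUT_RATE = 0.27 / 1_000_000   # $ per input token
OUTPUT_RATE = 0.41 / 1_000_000  # $ per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 100K-token prompt with a 10K-token reply:
cost = estimate_cost(100_000, 10_000)
print(f"${cost:.4f}")  # $0.0311
```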

Benchmark Results

Model performance metrics across various tests and benchmarks

Reasoning

Logical reasoning and analysis
GPQA
Reasoning mode (without tools), self-reported
80.0%

Other Tests

Specialized benchmarks
SimpleQA
Tool use, self-reported
97.0%
AIME 2025
Pass@1 (reasoning mode, without tools), self-reported
89.0%
MMLU-Pro
Reasoning mode (without tools), self-reported
85.0%
Aider-Polyglot
Reasoning mode (without tools), self-reported
74.0%
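The headline Average Score of 85.0% is consistent with the unweighted mean of the five self-reported benchmark results listed above. A quick check (the averaging method is an inference from the numbers, not stated by the page):

```python
# The five self-reported benchmark scores listed above.
scores = {
    "GPQA": 80.0,
    "SimpleQA": 97.0,
    "AIME 2025": 89.0,
    "MMLU-Pro": 85.0,
    "Aider-Polyglot": 74.0,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.1f}%")  # 85.0%
```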

License & Metadata

License
MIT
Announcement Date
September 28, 2025
Last Updated
September 28, 2025
