Mistral Small 3 24B Instruct
Mistral Small 3 is a 24-billion-parameter LLM distributed under the Apache-2.0 license. The model focuses on instruction following with low latency and high efficiency, delivering performance comparable to that of larger models. It produces fast, accurate responses for conversational agents, function calling, and domain-specific fine-tuning. Suitable for local inference when quantized, it competes with models 2-3x its size while using significantly fewer computational resources.
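As a hedged illustration of the function-calling use case mentioned above, the sketch below builds a chat-completion request body in the OpenAI-compatible format that common serving stacks for this model (e.g. vLLM or a llama.cpp server) accept. The model identifier, the `get_weather` tool, and its schema are assumptions for illustration, not details from this page.

```python
import json

def build_request(user_message: str) -> dict:
    """Assemble an OpenAI-compatible chat request with one tool declared.

    The tool schema below is hypothetical; the model may respond with a
    tool call naming this function instead of plain text.
    """
    return {
        # Assumed Hugging Face repo id; substitute whatever your server exposes.
        "model": "mistralai/Mistral-Small-24B-Instruct-2501",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical function
                    "description": "Get current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        # A low temperature is a common choice for instruction following.
        "temperature": 0.15,
    }

# The dict serializes to the JSON body you would POST to the server's
# /v1/chat/completions endpoint.
body = build_request("What's the weather in Paris?")
print(json.dumps(body, indent=2))
```

Sending this payload and parsing a returned `tool_calls` entry is left to the client library; the point here is only the shape of the request.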
Benchmark Results
Model performance metrics across programming, mathematics, reasoning, and other benchmarks (benchmark data not captured in this extract).
Similar Models
Mistral NeMo Instruct (Mistral AI)
Magistral Small 2506 (Mistral AI)
Devstral Small 1.1 (Mistral AI)
Mistral Small (Mistral AI)
Codestral-22B (Mistral AI)
Mistral Small 3 24B Base (Mistral AI)
Pixtral-12B (Mistral AI)
Mistral Small 3.1 24B Instruct (Mistral AI)
Recommendations are based on similarity of characteristics: developer organization, multimodality, parameter size, and benchmark performance.