Nova Micro

Amazon

Amazon Nova Micro is a text-only model delivering the lowest latency at minimal cost. Optimized for speed and efficiency, it excels in tasks like text summarization, translation, content classification, code generation, and mathematical reasoning.
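As a quick orientation, a request to Nova Micro can be sent through the Amazon Bedrock Converse API with boto3. This is a minimal sketch, not an official snippet: the model ID, region, and inference settings below are assumptions to verify against your Bedrock console.

```python
# Minimal sketch: calling Nova Micro via the Bedrock Converse API (boto3).
# MODEL_ID and region_name are assumptions -- check which identifiers are
# enabled for your AWS account.

MODEL_ID = "amazon.nova-micro-v1:0"  # assumed Bedrock model identifier

def build_request(prompt: str) -> dict:
    """Build a Converse API request body for a single user turn."""
    return {
        "modelId": MODEL_ID,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

def summarize(text: str) -> str:
    """Send a summarization prompt and return the model's reply text."""
    # boto3 is imported here so the request-building helper above stays
    # usable even without the AWS SDK installed.
    import boto3
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(**build_request(f"Summarize:\n\n{text}"))
    return response["output"]["message"]["content"][0]["text"]
```

Calling `summarize()` requires AWS credentials with Bedrock access; `build_request()` alone shows the payload shape the Converse API expects.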

Key Specifications

Parameters
-
Context
128.0K
Release Date
November 20, 2024
Average Score
67.0%

Timeline

Key dates in the model's history
Announcement
November 20, 2024
Last Update
July 19, 2025

Technical Specifications

Parameters
-
Training Tokens
-
Knowledge Cutoff
-
Family
-
Capabilities
Multimodal, ZeroEval

Pricing & Availability

Input (per 1M tokens)
$0.03
Output (per 1M tokens)
$0.14
Max Input Tokens
128.0K
Max Output Tokens
128.0K
Supported Features
Function Calling, Structured Output, Code Execution, Web Search, Batch Inference, Fine-tuning
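The listed per-token prices make request costs easy to estimate with simple arithmetic. The sketch below uses only the rates shown above ($0.03 per 1M input tokens, $0.14 per 1M output tokens); actual billing may differ, so treat it as a back-of-the-envelope helper.

```python
# Back-of-the-envelope cost estimate from the listed Nova Micro prices.
INPUT_PRICE_PER_M = 0.03   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.14  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10,000-token prompt with a 1,000-token completion
# costs roughly $0.0003 + $0.00014 = $0.00044.
cost = estimate_cost(10_000, 1_000)
```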

Benchmark Results

Model performance metrics across various tests and benchmarks

General Knowledge

Tests on general knowledge and understanding
MMLU
0-shot Chain-of-Thought, self-reported
77.6%

Programming

Programming skills tests
HumanEval
pass@1 accuracy, self-reported
81.1%

Mathematics

Mathematical problems and computations
GSM8k
0-shot Chain-of-Thought, self-reported
92.3%
MATH
0-shot Chain-of-Thought, self-reported
69.3%

Reasoning

Logical reasoning and analysis
DROP
6-shot Chain-of-Thought, self-reported
79.3%
GPQA
0-shot Chain-of-Thought, self-reported
40.0%

Other Tests

Specialized benchmarks
ARC-C
0-shot, self-reported
90.2%
BBH
3-shot Chain-of-Thought, self-reported
79.5%
BFCL
accuracy, self-reported
56.2%
CRAG
accuracy, self-reported
43.1%
FinQA
0-shot accuracy, self-reported
65.2%
IFEval
0-shot, self-reported
87.2%
SQuALITY
ROUGE-L, self-reported
18.8%
Translation en→Set1 COMET22
COMET22, self-reported
88.5%
Translation en→Set1 spBleu
spBleu, self-reported
40.2%
Translation Set1→en COMET22
COMET22, self-reported
88.7%
Translation Set1→en spBleu
spBleu, self-reported
42.6%

License & Metadata

License
proprietary
Announcement Date
November 20, 2024
Last Updated
July 19, 2025

Similar Models


Recommendations are based on similarity of characteristics: developer organization, multimodality, parameter size, and benchmark performance. Choose a model to compare, or go to the full catalog to browse all available AI models.