DeepSeek: R1 Distill Qwen 1.5B
- 128K Context
- $0.18/M Input Tokens
- $0.18/M Output Tokens
- DeepSeek
- Text-to-text
- 7 Feb 2025
DeepSeek R1 Distill Qwen 1.5B is a distilled large language model based on Qwen 2.5 Math 1.5B, trained using outputs from DeepSeek R1. It is a very small and efficient model that outperforms GPT-4o-0513 on math benchmarks.
Other benchmark results include:
- AIME 2024 pass@1: 28.9
- AIME 2024 cons@64: 52.7
- MATH-500 pass@1: 83.9
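For context on these metrics (following the convention used in the DeepSeek R1 report, which these figures come from): pass@1 is estimated by sampling k responses per problem and averaging their correctness, while cons@64 takes the majority-vote answer across 64 samples. A sketch of the pass@1 estimator:

```latex
\text{pass@1} = \frac{1}{k} \sum_{i=1}^{k} p_i,
\qquad p_i \in \{0, 1\} \text{ (correctness of the } i\text{-th sample)}
```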
The model leverages fine-tuning on DeepSeek R1's outputs, enabling performance comparable to that of much larger frontier models.
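Models listed this way are typically served through an OpenAI-compatible API, so a minimal sketch of querying it might look like the following. The base URL, model slug, and API key are assumptions for illustration; substitute the identifiers your provider documents.

```python
# Minimal sketch: querying the model via an OpenAI-compatible endpoint.
# The base_url and model slug are assumptions; check your provider's docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",                   # placeholder credential
)

response = client.chat.completions.create(
    model="deepseek/deepseek-r1-distill-qwen-1.5b",  # assumed model slug
    messages=[
        {"role": "user", "content": "If 3x + 7 = 22, what is x? Think step by step."},
    ],
    max_tokens=1024,  # leave room for the model's reasoning trace
)

print(response.choices[0].message.content)
```

Distilled R1 variants tend to emit a lengthy chain of thought before the final answer, so a generous token budget is advisable for math prompts like the one above.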