
DeepSeek: R1 (free)
FREE

  • 160K context
  • $0/M input tokens
  • $0/M output tokens

DeepSeek R1 is here: performance on par with OpenAI o1, but open-sourced and with fully open reasoning tokens. The model has 671B total parameters, with 37B activated per inference pass.

Fully open-source model & technical report.

MIT licensed: Distill & commercialize freely!
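A minimal sketch of querying this model through an OpenAI-compatible chat completions endpoint. The base URL and the model slug `deepseek/deepseek-r1:free` are assumptions based on this listing, not confirmed by it:

```python
# Sketch: calling DeepSeek R1 (free) via an OpenAI-compatible endpoint.
# API_URL and MODEL are assumed from this listing, not guaranteed.
import os
import requests

API_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed endpoint
MODEL = "deepseek/deepseek-r1:free"                         # assumed slug

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Since the model exposes its reasoning tokens, the returned message may include the full chain of thought alongside the final answer.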

Related Posts

1. Introduction We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters with 37B activated for each token. To achieve efficient inference and cost-eff ...

DeepSeek V3
DeepSeek
62.5K context $0.14/M input tokens $0.28/M output tokens

DeepSeek-V3 is the latest model from the DeepSeek team, building upon the instruction following and coding abilities of the previous versions. Pre-trained on nearly 15 trillion tokens, the reported ...

DeepSeek V3
DeepSeek
62.5K context $0.14/M input tokens $0.28/M output tokens
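The per-million rates above make per-request cost easy to estimate. A small sketch, with token counts chosen purely for illustration:

```python
# Estimating request cost from the listed DeepSeek V3 rates.
# The token counts are illustrative, not from the listing.
INPUT_RATE = 0.14 / 1_000_000   # $ per input token  ($0.14/M)
OUTPUT_RATE = 0.28 / 1_000_000  # $ per output token ($0.28/M)

input_tokens, output_tokens = 10_000, 2_000
cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"${cost:.5f}")  # $0.00196
```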

DeepSeek R1 Distill Llama 70B is a distilled large language model based on Llama-3.3-70B-Instruct, using outputs from DeepSeek R1. The m ...

DeepSeek: R1 Distill Llama 70B
DeepSeek
128K context $0.23/M input tokens $0.69/M output tokens

DeepSeek R1 Distill Llama 70B is a distilled large language model based on Llama-3.3-70B-Instruct, using outputs from DeepSeek R1. The m ...

DeepSeek: R1 Distill Llama 70B (free)
FREE
DeepSeek
128K context $0/M input tokens $0/M output tokens
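The distill cards above describe the same recipe: a smaller student model fine-tuned on outputs generated by DeepSeek R1. A minimal sketch of the data side of that pipeline; `query_teacher` is a hypothetical stand-in for a real R1 API call, and the file name and prompts are illustrative:

```python
# Sketch: building a supervised fine-tuning dataset from teacher outputs,
# the core of the distillation described above. All names are hypothetical.
import json

def query_teacher(prompt: str) -> str:
    # Hypothetical stand-in: in practice this would call DeepSeek R1 and
    # return its full response, including the reasoning trace.
    return f"<think>...reasoning...</think> Answer to: {prompt}"

prompts = ["What is 17 * 24?", "Explain Bayes' theorem."]

with open("distill_sft.jsonl", "w") as f:
    for prompt in prompts:
        record = {"prompt": prompt, "completion": query_teacher(prompt)}
        f.write(json.dumps(record) + "\n")

# A student such as Llama-3.3-70B-Instruct is then fine-tuned on these pairs.
```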

DeepSeek R1 Distill Llama 8B is a distilled large language model based on Llama-3.1-8B-Instruct, using outputs from DeepSeek R1. The mode ...

DeepSeek: R1 Distill Llama 8B
DeepSeek
31.25K context $0.04/M input tokens $0.04/M output tokens

DeepSeek R1 Distill Qwen 1.5B is a distilled large language model based on Qwen 2.5 Math 1.5B, using outputs from DeepSeek R1 ...

DeepSeek: R1 Distill Qwen 1.5B
DeepSeek
128K context $0.18/M input tokens $0.18/M output tokens