DeepSeek V3

  • 62.5K Context
  • $0.14/M Input Tokens
  • $0.28/M Output Tokens
Model Unavailable

DeepSeek-V3 is the latest model from the DeepSeek team, building on the instruction-following and coding abilities of previous versions. Pre-trained on nearly 15 trillion tokens, it outperforms other open-source models on the reported evaluations and rivals leading closed-source models. For more details, please visit the DeepSeek-V3 repo.

DeepSeek-V2 Chat is a conversational finetune of DeepSeek-V2, a Mixture-of-Experts (MoE) language model. It comprises 236B total parameters, of which 21B are activated for each token.
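
As a rough illustration of what "activated for each token" means in an MoE model, the sketch below routes each token through only a top-k subset of expert FFNs, so a small fraction of the total parameters does the work for any single token. The expert count, top-k value, and layer sizes here are hypothetical toy numbers, not DeepSeek-V2's actual configuration.

```python
import numpy as np

# Minimal top-k MoE routing sketch (toy sizes, not DeepSeek-V2's real
# config): each token is sent to only k of E expert FFNs, so the
# activated parameter count per token is far below the total.
rng = np.random.default_rng(0)
d_model, d_ff, n_experts, top_k = 64, 256, 8, 2

# One expert = a two-matrix feed-forward network.
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.02,
     rng.standard_normal((d_ff, d_model)) * 0.02)
    for _ in range(n_experts)
]
gate = rng.standard_normal((d_model, n_experts)) * 0.02  # router weights

def moe_layer(x):
    """Route a single token vector x through its top-k experts."""
    logits = x @ gate
    top = np.argsort(logits)[-top_k:]          # indices of chosen experts
    sel = logits[top]
    weights = np.exp(sel - sel.max())          # softmax over selected gates
    weights /= weights.sum()
    out = np.zeros_like(x)
    for w, i in zip(weights, top):
        w_in, w_out = experts[i]
        out += w * (np.maximum(x @ w_in, 0.0) @ w_out)  # ReLU FFN
    return out

y = moe_layer(rng.standard_normal(d_model))

per_expert = d_model * d_ff * 2
print(f"total expert params:  {n_experts * per_expert:,}")
print(f"activated per token:  {top_k * per_expert:,}")
```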

Compared with DeepSeek 67B, DeepSeek-V2 achieves stronger performance while cutting training costs by 42.5%, shrinking the KV cache by 93.3%, and raising maximum generation throughput by a factor of 5.76.
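
A back-of-envelope view of where a KV-cache reduction of that magnitude can come from: caching one compressed latent vector per token instead of full per-head keys and values, which is the idea behind DeepSeek-V2's multi-head latent attention (MLA), shrinks the cache roughly in proportion to the compression ratio. The dimensions below are hypothetical, chosen only to make the arithmetic concrete.

```python
# Back-of-envelope KV-cache sizing (hypothetical dimensions, not
# DeepSeek-V2's actual ones). Standard attention caches a key and a
# value per head per token; a latent-compression scheme caches one
# small latent vector per token instead.
n_layers, n_heads, d_head = 60, 64, 128
d_latent = 512            # assumed compressed KV dimension
bytes_per = 2             # fp16/bf16

def mha_cache(tokens):
    # 2 tensors (K and V) x heads x head dim, per layer, per token
    return tokens * n_layers * 2 * n_heads * d_head * bytes_per

def latent_cache(tokens):
    # one shared latent vector per token, per layer
    return tokens * n_layers * d_latent * bytes_per

t = 32_000
print(f"standard KV cache: {mha_cache(t) / 2**30:.1f} GiB")
print(f"latent KV cache:   {latent_cache(t) / 2**30:.1f} GiB")
print(f"reduction:         {1 - latent_cache(t) / mha_cache(t):.1%}")
```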

DeepSeek-V2 achieves remarkable performance on both standard benchmarks and open-ended generation evaluations.

Related Posts

DeepSeek V3, a 685B-parameter, mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team. It succeeds the DeepSeek V3 m ...

DeepSeek: DeepSeek V3 0324
DeepSeek
62.5K context $0.27/M input tokens $1.1/M output tokens

DeepSeek V3, a 685B-parameter, mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team. It succeeds the DeepSeek V3 m ...

DeepSeek: DeepSeek V3 0324 (free)
DeepSeek
62.5K context $0 input tokens $0 output tokens
FREE

DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active) that supports both thinking and non-thinking modes via prompt templates. It extends the DeepSeek-V3 base with a two-phase ...

DeepSeek: DeepSeek V3.1 (free)
DeepSeek
159.96K context $0 input tokens $0 output tokens
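
Since the card above mentions switching between thinking and non-thinking modes via prompt templates, here is a rough client-side sketch of how such a toggle is typically wired up. The tag names and template layout are hypothetical placeholders, not DeepSeek-V3.1's actual chat template; consult the model card for the real format.

```python
# Hypothetical sketch of a thinking / non-thinking prompt toggle.
# The <|user|>/<|assistant|>/<think> tokens below are placeholders.

def build_prompt(user_message: str, thinking: bool) -> str:
    """Assemble a single-turn prompt, optionally opening a think block."""
    prompt = f"<|user|>{user_message}<|assistant|>"
    if thinking:
        # Thinking mode: the assistant turn starts inside a scratchpad
        # block that the model closes before giving its final answer.
        prompt += "<think>"
    else:
        # Non-thinking mode: pre-close the block so the model answers
        # directly without emitting reasoning tokens.
        prompt += "<think></think>"
    return prompt

print(build_prompt("What is 17 * 24?", thinking=True))
print(build_prompt("What is 17 * 24?", thinking=False))
```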

1. Introduction We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token. To achieve efficient inference and cost-eff ...

DeepSeek V3
DeepSeek
62.5K context $0.14/M input tokens $0.28/M output tokens

DeepSeek-R1 1. Introduction We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (R ...

DeepSeek: R1 0528 (free)
DeepSeek
160K context $0 input tokens $0 output tokens

DeepSeek R1 Distill Llama 70B is a distilled large language model based on Llama-3.3-70B-Instruct, using outputs from DeepSeek R1. The m ...

DeepSeek: DeepSeek R1 Distill Llama 70B
DeepSeek
128K context $0.23/M input tokens $0.69/M output tokens
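
As a rough illustration of the distillation recipe the card above describes, the sketch below fine-tunes a small student on teacher-generated completions, i.e. plain supervised next-token training on the teacher's outputs. The toy model, random "teacher" data, and hyperparameters are placeholders, not the actual R1-to-Llama distillation setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, d_model, seq_len = 100, 32, 16

# Toy "teacher outputs": in the real recipe these would be reasoning
# traces sampled from DeepSeek R1; here they are random token ids.
teacher_completions = torch.randint(0, vocab, (256, seq_len))

# Tiny stand-in student (the real student is Llama-3.3-70B-Instruct).
student = nn.Sequential(
    nn.Embedding(vocab, d_model),
    nn.Linear(d_model, vocab),
)

opt = torch.optim.AdamW(student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Standard next-token SFT on the teacher's outputs.
for step in range(100):
    batch = teacher_completions[torch.randint(0, 256, (32,))]
    inputs, targets = batch[:, :-1], batch[:, 1:]
    logits = student(inputs)                       # (B, T-1, vocab)
    loss = loss_fn(logits.reshape(-1, vocab), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 25 == 0:
        print(f"step {step:3d}  loss {loss.item():.3f}")
```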