Mixtral 8x7B (base)

  • 32K Context
  • $0.54/M Input Tokens
  • $0.54/M Output Tokens
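Per-million-token pricing like the rates above translates to per-request cost by simple proportion. As a quick sketch (a hypothetical helper, not an official pricing API), using the $0.54/M input and output rates listed:

```python
# Hypothetical helper: estimate the cost of one request at per-million-token
# prices. Defaults use the $0.54/M input and output rates listed above.
def request_cost(input_tokens, output_tokens,
                 input_price_per_m=0.54, output_price_per_m=0.54):
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${request_cost(2_000, 500):.6f}")  # $0.001350
```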

A pretrained generative Sparse Mixture of Experts model by Mistral AI. It incorporates 8 experts (feed-forward networks) for a total of 47B parameters. This is the base model (not fine-tuned for instruction following); see Mixtral 8x7B Instruct for an instruct-tuned variant.

#moe
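The card describes the sparse Mixture-of-Experts design only briefly. As an illustration (a toy NumPy sketch, not Mistral's actual implementation), a sparse MoE layer routes each token to the top-2 of its 8 expert feed-forward networks, so only a fraction of the 47B total parameters is active per token:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class SparseMoE:
    """Toy sparse MoE layer: a learned router scores all experts per token,
    and only the top-k experts' feed-forward networks are evaluated."""
    def __init__(self, d_model, d_ff, n_experts=8, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.router = rng.standard_normal((d_model, n_experts)) * 0.02
        # Each expert is a simple two-layer feed-forward network.
        self.w1 = rng.standard_normal((n_experts, d_model, d_ff)) * 0.02
        self.w2 = rng.standard_normal((n_experts, d_ff, d_model)) * 0.02
        self.top_k = top_k

    def __call__(self, x):  # x: (n_tokens, d_model)
        logits = x @ self.router                            # (n_tokens, n_experts)
        top = np.argsort(logits, axis=-1)[:, -self.top_k:]  # chosen expert ids
        gates = softmax(np.take_along_axis(logits, top, axis=-1))
        out = np.zeros_like(x)
        for t in range(x.shape[0]):          # dispatch each token to its experts
            for slot in range(self.top_k):
                e = top[t, slot]
                h = np.maximum(x[t] @ self.w1[e], 0.0)       # ReLU FFN
                out[t] += gates[t, slot] * (h @ self.w2[e])  # gate-weighted sum
        return out

moe = SparseMoE(d_model=16, d_ff=32)
y = moe(np.random.default_rng(1).standard_normal((4, 16)))
print(y.shape)  # (4, 16)
```

The dimensions, initialization, and ReLU activation here are placeholders; the point is the routing pattern, where compute scales with top_k rather than n_experts.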

Related Posts

A 7.3B parameter Mamba-based model designed for code and reasoning tasks. Linear-time inference allows for theoretically infinite sequence lengths; 256k token context window; optimized for qu...

Mistral: Codestral Mamba
MistralAI
250K context $0.25/M input tokens $0.25/M output tokens

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to be the latest ...

Mistral: Mistral 7B Instruct
MistralAI
32K context $0.055/M input tokens $0.055/M output tokens

This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch anno ...

Mistral Large 2407
MistralAI
125K context $2/M input tokens $6/M output tokens

This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2411). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch anno ...

Mistral Large 2411
MistralAI
125K context $2/M input tokens $6/M output tokens

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chi ...

Mistral: Mistral Nemo
MistralAI
125K context $0.13/M input tokens $0.13/M output tokens

Cost-efficient, fast, and reliable option for use cases such as translation, summarization, and sentiment analysis. ...

Mistral Small
MistralAI
31.25K context $0.2/M input tokens $0.6/M output tokens