
MistralAI

Mistral Large 2411
MistralAI
125K context | $2/M input tokens | $6/M output tokens

This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch anno ...

Mistral Large 2407
MistralAI
125K context | $2/M input tokens | $6/M output tokens
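As a quick sanity check on per-token pricing like the figures listed above, cost scales linearly with token counts at separate input and output rates. A minimal sketch (the function name and example token counts are illustrative, not part of any API):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float = 2.0,
                 output_price_per_m: float = 6.0) -> float:
    """Return the USD cost of one request, given prices quoted
    per million tokens (defaults match the Mistral Large rates above)."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example: a 10,000-token prompt with a 2,000-token completion.
print(f"${request_cost(10_000, 2_000):.3f}")  # prints $0.032
```

At these rates, output tokens cost three times as much as input tokens, so completion length dominates the bill for long generations.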

Pixtral Large is a 124B open-weights multimodal model built on top of Mistral Large 2. The model is able to understand documents, charts and natural images. The mode ...

Mistral: Pixtral Large 2411
MistralAI
125K context | $2/M input tokens | $6/M output tokens | $0.003/M image tokens

A pretrained generative Sparse Mixture of Experts, by Mistral AI. Incorporates 8 experts (feed-forward networks) for a total of 47B parameters. Base model (not fine-tuned for instructions) - see [Mix ...

Mixtral 8x7B (base)
MistralAI
32K context | $0.54/M input tokens | $0.54/M output tokens
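The sparse Mixture-of-Experts design described above means each token is routed to only a few of the 8 expert feed-forward networks, so only a fraction of the 47B parameters is active per token. A toy sketch of top-k routing (Mixtral uses top-2 of 8 per its technical report; the scalar "experts" here stand in for full feed-forward networks):

```python
import math

NUM_EXPERTS = 8   # per the description above
TOP_K = 2         # Mixtral routes each token to 2 of its 8 experts

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(token, router_logits, experts):
    """Sparse MoE: evaluate only the top-k experts and mix their
    outputs by the renormalized router weights."""
    weights = softmax(router_logits)
    top = sorted(range(NUM_EXPERTS), key=lambda i: weights[i], reverse=True)[:TOP_K]
    norm = sum(weights[i] for i in top)
    return sum(weights[i] / norm * experts[i](token) for i in top)

# Toy experts: each just scales its input by its index.
experts = [lambda x, s=i: s * x for i in range(NUM_EXPERTS)]
out = moe_layer(1.0, [0.0] * 6 + [1.0, 2.0], experts)  # ≈ 6.731: mostly expert 7, some expert 6
```

The experts that are not selected are never evaluated, which is why inference cost tracks the active parameter count rather than the 47B total.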

Cost-efficient, fast, and reliable option for use cases such as translation, summarization, and sentiment analysis. ...

Mistral Small
MistralAI
31.25K context | $0.2/M input tokens | $0.6/M output tokens

This model is currently powered by Mistral-7B-v0.2, and incorporates a "better" fine-tune than Mistral 7B, inspired by community work. It's best used for larg ...

Mistral Tiny
MistralAI
31.25K context | $0.25/M input tokens | $0.25/M output tokens

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to be the latest ...

Mistral: Mistral 7B Instruct
MistralAI
32K context | $0.055/M input tokens | $0.055/M output tokens

A 7.3B parameter Mamba-based model designed for code and reasoning tasks.
- Linear-time inference, allowing for theoretically infinite sequence lengths
- 256k token context window
- Optimized for qu...

Mistral: Codestral Mamba
MistralAI
250K context | $0.25/M input tokens | $0.25/M output tokens
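The linear-time claim above follows from Mamba's recurrent formulation: each new token folds into a fixed-size state, so per-token cost is constant, whereas attention re-reads the entire prefix for every token. A schematic sketch of the cost structure only (not Mamba's actual selective state-space math):

```python
def recurrent_decode(tokens, update, state=0.0):
    """O(1) work per token: fold each token into a fixed-size state."""
    outs = []
    for t in tokens:
        state = update(state, t)   # constant-size state, constant cost
        outs.append(state)
    return outs

def attention_decode(tokens, score):
    """O(t) work for token t: score against the whole prefix."""
    outs = []
    for i, t in enumerate(tokens):
        prefix = tokens[: i + 1]
        outs.append(sum(score(t, p) * p for p in prefix) / len(prefix))
    return outs

# Running sum as a stand-in for a state update rule.
recurrent_decode([1.0, 2.0, 3.0], lambda s, t: s + t)  # [1.0, 3.0, 6.0]
```

Summed over a sequence of length n, the recurrent path does O(n) total work while the attention path does O(n²), which is what makes very long contexts tractable for this architecture.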

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chi ...

Mistral: Mistral Nemo
MistralAI
125K context | $0.13/M input tokens | $0.13/M output tokens

The first image-to-text model from Mistral AI. Its weights were released via torrent, per their tradition: https://x.com/mistralai/status/1833758285167722836 ...

Mistral: Pixtral 12B
MistralAI
4K context | $0.1/M input tokens | $0.1/M output tokens | $0.144/K image tokens