Mistral Large 2411

  • 125K Context
  • $2/M Input Tokens
  • $6/M Output Tokens
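As a rough illustration of how these per-million-token rates translate into the cost of a single request, here is a small sketch; the token counts in it are made-up example values, not measurements:

```python
# Rough per-request cost estimate at the listed rates:
# $2 per million input tokens, $6 per million output tokens.
# The token counts below are arbitrary example values.
input_tokens = 90_000   # e.g. a long document plus the prompt
output_tokens = 1_500   # e.g. a short structured answer

cost = (input_tokens / 1_000_000) * 2.00 + (output_tokens / 1_000_000) * 6.00
print(f"Estimated cost: ${cost:.4f}")  # Estimated cost: $0.1890
```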

This is Mistral AI’s flagship model, Mistral Large 2 (version mistral-large-2411). It’s a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch announcement here.

It supports dozens of languages including French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean, along with 80+ coding languages including Python, Java, C, C++, JavaScript, and Bash. Its long context window allows precise information recall from large documents.
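Since the model is advertised for chat and JSON output, a minimal usage sketch against an OpenAI-compatible chat completions endpoint follows. The base URL, API-key environment variable, and model slug (`mistralai/mistral-large-2411`) are assumptions and may differ depending on your provider:

```python
import os

from openai import OpenAI

# Minimal sketch, assuming an OpenAI-compatible provider.
# The base_url, API-key variable, and model slug are placeholders.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # assumed endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],  # hypothetical env var name
)

response = client.chat.completions.create(
    model="mistralai/mistral-large-2411",      # assumed model identifier
    messages=[
        {"role": "system", "content": "Respond with a single JSON object."},
        {"role": "user", "content": "List three languages Mistral Large 2 supports."},
    ],
    response_format={"type": "json_object"},   # request JSON-mode output
)

print(response.choices[0].message.content)
```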

Related Posts

A 7.3B parameter Mamba-based model designed for code and reasoning tasks. Linear-time inference, allowing for theoretically infinite sequence lengths; 256k token context window; optimized for qu ...

Mistral: Codestral Mamba
MistralAI
250K context $0.25/M input tokens $0.25/M output tokens

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to be the latest ...

Mistral: Mistral 7B Instruct
MistralAI
32K context $0.055/M input tokens $0.055/M output tokens

This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch anno ...

Mistral Large 2407
MistralAI
125K context $2/M input tokens $6/M output tokens

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chi ...

Mistral: Mistral Nemo
MistralAI
125K context $0.13/M input tokens $0.13/M output tokens

Cost-efficient, fast, and reliable option for use cases such as translation, summarization, and sentiment analysis. ...

Mistral Small
MistralAI
31.25K context $0.2/M input tokens $0.6/M output tokens

This model is currently powered by Mistral-7B-v0.2, and incorporates a "better" fine-tuning than Mistral 7B, inspired by community work. It's best used for larg ...

Mistral Tiny
MistralAI
31.25K context $0.25/M input tokens $0.25/M output tokens