Mistral: Codestral Mamba

  • 250K context
  • $0.25/M input tokens
  • $0.25/M output tokens

A 7.3B parameter Mamba-based model designed for code and reasoning tasks.

  • Linear time inference, allowing for theoretically infinite sequence lengths
  • 256k token context window
  • Optimized for quick responses, especially beneficial for code productivity
  • Performs comparably to state-of-the-art transformer models in code and reasoning tasks
  • Available under the Apache 2.0 license for free use, modification, and distribution
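Given the listed per-million-token rates, a quick sketch of how request cost works out (the helper name and the example token counts are illustrative, not from the listing):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 0.25, output_rate: float = 0.25) -> float:
    """Return the cost in USD; rates are USD per 1M tokens, as listed above."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# e.g. a 200k-token prompt with a 4k-token completion:
print(f"${estimate_cost(200_000, 4_000):.4f}")  # → $0.0510
```

With input and output priced identically, cost depends only on total tokens, so long-context code-review prompts stay cheap relative to the flagship models listed below.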

Related Posts

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to be the latest ...

Mistral: Mistral 7B Instruct
MistralAI
32K context $0.055/M input tokens $0.055/M output tokens

This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch anno ...

Mistral Large 2407
MistralAI
125K context $2/M input tokens $6/M output tokens

This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch anno ...

Mistral Large 2411
MistralAI
125K context $2/M input tokens $6/M output tokens

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chi ...

Mistral: Mistral Nemo
MistralAI
125K context $0.13/M input tokens $0.13/M output tokens

Cost-efficient, fast, and reliable option for use cases such as translation, summarization, and sentiment analysis. ...

Mistral Small
MistralAI
31.25K context $0.2/M input tokens $0.6/M output tokens

This model is currently powered by Mistral-7B-v0.2, and incorporates a "better" fine-tuning than Mistral 7B, inspired by community work. It's best used for larg ...

Mistral Tiny
MistralAI
31.25K context $0.25/M input tokens $0.25/M output tokens