
Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mi ...

Ministral 3B
Mistral AI
125K context · $0.04/M input tokens · $0.04/M output tokens

Ministral 8B is an 8B parameter model featuring a unique interleaved sliding-window attention pattern for faster, memory-efficient inference. Designed for edge use cases, it supports up ...

Ministral 8B
Mistral AI
125K context · $0.10/M input tokens · $0.10/M output tokens

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to ...

Mistral: Mistral 7B Instruct
Mistral AI
32K context · $0.055/M input tokens · $0.055/M output tokens

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, P ...

Mistral: Mistral Nemo
Mistral AI
125K context · $0.13/M input tokens · $0.13/M output tokens

This model is currently powered by Mistral-7B-v0.2 and incorporates a "better" fine-tuning than Mistral 7B, inspired by community work. It's best ...

Mistral Tiny
Mistral AI
31.25K context · $0.25/M input tokens · $0.25/M output tokens

Mistral's official instruct fine-tuned version of Mixtral 8x22B. It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its siz ...

Mistral: Mixtral 8x22B Instruct
Mistral AI
64K context · $0.90/M input tokens · $0.90/M output tokens
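The per-million-token prices listed above can be turned into a quick per-request cost estimate. A minimal sketch in Python, assuming the prices copied from the listing; the `estimate_cost` helper and the dictionary keys are illustrative, not an official API:

```python
# USD per million tokens (input, output), copied from the listing above.
PRICES = {
    "Ministral 3B": (0.04, 0.04),
    "Ministral 8B": (0.10, 0.10),
    "Mistral 7B Instruct": (0.055, 0.055),
    "Mistral Nemo": (0.13, 0.13),
    "Mistral Tiny": (0.25, 0.25),
    "Mixtral 8x22B Instruct": (0.90, 0.90),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from token counts."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# e.g. 10,000 input + 2,000 output tokens on Ministral 3B
print(f"${estimate_cost('Ministral 3B', 10_000, 2_000):.5f}")  # → $0.00048
```

At these prices, Mixtral 8x22B Instruct costs roughly 22x more per token than Ministral 3B, which is the trade-off the smaller edge-oriented models are aimed at.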