Ministral 3B

  • 125K Context
  • $0.04/M Input Tokens
  • $0.04/M Output Tokens
Model Unavailable

Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on most benchmarks. Supporting up to 128k context length, it’s ideal for orchestrating agentic workflows and specialist tasks with efficient inference.
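Since the blurb highlights function-calling, here is a minimal sketch of what a function-calling request to a model like this might look like, assuming an OpenAI-compatible "tools" request format; the model identifier, tool name, and field layout are assumptions for illustration, not details confirmed by this listing.

```python
import json

# Hypothetical tool schema in the OpenAI-compatible "tools" format that
# many inference gateways accept; get_weather is a made-up example tool.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

request_body = {
    "model": "mistralai/ministral-3b",  # assumed model identifier
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
}

# The body would be POSTed as JSON to the provider's chat-completions endpoint.
print(json.dumps(request_body, indent=2))
```

In this flow the model does not execute `get_weather` itself; it replies with the tool name and arguments, and the orchestrating code runs the function and feeds the result back, which is what makes small models like this useful for agentic workflows.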

Related Models

Ministral 8B is an 8B parameter model featuring a unique interleaved sliding-window attention pattern for faster, memory-efficient inference. Designed for edge use cases, it supports up ...

Ministral 8B
Mistralai
125K context $0.1/M input tokens $0.1/M output tokens
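To illustrate why sliding-window attention saves memory, the sketch below builds a toy attention mask in which each token attends only to itself and the previous few tokens; the window size here is arbitrary and is not the value Ministral 8B actually uses.

```python
# Toy sliding-window attention mask: entry [q][k] is 1 if query token q
# may attend to key token k, i.e. k is within the last `window` positions
# up to and including q. Each row has at most `window` ones, so attention
# cost scales as O(n * window) rather than O(n^2) for sequence length n.
def sliding_window_mask(seq_len, window):
    mask = []
    for q in range(seq_len):
        row = [1 if q - window < k <= q else 0 for k in range(seq_len)]
        mask.append(row)
    return mask

for row in sliding_window_mask(6, window=3):
    print(row)
```

Interleaving such windowed layers with other attention patterns (as the blurb describes) lets information still propagate across the full context over the depth of the network while keeping per-layer memory bounded.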

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to ...

Mistral: Mistral 7B Instruct
Mistralai
32K context $0.055/M input tokens $0.055/M output tokens

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, P ...

Mistral: Mistral Nemo
Mistralai
125K context $0.13/M input tokens $0.13/M output tokens

This model is currently powered by Mistral-7B-v0.2, and incorporates a "better" fine-tuning than Mistral 7B, inspired by community work. It's best ...

Mistral Tiny
Mistralai
31.25K context $0.25/M input tokens $0.25/M output tokens

Mistral's official instruct fine-tuned version of Mixtral 8x22B. It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its siz ...

Mistral: Mixtral 8x22B Instruct
Mistralai
64K context $0.9/M input tokens $0.9/M output tokens

A pretrained generative Sparse Mixture of Experts, by Mistral AI. Incorporates 8 experts (feed-forward networks) for a total of 47B parameters. Base model (not fine-tuned for instructio ...

Mixtral 8x7B (base)
Mistralai
32K context $0.54/M input tokens $0.54/M output tokens
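The Mixtral entries above describe sparse Mixture-of-Experts models, where only a subset of experts runs per token. The sketch below shows top-2 routing over 8 experts in plain Python; the gate scores are made up, and real models compute them with a learned linear layer over the token's hidden state.

```python
import math

# Toy sketch of Mixtral-style sparse MoE routing: a gate scores all 8
# experts for a token, only the 2 highest-scoring experts are executed,
# and their outputs are mixed with softmax-renormalized gate weights.
# This is why only a fraction of the total parameters is active per token.
def top2_route(gate_scores):
    top2 = sorted(range(len(gate_scores)), key=lambda i: -gate_scores[i])[:2]
    exp = [math.exp(gate_scores[i]) for i in top2]
    total = sum(exp)
    return [(i, w / total) for i, w in zip(top2, exp)]

scores = [0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3]  # one token, 8 experts
routing = top2_route(scores)
print(routing)  # two (expert_index, weight) pairs; weights sum to 1
```

With this scheme a token's feed-forward cost is that of two experts rather than eight, which is how a 47B- or 141B-parameter model can serve at the latency of a much smaller dense model.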