Mistral: Mistral Nemo

  • 125K Context
  • $0.13/M Input Tokens
  • $0.13/M Output Tokens
Model Unavailable
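For a sense of scale (the token counts here are arbitrary, chosen only to illustrate the per-million pricing above): a single request with 8,000 input tokens and 1,000 output tokens would cost roughly 8,000 × $0.13/1,000,000 + 1,000 × $0.13/1,000,000 ≈ $0.00117.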

A 12B-parameter model with a 128k-token context length, built by Mistral in collaboration with NVIDIA.

The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi.

It supports function calling and is released under the Apache 2.0 license.
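As a rough sketch of what function calling with this model could look like through an OpenAI-compatible chat-completions API. The endpoint URL, the model ID `mistralai/mistral-nemo`, the `get_weather` tool, and the request shape below are assumptions for illustration only, not details taken from this listing; check your provider's documentation.

```python
# Minimal function-calling sketch (assumed endpoint, model ID, and tool schema).
import json
import os

import requests

API_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed endpoint

# A hypothetical tool the model may choose to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
    json={
        "model": "mistralai/mistral-nemo",  # assumed model ID
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": tools,
    },
    timeout=60,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]

# If the model decided to call the tool, its arguments arrive as a JSON string.
for call in message.get("tool_calls", []):
    print(call["function"]["name"], json.loads(call["function"]["arguments"]))
```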

Related Models

Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mi ...

Ministral 3B
Mistralai
125K context · $0.04/M input tokens · $0.04/M output tokens

Ministral 8B is an 8B parameter model featuring a unique interleaved sliding-window attention pattern for faster, memory-efficient inference. Designed for edge use cases, it supports up ...

Ministral 8B
Mistralai
125K context · $0.1/M input tokens · $0.1/M output tokens

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to ...

Mistral: Mistral 7B Instruct
Mistralai
32K context · $0.055/M input tokens · $0.055/M output tokens

This model is currently powered by Mistral-7B-v0.2, and incorporates a "better" fine-tuning than Mistral 7B, inspired by community work. It's best ...

Mistral Tiny
Mistralai
31.25K context · $0.25/M input tokens · $0.25/M output tokens

Mistral's official instruct fine-tuned version of Mixtral 8x22B. It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its siz ...

Mistral: Mixtral 8x22B Instruct
Mistralai
64K context · $0.9/M input tokens · $0.9/M output tokens

A pretrained generative Sparse Mixture of Experts, by Mistral AI. Incorporates 8 experts (feed-forward networks) for a total of 47B parameters. Base model (not fine-tuned for instructio ...

Mixtral 8x7B (base)
Mistralai
32K context · $0.54/M input tokens · $0.54/M output tokens