Mistral: Mistral 7B Instruct
- 32K Context
- $0.055/M input tokens
- $0.055/M output tokens
- MistralAI
- Text-to-text
- 02 Dec, 2024
A high-performing, industry-standard 7.3B-parameter model, optimized for speed and context length.
Mistral 7B Instruct has multiple version variants; this listing is intended to always point to the latest version.
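Because the listing tracks the latest variant, requests can target the un-versioned model ID rather than a pinned release. Below is a minimal sketch, assuming an OpenRouter-style, OpenAI-compatible chat completions endpoint and the slug `mistralai/mistral-7b-instruct`; the endpoint URL, model slug, and `OPENROUTER_API_KEY` environment variable are assumptions for illustration, not details stated in the listing above.

```python
# Minimal sketch: call the un-versioned model ID so the request resolves to the
# latest Mistral 7B Instruct variant. Endpoint and slug are assumed (OpenRouter-style).
import os
import requests

API_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed endpoint
MODEL_ID = "mistralai/mistral-7b-instruct"                 # un-versioned slug -> latest variant

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": MODEL_ID,
        "messages": [
            {"role": "user", "content": "Summarize the Mistral 7B architecture in two sentences."}
        ],
        "max_tokens": 256,  # well within the 32K context window
    },
    timeout=60,
)
resp.raise_for_status()
data = resp.json()
print(data["choices"][0]["message"]["content"])

# Rough cost estimate at $0.055 per million tokens for both input and output.
usage = data.get("usage", {})
total_tokens = usage.get("prompt_tokens", 0) + usage.get("completion_tokens", 0)
print(f"Approximate cost: ${total_tokens * 0.055 / 1_000_000:.6f}")
```

At these rates, a request using roughly 1,500 tokens in total costs on the order of $0.0001, which is what the cost estimate above prints from the returned usage fields.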