Mistral: Codestral Mamba
- 256K Context
- 0.25/M Input Tokens
- 0.25/M Output Tokens
- MistralAI
- Text-to-text
- Dec 2, 2024
A 7.3B-parameter Mamba-based model designed for code and reasoning tasks.
- Linear-time inference, allowing for theoretically infinite sequence lengths
- 256k token context window
- Optimized for quick responses, especially beneficial for code productivity
- Performs comparably to state-of-the-art transformer models in code and reasoning tasks
- Available under the Apache 2.0 license for free use, modification, and distribution
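Since the model is typically served behind an OpenAI-compatible chat-completions API, a request can be sketched as below. This is a minimal sketch only: the base URL and the exact model identifier are assumptions that vary by provider, so check your provider's documentation before sending real requests.

```python
import json

# Hypothetical endpoint and model ID -- both are assumptions, not confirmed
# values; substitute the ones your provider documents.
BASE_URL = "https://api.example.com/v1/chat/completions"
MODEL_ID = "mistralai/codestral-mamba"

def build_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build the JSON body for a single-turn code-generation request."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
```

The same payload shape works for longer conversations by appending alternating `user` and `assistant` messages to the `messages` list; the large context window means long code files can be included in the prompt without truncation.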