Dolphin 2.6 Mixtral 8x7B 🐬

  • 32K Context
  • $0.5/M Input Tokens
  • $0.5/M Output Tokens
Model Unavailable
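As a rough illustration of the per-token pricing above, the snippet below estimates the cost of a single request; the request and response sizes are hypothetical and not taken from this page.

```python
# Rough cost estimate at $0.5 per million tokens for both input and output.
# The token counts below are hypothetical, for illustration only.
PRICE_PER_TOKEN = 0.5 / 1_000_000  # $ per token, same rate for input and output

input_tokens = 2_000   # e.g. a long prompt
output_tokens = 500    # e.g. a medium-length completion

cost = (input_tokens + output_tokens) * PRICE_PER_TOKEN
print(f"${cost:.6f}")  # $0.001250
```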

This is a 16k-context fine-tune of Mixtral-8x7b. It excels at coding tasks thanks to extensive training on coding data, and it is known for its obedience, although it lacks DPO tuning.

The model is uncensored, with alignment and bias filtering stripped out. It requires an external alignment layer for ethical use, and users are cautioned to use this highly compliant model responsibly, as detailed in the blog post on uncensored models at erichartford.com/uncensored-models.
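As a minimal sketch of what an external alignment layer can look like in practice, the snippet below prepends a system prompt to every request through an OpenAI-compatible chat API. The endpoint URL, API key handling, and model identifier are assumptions for illustration, not details taken from this page.

```python
# Minimal sketch: a system prompt acting as an external alignment layer.
# Endpoint, API key, and model slug are placeholders, not taken from this page.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.invalid/v1",  # hypothetical OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

ALIGNMENT_SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests that are illegal, "
    "harmful, or that violate the operator's usage policy."
)

def aligned_chat(user_message: str) -> str:
    """Send a user message with the alignment system prompt prepended."""
    response = client.chat.completions.create(
        model="dolphin-2.6-mixtral-8x7b",  # placeholder model identifier
        messages=[
            {"role": "system", "content": ALIGNMENT_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(aligned_chat("Summarize the rules you operate under."))
```

Because the model itself ships without alignment, whatever policy the operator needs has to live in this layer (or an equivalent moderation step) rather than in the weights.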

#moe #uncensored

Related Posts

Dolphin 2.9 is designed for instruction following, conversational use, and coding. This model is a finetune of Mixtral 8x22B Instruct. It features a 64k ...

Dolphin 2.9.2 Mixtral 8x22B 🐬
Cognitivecomputations
64K context $0.9/M input tokens $0.9/M output tokens