OpenChat 3.5 7B

  • 8K Context
  • $0.055/M Input Tokens
  • $0.055/M Output Tokens
Model Unavailable

OpenChat 7B is a family of open-source language models fine-tuned with C-RLFT (Conditioned Reinforcement Learning Fine-Tuning), a strategy inspired by offline reinforcement learning. It is trained on mixed-quality data without preference labels.
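The core idea of C-RLFT can be illustrated as conditioning each training conversation on a tag identifying its data source, and weighting its loss by a coarse class-level reward instead of per-example preference labels. The sketch below is illustrative only: the tag names and reward weights are assumptions for demonstration, not OpenChat's actual training configuration.

```python
# Sketch of C-RLFT-style data conditioning: prefix each turn with a tag
# naming the data source, and attach a coarse class-level reward weight.
# Tag strings and weights here are illustrative assumptions.

CLASS_REWARDS = {"expert": 1.0, "suboptimal": 0.1}  # coarse reward per source class

def condition(messages, source):
    """Return a tag-conditioned prompt string and its class reward weight."""
    tag = {"expert": "GPT4 Correct", "suboptimal": "GPT3"}[source]
    turns = [f"{tag} {m['role'].capitalize()}: {m['content']}" for m in messages]
    # Turns are joined with an end-of-turn separator token.
    return "<|end_of_turn|>".join(turns), CLASS_REWARDS[source]

prompt, weight = condition(
    [{"role": "user", "content": "Hi"},
     {"role": "assistant", "content": "Hello!"}],
    "expert",
)
```

During fine-tuning, the per-example loss would be scaled by `weight`, so high-quality sources dominate the gradient while lower-quality data still contributes signal.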

  • For OpenChat fine-tuned on Mistral 7B, check out OpenChat 7B.
  • For OpenChat fine-tuned on Llama 3 8B, check out OpenChat 8B.

#open-source
