
Phi-3 Mini 128K Instruct

  • 125K Context
  • $0.1/M Input Tokens
  • $0.1/M Output Tokens (see the cost sketch below)
Model Unavailable
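Pricing above is quoted per million tokens, so the cost of one request is just input_tokens × input_rate plus output_tokens × output_rate, each divided by one million. The sketch below is a minimal illustration of that arithmetic using the rates listed for this model; the function name and the example token counts are hypothetical.

```python
# Illustrative cost arithmetic for per-million-token pricing.
# Default rates mirror this listing ($0.1/M in, $0.1/M out); the
# function and example token counts are hypothetical.
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate_per_m: float = 0.1,
                 output_rate_per_m: float = 0.1) -> float:
    """Return the USD cost of one request at the given per-million-token rates."""
    return (input_tokens * input_rate_per_m
            + output_tokens * output_rate_per_m) / 1_000_000

# e.g. a 4,000-token prompt with a 500-token reply:
print(f"${request_cost(4_000, 500):.6f}")  # $0.000450
```

At these rates, even a prompt that fills the full 125K-token context costs about $0.0125 on the input side.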

Phi-3 Mini is a powerful 3.8B parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, it excels in tasks involving common sense, mathematics, logical reasoning, and code processing.

At the time of release, Phi-3 Mini demonstrated state-of-the-art performance among lightweight models. The model is static, trained on an offline dataset with an October 2023 cutoff date.
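As an illustrative sketch only (this listing marks the model as unavailable, so nothing below is specific to any hosted endpoint), the model can also be run locally through Hugging Face transformers. The snippet assumes the public microsoft/Phi-3-mini-128k-instruct checkpoint and a recent transformers release, neither of which is specified on this page.

```python
# Minimal local-inference sketch, assuming the public
# "microsoft/Phi-3-mini-128k-instruct" checkpoint on Hugging Face
# (not specified by this listing) and a recent transformers install.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Chat-style prompt; apply_chat_template wraps it in the model's instruct tokens.
messages = [{"role": "user", "content": "Give a one-sentence definition of logical reasoning."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because this is an instruction-tuned model, routing the prompt through the chat template (rather than raw text) is what keeps generations in the expected instruct format.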

Related Posts

Phi-3 128K Medium is a powerful 14-billion parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning a ...

Phi-3 Medium 128K Instruct
Microsoft
125K context $1/M input tokens $1/M output tokens

Phi-3.5 models are lightweight, state-of-the-art open models. These models were trained with Phi-3 datasets that include both synthetic data and the filtered, publicly available website ...

Phi-3.5 Mini 128K Instruct
Microsoft
125K context $0.1/M input tokens $0.1/M output tokens

WizardLM-2 7B is the smaller variant of Microsoft AI's latest Wizard model. It is the fastest and achieves comparable performance with leading open-source models 10x its size. It ...

WizardLM-2 7B
Microsoft
31.25K context $0.055/M input tokens $0.055/M output tokens

WizardLM-2 8x22B is Microsoft AI's most advanced Wizard model. It demonstrates highly competitive performance compared to leading proprietary models, and it consistently outperforms all ...

WizardLM-2 8x22B
Microsoft
64K context $0.5/M input tokens $0.5/M output tokens