Phi-3 Mini 128K Instruct

  • 125K Context
  • $0.1/M Input Tokens
  • $0.1/M Output Tokens
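For example, at these rates a request that sends 100,000 input tokens and receives 2,000 output tokens costs roughly (0.1 × 0.1) + (0.1 × 0.002) ≈ $0.0102.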

Phi-3 Mini is a powerful 3.8B parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, it excels in tasks involving common sense, mathematics, logical reasoning, and code processing.

At the time of release, Phi-3 Mini demonstrated state-of-the-art performance among lightweight models. This model is static, trained on an offline dataset with an October 2023 cutoff date.
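On many hosting providers, models like this one are exposed through an OpenAI-compatible chat completions API. The sketch below shows what such a call might look like; the base URL, environment-variable name, and model identifier are illustrative assumptions, so substitute the values your provider documents.

```python
# Minimal sketch: calling Phi-3 Mini 128K Instruct via an OpenAI-compatible
# chat completions endpoint. Base URL, API-key variable, and model ID are
# placeholders, not values taken from this page.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/api/v1",  # hypothetical endpoint
    api_key=os.environ["PROVIDER_API_KEY"],          # hypothetical key variable
)

response = client.chat.completions.create(
    model="microsoft/phi-3-mini-128k-instruct",      # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Phi-3 model family in two sentences."},
    ],
    max_tokens=256,
)

# Print the assistant's reply from the first choice.
print(response.choices[0].message.content)
```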

Related Posts

Phi-3 128K Medium is a powerful 14-billion parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference ...

Phi-3 Medium 128K Instruct
Microsoft Azure
125K context $1/M input tokens $1/M output tokens

Phi-3.5 models are lightweight, state-of-the-art open models. These models were trained with the Phi-3 datasets, which include both synthetic data and filtered, publicly available website data, with a ...

Phi-3.5 Mini 128K Instruct
Microsoft Azure
125K context $0.1/M input tokens $0.1/M output tokens

Microsoft Research Phi-4 is designed to perform well in complex reasoning tasks and can operate efficiently in situations with limited memory or where quick responses are needed. At 1 ...

Microsoft: Phi 4
Microsoft Azure
16K context $0.07/M input tokens $0.14/M output tokens

WizardLM-2 7B is the smaller variant of Microsoft AI's latest Wizard model. It is the fastest and achieves performance comparable to leading open-source models 10x its size. It is a finetune ...

WizardLM-2 7B
Microsoft Azure
31.25K context $0.055/M input tokens $0.055/M output tokens