Qwen 2 7B Instruct


  • 32K Context
  • $0.054/M Input Tokens
  • $0.054/M Output Tokens
Model Unavailable
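Per-million-token rates like those listed above translate into a per-request cost with simple arithmetic. A minimal sketch, using the listed rates and hypothetical token counts:

```python
def request_cost(input_tokens, output_tokens,
                 input_rate=0.054, output_rate=0.054):
    # Rates are USD per million tokens, as listed above.
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token completion:
cost = request_cost(10_000, 2_000)
print(f"${cost:.6f}")  # $0.000648
```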

Qwen2 7B is a transformer-based model that excels in language understanding, multilingual capabilities, coding, mathematics, and reasoning.

It features SwiGLU activation, attention QKV bias, and group query attention. It was pretrained on extensive data and further aligned with supervised finetuning and direct preference optimization.
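As a rough illustration of the SwiGLU gated unit mentioned above, here is a pure-Python sketch with toy dimensions and identity weights; it is not Qwen2's actual feed-forward block, whose projection sizes differ:

```python
import math

def silu(x):
    # SiLU / Swish-1 activation: x * sigmoid(x)
    return x * (1.0 / (1.0 + math.exp(-x)))

def swiglu(x, W, V):
    # SwiGLU gated feed-forward unit: SiLU(x @ W) * (x @ V), elementwise.
    # x is a vector of d_in floats; W and V are d_in x d_out weight matrices
    # given as nested lists. Dimensions here are illustrative only.
    gate = [silu(sum(xi * wij for xi, wij in zip(x, col))) for col in zip(*W)]
    value = [sum(xi * vij for xi, vij in zip(x, col)) for col in zip(*V)]
    return [g * v for g, v in zip(gate, value)]

# Tiny 2 -> 2 example with identity weight matrices:
out = swiglu([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
print(out)  # first element is silu(1.0) * 1.0 ≈ 0.731
```

The gate path (through `W`) decides how much of the value path (through `V`) passes, which is what distinguishes SwiGLU from a plain activation applied to a single projection.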

For more details, see this blog post and GitHub repo.

Usage of this model is subject to the Tongyi Qianwen LICENSE AGREEMENT.

Related Posts

Qwen2 VL 72B is a multimodal LLM from the Qwen Team with the following key enhancements: SoTA understanding of images of various resolution & ratio: Qwen2-VL achieves state-of-the-ar...

Qwen2-VL 72B Instruct
Qwen
32K context $0.4/M input tokens $0.4/M output tokens $0.578/K image tokens

Qwen2 VL 7B is a multimodal LLM from the Qwen Team with the following key enhancements: SoTA understanding of images of various resolution & ratio: Qwen2-VL achieves state-of-the-art...

Qwen2-VL 7B Instruct
Qwen
32K context $0.1/M input tokens $0.1/M output tokens $0.144/K image tokens

Qwen2.5 72B is the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2: Significantly more knowledge and has greatly improved capabiliti...

Qwen2.5 72B Instruct
Qwen
128K context $0.35/M input tokens $0.4/M output tokens

Qwen2.5 7B is the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2: Significantly more knowledge and has greatly improved capabilitie...

Qwen2.5 7B Instruct
Qwen
128K context $0.27/M input tokens $0.27/M output tokens

Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: Signifi...

Qwen2.5 Coder 32B Instruct
Qwen
32K context $0.18/M input tokens $0.18/M output tokens