Qwen2.5 Coder 32B Instruct

  • 32K context
  • $0.18/M input tokens
  • $0.18/M output tokens
Model Unavailable
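
At these rates, cost scales linearly with token counts on both sides of a request. A minimal sketch of the arithmetic in Python (the estimate_cost helper is hypothetical; the rates are taken from the listing above):

```python
# Hypothetical cost helper; the rates come from the listing above and are
# not part of any official SDK.
INPUT_RATE = 0.18 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.18 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough dollar cost of one request at the listed per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 2,000-token prompt with a 500-token completion
print(f"${estimate_cost(2_000, 500):.6f}")  # prints $0.000450
```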

Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:

  • Significant improvements in code generation, code reasoning, and code fixing.
  • A more comprehensive foundation for real-world applications such as Code Agents, enhancing coding capabilities while maintaining strengths in mathematics and general competencies.

To read more about its evaluation results, check out Qwen 2.5 Coder’s blog.
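
Where a provider serves this model behind an OpenAI-compatible endpoint, a chat-completion request looks like the sketch below. The base URL and API key are placeholders, and the exact model identifier varies by provider; "Qwen/Qwen2.5-Coder-32B-Instruct" is the Hugging Face model ID.

```python
# Minimal sketch, assuming an OpenAI-compatible endpoint serving this model.
# The base_url and api_key below are placeholders, not real credentials.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                      # placeholder key
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",  # identifier may differ per provider
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."}
    ],
)
print(response.choices[0].message.content)
```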

Related Posts

Qwen 2 7B Instruct
Qwen
32K context $0.054/M input tokens $0.054/M output tokens

Qwen2 7B is a transformer-based model that excels in language understanding, multilingual capabilities, coding, mathematics, and reasoning. It features SwiGLU activation, attention QKV ...

Qwen2-VL 72B Instruct
Qwen
32K context $0.4/M input tokens $0.4/M output tokens $0.578/K image tokens

Qwen2 VL 72B is a multimodal LLM from the Qwen Team with the following key enhancements: SoTA understanding of images of various resolution & ratio: Qwen2-VL achieves state-of-the-ar...

Qwen2-VL 7B Instruct
Qwen
32K context $0.1/M input tokens $0.1/M output tokens $0.144/K image tokens

Qwen2 VL 7B is a multimodal LLM from the Qwen Team with the following key enhancements: SoTA understanding of images of various resolution & ratio: Qwen2-VL achieves state-of-the-art...

Qwen2.5 72B Instruct
Qwen
128K context $0.35/M input tokens $0.4/M output tokens

Qwen2.5 72B is the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2: Significantly more knowledge and has greatly improved capabiliti...

Qwen2.5 7B Instruct
Qwen
128K context $0.27/M input tokens $0.27/M output tokens

Qwen2.5 7B is the latest series of Qwen large language models. Qwen2.5 brings the following improvements upon Qwen2: Significantly more knowledge and has greatly improved capabilitie...