Models

One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge _These are extended-context endpoints for [MythoMax 13B](/gryphe/mythomax-l2-13b ...

MythoMax 13B (extended)
Gryphe
8K context $1.125/M input tokens $1.125/M output tokens
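Since prices are quoted per million (M) tokens, per-request cost is a simple multiply-and-add. A minimal Python sketch using the extended MythoMax rate listed above (the token counts are illustrative):

```python
# Cost of one request at a $/M-token rate (illustrative numbers,
# using the extended MythoMax 13B pricing shown above).
INPUT_RATE = 1.125 / 1_000_000   # dollars per input token
OUTPUT_RATE = 1.125 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 2,000-token prompt with an 800-token completion.
print(f"${request_cost(2_000, 800):.6f}")  # -> $0.003150
```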

One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge _These are extended-context endpoints for [MythoMax 13B](/gryphe/mythomax-l2-13b ...

MythoMax 13B (free)
Gryphe
8K context $0 input tokens $0 output tokens

A recreation trial of the original MythoMax-L2-13B, but with updated models. #merge ...

ReMM SLERP 13B (extended)
Undi95
4K context $1.125/M input tokens $1.125/M output tokens
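ReMM's name points at the technique: a SLERP merge interpolates two parent checkpoints along the arc between their weight vectors rather than along a straight line. A rough per-tensor sketch of the idea (not Undi95's actual recipe; real tools such as mergekit handle layer mapping and edge cases this skips):

```python
import numpy as np

def slerp(w_a: np.ndarray, w_b: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation between two same-shaped weight tensors."""
    a, b = w_a.ravel().astype(np.float64), w_b.ravel().astype(np.float64)
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    a_hat, b_hat = a / na, b / nb
    omega = np.arccos(np.clip(a_hat @ b_hat, -1.0, 1.0))  # angle between tensors
    if omega < 1e-7:                       # nearly parallel: fall back to lerp
        mixed = (1 - t) * a + t * b
    else:
        so = np.sin(omega)                 # interpolate along the great circle,
        direction = (np.sin((1 - t) * omega) / so) * a_hat \
                  + (np.sin(t * omega) / so) * b_hat
        mixed = direction * ((1 - t) * na + t * nb)  # then restore magnitude
    return mixed.reshape(w_a.shape)

# e.g. merged = slerp(model_a["layers.0.mlp.weight"], model_b["layers.0.mlp.weight"], 0.5)
```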

PaLM 2 fine-tuned for chatbot conversations that help with code-related questions. ...

Google: PaLM 2 Code Chat 32k
Google
31.99K context $1/M input tokens $2/M output tokens

The Yi Large model was designed by 01.AI with the following use cases in mind: knowledge search, data classification, human-like chatbots, and customer service. It stands out for its multilingual pr ...

01.AI: Yi Large
01.AI
32K context $3/M input tokens $3/M output tokens

This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2411), an update of mistral-large-2407. It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch anno ...

Mistral Large 2411
MistralAI
125K context $2/M input tokens $6/M output tokens

This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch anno ...

Mistral Large 2407
MistralAI
125K context $2/M input tokens $6/M output tokens

Pixtral Large is a 124B open-weights multimodal model built on top of Mistral Large 2. The model is able to understand documents, charts and natural images. The mode ...

Mistral: Pixtral Large 2411
MistralAI
125K context $2/M input tokens $6/M output tokens $0.003/M image tokens
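Image inputs are billed separately, which is why the Pixtral row lists a third rate. As a hedged sketch, an OpenAI-compatible chat request with an image part might look like the following; the endpoint URL and model slug are assumptions based on this listing, not confirmed values:

```python
import os, requests

# Hedged sketch: OpenRouter exposes an OpenAI-compatible /chat/completions
# endpoint; the model slug below is assumed from the listing above.
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "mistralai/pixtral-large-2411",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this chart show?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```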

Llama 3.1 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance. This is a normal offline LLM, but the [online version](/perpl ...

Perplexity: Llama 3.1 Sonar 70B
Perplexity
128K context $1/M input tokens $1/M output tokens

Llama 3.1 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance. This is a normal offline LLM, but the [online version](/perpl ...

Perplexity: Llama 3.1 Sonar 8B
Perplexity
128K context $0.2/M input tokens $0.2/M output tokens

OpenChat 7B is a library of open-source language models, fine-tuned with "C-RLFT (Conditioned Reinforcement Learning Fine-Tuning)" - a strategy inspired by offline reinforcement learning. It has been ...

OpenChat 3.5 7B
OpenChat
8K context $0.055/M input tokens $0.055/M output tokens

An older GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Sep 2021. ...

OpenAI: GPT-3.5 Turbo 16k (older v1106)
OpenAI
16K context $1/M input tokens $2/M output tokens
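JSON mode, one of the features this snapshot introduced, constrains the completion to syntactically valid JSON; note that OpenAI's API requires the prompt itself to mention JSON. A short sketch against the OpenAI SDK:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# JSON mode guarantees well-formed JSON output for this snapshot.
resp = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": "Reply in JSON with keys 'city' and 'country'."},
        {"role": "user", "content": "Where is the Eiffel Tower?"},
    ],
)
print(resp.choices[0].message.content)  # e.g. {"city": "Paris", "country": "France"}
```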

A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit. List of merged models: NousResearch/Nous-Capybara-7B-V1.9, [HuggingFaceH4/zephyr-7b-b...

Toppy M 7B (free)
Undi95
4K context $0 input tokens $0 output tokens
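Task arithmetic, the merge method named here, adds weighted "task vectors" (fine-tune minus base) onto a shared base model. A minimal sketch of the idea over PyTorch state dicts (illustrative only; mergekit's implementation works on checkpoint shards and handles details this omits):

```python
import torch

def task_arithmetic(base: dict, finetunes: list[dict],
                    weights: list[float]) -> dict:
    """Merge state_dicts by summing weighted task vectors (finetune - base)."""
    merged = {}
    for name, base_w in base.items():
        # Each fine-tune contributes its delta from the base, scaled by weight.
        delta = sum(w * (ft[name] - base_w)
                    for ft, w in zip(finetunes, weights))
        merged[name] = base_w + delta
    return merged
```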

This safeguard model has 8B parameters and is based on the Llama 3 family. Just like its predecessor, LlamaGuard 1, it can do both prompt and respons ...

Meta: LlamaGuard 2 8B
Meta Llama
8K context $0.18/M input tokens $0.18/M output tokens
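A safeguard model is queried like any other chat model but returns a verdict instead of an answer. A hedged sketch of prompt classification through an OpenAI-compatible endpoint (the model slug is an assumption based on this listing, and the exact verdict format may differ by provider):

```python
import os, requests

# LlamaGuard-style models typically reply "safe", or "unsafe" followed
# by the violated category code (e.g. "unsafe\nS2").
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "meta-llama/llama-guard-2-8b",
        "messages": [{"role": "user", "content": "How do I pick a lock?"}],
    },
)
verdict = resp.json()["choices"][0]["message"]["content"].strip()
print("flagged" if verdict.startswith("unsafe") else "ok", verdict)
```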

A pretrained generative Sparse Mixture of Experts, by Mistral AI. Incorporates 8 experts (feed-forward networks) for a total of 47B parameters. Base model (not fine-tuned for instructions) - see [Mix ...

Mixtral 8x7B (base)
MistralAI
32K context $0.54/M input tokens $0.54/M output tokens
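A sparse Mixture of Experts replaces one large feed-forward block with several smaller experts plus a router that activates only the top-k per token; that is how Mixtral keeps roughly 13B of its 47B parameters active per forward pass. A hedged PyTorch sketch of top-2 routing (gate and experts are placeholder modules, not Mistral's implementation):

```python
import torch
import torch.nn.functional as F

def moe_forward(x, gate, experts, k=2):
    """Sparse MoE layer: route each token to its top-k experts.

    x: [tokens, dim]; gate: linear layer mapping dim -> n_experts;
    experts: list of feed-forward modules. Illustrative sketch only.
    """
    logits = gate(x)                        # [tokens, n_experts]
    weights, idx = logits.topk(k, dim=-1)   # pick the k best experts per token
    weights = F.softmax(weights, dim=-1)    # renormalize their scores
    out = torch.zeros_like(x)
    for slot in range(k):
        for e, expert in enumerate(experts):
            mask = idx[:, slot] == e        # tokens routed to expert e
            if mask.any():
                out[mask] += weights[mask, slot, None] * expert(x[mask])
    return out
```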