Models
One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge _These are extended-context endpoints for [MythoMax 13B](/gryphe/mythomax-l2-13b ...
A recreation trial of the original MythoMax-L2-13B, but with updated models. #merge ...
PaLM 2 fine-tuned for chatbot conversations that help with code-related questions. ...
The Yi Large model was designed by 01.AI with the following use cases in mind: knowledge search, data classification, human-like chatbots, and customer service. It stands out for its multilingual pr ...
This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more. Read the launch anno ...
Pixtral Large is a 124B open-weights multimodal model built on top of Mistral Large 2. The model is able to understand documents, charts and natural images. The mode ...
Llama 3.1 Sonar is Perplexity's latest model family. It surpasses their earlier Sonar models in cost-efficiency, speed, and performance. This is a normal offline LLM, but the [online version](/perpl ...
OpenChat 7B is a library of open-source language models, fine-tuned with "C-RLFT (Conditioned Reinforcement Learning Fine-Tuning)" - a strategy inspired by offline reinforcement learning. It has been ...
An older GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Sep 2021. ...
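The JSON mode and reproducible-output features called out above map to standard chat-completions parameters. A minimal sketch, assuming the OpenAI Python SDK and the `gpt-3.5-turbo-1106` snapshot slug (an assumption for illustration, not taken from this listing):

```python
# Minimal sketch: exercising JSON mode and best-effort reproducible outputs
# via the OpenAI Python SDK. The model slug and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",  # assumed slug for the older snapshot
    messages=[
        {"role": "system", "content": "Reply with a JSON object."},
        {"role": "user", "content": 'List three colors as {"colors": [...]}'},
    ],
    response_format={"type": "json_object"},  # JSON mode
    seed=42,                                  # reproducible outputs (best effort)
)
print(resp.choices[0].message.content)
```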
A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit. List of merged models: NousResearch/Nous-Capybara-7B-V1.9, [HuggingFaceH4/zephyr-7b-b...
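Task-arithmetic merging, as referenced here, adds weighted "task vectors" (each fine-tune's parameters minus the shared base model's) back onto the base. A minimal sketch of that arithmetic, assuming plain NumPy arrays rather than mergekit's own configuration format; the weights and toy parameters are illustrative:

```python
# Minimal sketch, not mergekit itself: task-arithmetic merging combines models
# by adding weighted "task vectors" (fine-tune minus base) onto a shared base.
# Shapes, weights, and the two-model setup are illustrative assumptions.
import numpy as np

def task_arithmetic_merge(base, finetunes, weights):
    """base / finetunes: dicts of parameter name -> np.ndarray (matching shapes)."""
    merged = {}
    for name, base_param in base.items():
        delta = sum(w * (ft[name] - base_param)
                    for ft, w in zip(finetunes, weights))
        merged[name] = base_param + delta
    return merged

# Toy example with a single 2x2 "layer"
base = {"layer.weight": np.zeros((2, 2))}
ft_a = {"layer.weight": np.ones((2, 2))}
ft_b = {"layer.weight": np.full((2, 2), 2.0)}
merged = task_arithmetic_merge(base, [ft_a, ft_b], weights=[0.5, 0.25])
print(merged["layer.weight"])  # 0.5*1 + 0.25*2 = 1.0 everywhere
```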
This safeguard model has 8B parameters and is based on the Llama 3 family. Just like its predecessor, LlamaGuard 1, it can do both prompt and respons ...
A pretrained generative Sparse Mixture of Experts, by Mistral AI. Incorporates 8 experts (feed-forward networks) for a total of 47B parameters. Base model (not fine-tuned for instructions) - see [Mix ...
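For readers unfamiliar with sparse mixture-of-experts layers, the sketch below shows the routing idea the description alludes to: a router scores the eight feed-forward experts and each token is processed only by its top-scoring few (Mixtral-style top-2 routing is assumed here). This is a toy illustration, not Mistral's implementation, and all dimensions are made up:

```python
# Minimal sketch of a sparse mixture-of-experts layer in the spirit of the
# description above (8 feed-forward experts, top-2 routing per token).
# Dimensions and the top-k choice are assumptions, not Mistral's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                      # x: (tokens, d_model)
        logits = self.router(x)                # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over selected experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e          # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

moe = SparseMoE()
print(moe(torch.randn(4, 64)).shape)           # torch.Size([4, 64])
```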
Categories
Tags
- Multilingual chatbots
- Data classification
- Machine learning
- 01 ai
- Natural language processing
- Programming
- Customer support ai
- Data science
- Chatbots
- Yi large
- Knowledge retrieval
- Generative ai
- Ssm transformer
- Jamba 1 5 large
- Resource efficiency
- Ai21
- Document summarization
- Technology
- Context window
- Mamba based model
- Multilingual analysis
- 256k context window
- Jamba 1 5 mini
- Jamba
- Instruction tuning
- Enterprise optimization
- Large document processing
- Safety features
- Goliath 120b
- Model merging
- Large language model
- Mergekit
- Fine tuned llama
- Alpindale
- Prose quality
- Claude 3 alternative
- Qwen2 based
- Magnum 72b
- Roleplay data
- Roleplay
- New
- Document analysis
- Visual question answering
- Multimodal processing
- Amazon
- Real time interaction
- Nova lite v1
- Computer vision
- Translation
- Interactive chat model
- Nova micro v1
- Low latency text
- Text summarization tool
- Cost effective nlp
- Financial document processing
- Nova pro v1
- Video understanding
- Multimodal analysis
- Claude 3 quality
- Qwen 25 integration
- Magnum v4 72b
- Fine tuned model
- Anthracite org
- Prose generation
- Instant responsiveness
- Compact model
- Targeted performance
- Multimodal
- Anthropic
- Claude 3 haiku
- Deep understanding
- Complex task solving
- Claude 3 opus
- Advanced intelligence
- High fluency
- Claude 3 sonnet
- Scaled deployments
- Enterprise workloads
- Cost effective ai
- Benchmark results
- Real time moderation
- Claude 35 haiku
- Code completion
- Rapid response
- Data extraction
- Predictive analytics
- Data science expertise
- Visual processing
- Agentic tasks
- Claude 35 sonnet
- Autonomous systems
- Autonomous coding
- Text generation
- Baichuan3 turbo
- Baichuan
- Technologyweb
- Conversational systems
- Multilingual support
- Baichuan4
- Contextual understanding
- Conversational ai
- Ethics
- Multilingual ai
- High throughput
- Hardware efficiency
- Cohere
- Command r plus
- Low latency
- Performance upgrade
- Command r
- Complex workflows
- Conversational language tasks
- Retrieval augmented generation
- Code generation
- Instruction following
- Programmingscripting
- Long context
- Language tasks
- Command
- Language understanding
- Databricks
- Code pre training
- Dbrx
- Mixture of experts
- Multi token prediction
- Deepseek chat v3
- Deepseek
- Hot
- Load balancing strategy
- Multi head latent attention
- Mit licensed software
- Commercial use ai model
- Open source ai model
- Deepseek r1
- Technical report ai
- Voice assistants
- Free
- Discount
- Eva unit 01
- Roleplay model
- Eva llama 333 70b
- Creative finetune
- Narrative generation
- Storywriting ai
- Open source
- Robotics
- Complex instructions
- Fast ttft
- Gemini 20 flash exp
- Coding capabilities
- Multimodal understanding
- Thinking process generation
- Advanced reasoning
- Reasoning capabilities
- Gemini 20 flash thinking exp
- Experimental ai model
- Real time processing
- Gemini flash 15 8b
- Chat transcription
- Cost effective translation
- Visual understanding
- Content generation
- Gemini flash 15
- High frequency tasks
- Text editing
- Multimodal model
- Ai agents
- Gemini pro 15
- Text code response
- Gemini pro vision
- Image video processing
- Chat prompts
- Multiturn chat
- Gemini pro
- Gemma 2 27b it
- Reasoning
- Summarization
- Question answering
- Open source nlp
- Efficient ai
- Language model
- Performance optimization
- Gemma 2 9b it
- Code chatbot
- Palm 2 codechat bison 32k
- Developer support
- Coding qa
- Programming assistant
- Advanced small model
- Sota intelligence
- Multimodal inputs
- Gpt 4o mini
- Openai
- Text and image processing
- Gpt 4o
- Fast ai model
- Llama 2 integration
- Roleplay ai
- Extended context
- Fine tuning
- Gryphe
- Mythomax l2 13b
- Fictional narrative model
- Creative writing tool
- Mergekit language model
- Rifxonline
- Mn mag mell r1
- Roleplay storytelling
- Emotional intelligence
- Customer support
- Safety
- Inflection
- Chatbot safety
- Emotional intelligence chatbot
- Inflection 3 pi
- Roleplay scenarios
- Task optimization
- Precise guidelines
- Json output
- Inflection 3 productivity
- Multilingual
- Closed source comparison
- Meta llama
- Human evaluations
- Pre trained model
- Meta policy
- Llama 31 405b
- Multimodal integration
- Image captioning
- Visual linguistic ai
- Llama 32 11b vision
- Efficient language processing
- Multilingual text analysis
- Dialogue summarization
- Low resource nlp
- Llama 32 1b
- Dialogue generation
- Llama 32 3b
- Complex reasoning
- Text summarization
- Multilingual language model
- Multimodal ai
- Visual reasoning
- Llama 32 90b vision
- 70b language model
- Instruction tuned llm
- Multilingual text generation
- Llama 33 70b
- Multilingual dialogue model
- Llama guard 2 8b
- Safety classification
- Prompt response analysis
- Content moderation
- Llama 3 family
- Logical reasoning
- Code processing
- Advanced mathematics
- Phi 3 medium 128k
- Microsoft azure
- Mathematics tasks
- Phi 3 mini 128k
- Supervised fine tuning
- Dense transformer
- High quality datasets
- Phi 35 mini 128k
- Fast performance
- Model finetuning
- Mistral 7b
- Wizardlm 2 7b
- Model optimization
- Image understanding
- Minimax 01
- Vit mlp llm
- Text generation model
- Lightning attention
- Mistralai
- Codestral mamba
- Code reasoning model
- Transformer alternative
- Infinite sequence inference
- Large context window
- Parameter model
- Context length
- Industry standard ai
- Speed optimization
- Long context window
- Coding languages
- Mistral large
- Reasoning model
- Multilingual model
- Large context length
- 12b parameters
- Function calling
- Mistral nemo
- Mistral small
- Fast translation model
- Sentiment analysis ai
- Cost efficient nlp
- Fine tuned ai
- Large scale tasks
- Cost effective processing
- Mistral tiny
- Batch processing model
- Generative model
- Pretrained experts
- Sparse mixture of experts
- Feed forward networks
- Mixtral 8x7b
- Image to text
- Pixtral 12b
- Mistral ai
- Torrent release
- October 2023 data
- Natural image processing
- Pixtral large 2411
- Chart interpretation
- Chatbot integration
- Customer support automation
- Moonshot v1 8k
- Moonshot
- Education
- Semantic understanding
- Uncensored
- Intelligent dialogue
- Llama 3 lumimaid 70b
- Curated data
- Uncensored chatbot
- Language processing
- Improved dataset
- Finetune model
- Chat optimization
- Llama 31 lumimaid 8b
- Multi turn conversation
- Structured output
- Code generation skills
- Powerful steering capabilities
- Advanced agentic capabilities
- Nousresearch
- Hermes 3 llama 31 405b
- Hermes 3 llama 31 70b
- Roleplaying enhancement
- Science
- Chatgpt 4o latest
- Research evaluation tool
- Rate limited chatbot
- Dynamic language model
- Parallel function calling
- Reproducible outputs
- Gpt 35 turbo
- Improved instruction following
- Json mode
- Experimental model
- Rate limited
- O1 mini
- Phd level accuracy
- Stem optimization
- Advanced scientific computing
- O1 preview
- O1
- Chain of thought reasoning
- Reinforcement learning
- Conditioned reinforcement learning
- Mixed quality data
- Openchat
- Language models library
- Openchat 7b
- Offline ai model
- Cost efficient llm
- Perplexity
- High performance language model
- Llama 31 sonar large 128k chat
- Fast processing llm
- Llama 31 sonar small 128k chat
- Qwen
- Multilingual understanding
- Coding and reasoning
- Swiglu activation
- Qwen 2 7b
- Transformer model
- Video question answering
- Multilingual text recognition
- Mobile robot integration
- Qwen 2 vl 72b
- Qwen 2 vl 7b
- Structured data understanding
- Long context processing
- Qwen 25 72b
- Chatbot role play
- Qwen 25 7b
- Qwq 32b preview
- Safety considerations
- Ai reasoning
- Recursive loops
- Language mixing
- 40% off
- Unique formatting
- L3 euryale 70b
- Creative roleplay
- Prompt adherence
- Spatial awareness
- Creative logic
- Strategic merge
- General knowledge
- Roleplaying model
- L3 lunaris 8b
- Storytelling ai
- Character interaction
- L31 euryale 70b
- L33 euryale 70b
- Character simulation
- Multilingual embeddings
- Semantic search
- Text embedding 3 large
- Similarity matching
- Text similarity matching
- Budget friendly nlp
- Cost effective embeddings
- Text embedding 3 small
- Creative writing model
- Engaging prose
- Rocinante 12b
- Thedrummer
- Undi95
- Mythomax l2 b13
- Recreation trial
- Remm slerp l2 13b
- Updated models
- Merge model
- Task arithmetic
- Toppy m 7b
- Uncensored ai
- Parameter blending
- Visual comprehension
- Object recognition
- X ai
- Style analysis
- Grok 2 vision