Most commonly used open-source model.
The 7B model released by Mistral AI, updated to version 0.2.
🌋 A novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding.
A high-quality Mixture of Experts (MoE) model with open weights by Mistral AI.
Starling is a large language model trained by reinforcement learning from AI feedback, focused on improving chatbot helpfulness.
A fine-tuned model based on Mistral with broad domain and language coverage.
A large language model that can use text prompts to generate and discuss code.
An uncensored, fine-tuned model based on the Mixtral mixture-of-experts model that excels at coding tasks. Created by Eric Hartford.
Uncensored Llama 2 model by George Sung and Jarrad Hope.
A general-purpose model ranging from 3 billion to 70 billion parameters, suitable for entry-level hardware.
General-use chat model based on Llama and Llama 2 with 2K to 16K context sizes.
Wizard Vicuna Uncensored is a 7B, 13B, and 30B parameter model based on Llama 2, uncensored by Eric Hartford.
Mistral OpenOrca is a 7 billion parameter model, fine-tuned on top of the Mistral 7B model using the OpenOrca dataset.
Zephyr Beta is a fine-tuned 7B version of Mistral that was trained on a mix of publicly available, synthetic datasets.
DeepSeek Coder is trained from scratch on 87% code and 13% natural language in English and Chinese. Each of the models is pre-trained on 2 trillion tokens.
Llama-based code generation model focused on Python.
The powerful family of models by Nous Research that excels at scientific discussion and coding tasks.
General-use models based on Llama and Llama 2 from Nous Research.
Model focused on math and logic problems.
Llama 2 based model fine-tuned to improve Chinese dialogue ability.
Orca 2 is built by Microsoft Research and is a fine-tuned version of Meta's Llama 2 models, designed to excel particularly at reasoning.
A large language model built by the Technology Innovation Institute (TII) for use in summarization, text generation, and chatbots.
Great code generation model based on Llama 2.
Llama 2 based model fine-tuned on an Orca-style dataset. Originally called Free Willy.
A high-performing, bilingual language model.
A 2.7B language model by Microsoft Research.
OpenHermes 2.5 is a 7B model fine-tuned by Teknium on Mistral with fully open datasets.
A family of open-source models trained on a wide variety of data, surpassing ChatGPT on various benchmarks. Updated to version 3.5-1210.
Uncensored Llama 2 based model with support for a 16K context window.
Fine-tuned Llama 2 model to answer medical questions, based on an open-source medical dataset.
Uncensored version of the WizardLM model.
StarCoder is a code generation model trained on 80+ programming languages.
Multimodal model based on Mistral.
An extension of Mistral to support context windows of 64K or 128K.
Merge of the Open Orca OpenChat model and the Garage-bAInd Platypus 2 model, designed for chat and code generation.
A compact yet powerful 10.7B large language model designed for single-turn conversation.
A companion assistant trained in philosophy, psychology, and personal relationships. Based on Mistral.
SQLCoder is a code completion model fine-tuned on StarCoder for SQL generation tasks.
Open-source medical large language model adapted from Llama 2 to the medical domain.
A lightweight chat model that delivers accurate, responsive output without requiring high-end hardware.
2.7B uncensored Dolphin model by Eric Hartford, based on the Phi language model by Microsoft Research.
An extension of Llama 2 that supports a context of up to 128K tokens.
The TinyLlama project is an open endeavor to train a compact 1.1B Llama model on 3 trillion tokens.
An advanced language model crafted with 2 trillion bilingual tokens.
🎩 Magicoder is a family of 7B parameter models trained on 75K synthetic instruction data using OSS-Instruct, a novel approach to enlightening LLMs with open-source code snippets.
A high-performing code instruct model created by merging two existing code models.
MistralLite is a fine-tuned model based on Mistral with enhanced capabilities for processing long contexts.
General-use 70 billion parameter model based on Llama 2.
A language model created by combining two fine-tuned Llama 2 70B models into one.
Nexus Raven is a 13B instruction-tuned model for function calling tasks.
A robust conversational model designed for both chat and instruct use cases.
Conversational model based on Llama 2 that performs competitively on various benchmarks.
A top-performing mixture of experts model, fine-tuned with high-quality data.
A 7B chat model fine-tuned with high-quality data and based on Zephyr.
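Models like these are typically served through a local model runner. As a minimal sketch, assuming an Ollama-style server listening on its default port 11434 and a model tag such as "mistral" already pulled (both are assumptions, since this listing does not name a specific runner), a prompt could be sent from Python like this:

    import requests

    # Assumes an Ollama-style local server on its default port (11434)
    # and that the "mistral" model tag has already been downloaded;
    # both are assumptions, not details given by this listing.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "mistral",
            "prompt": "Explain mixture-of-experts models briefly.",
            "stream": False,  # return one JSON object instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])  # the model's generated text

Swapping the "model" field for any other tag in this list (a code model for generation tasks, an uncensored variant, a long-context extension, and so on) is the only change needed to try a different entry.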