Details for Mixtral-8x7B

Description: A high-quality Sparse Mixture of Experts (MoE) model with open weights, released by Mistral AI.

Additional Information: The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts model. It outperforms Llama 2 70B on many benchmarks. As of its release in December 2023, it was the strongest open-weight model with a permissive license and offered the best overall cost/performance trade-off (see the minimal loading sketch below).

Link: dub.sh/localaimodels-mixtral
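Because the weights are open, the model can be run locally. The snippet below is a minimal sketch, assuming the weights are hosted on Hugging Face under the repo id mistralai/Mixtral-8x7B-v0.1 and that the transformers and accelerate packages are installed; the full model in 16-bit precision needs on the order of 90 GB of memory, so quantized variants are commonly used on consumer hardware.

```python
# Minimal sketch: load Mixtral-8x7B locally with Hugging Face transformers.
# The repo id and dtype/device settings are assumptions, not a fixed recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",           # spread layers across available devices (needs accelerate)
)

prompt = "The Mixture of Experts architecture works by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On machines without enough memory for the full model, the same loading call is typically combined with a quantized checkpoint or an on-the-fly quantization option rather than bfloat16.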