Details for LLaMa2-Chinese

Description: A Llama 2 based model fine-tuned to improve Chinese dialogue ability.

Additional Information: This model is fine-tuned from Meta's open-source Llama 2 Chat model. According to Meta, Llama 2 was trained on 2 trillion tokens, with the context length increased to 4,096 tokens, and the chat variant was fine-tuned on 1 million human-labeled examples. Because Llama 2's Chinese alignment is relatively weak out of the box, the developer fine-tuned it on a Chinese instruction set to improve its Chinese dialogue ability. The Chinese fine-tuned models are available in 7B and 13B parameter sizes.
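Since this model is fine-tuned from Llama 2 Chat, it presumably expects Meta's `[INST]`/`<<SYS>>` chat template when prompted directly. A minimal sketch of building such a prompt (the function name and system prompt are illustrative, and this fine-tune may ship its own template, so check the model card before relying on this):

```python
def build_llama2_chat_prompt(user_message: str,
                             system_prompt: str = "You are a helpful assistant.") -> str:
    """Wrap a single user turn in Llama 2 Chat's [INST] prompt format.

    This mirrors Meta's documented template; the Chinese fine-tune is
    assumed (not confirmed) to use the same one.
    """
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

# Example: a Chinese user turn, as this model targets Chinese dialogue.
prompt = build_llama2_chat_prompt("你好，请介绍一下你自己。")
print(prompt)
```

The resulting string can then be tokenized and passed to the model through any Llama 2 compatible runtime.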

Link: dub.sh/localaimodels-llama2-chinese