
Chinese-LLaMA-Alpaca-2

Advanced Chinese Language Models with Efficient Training and Human Preference Alignment

Product Description: Building on Llama-2 with an expanded Chinese vocabulary, this project releases the Chinese LLaMA-2 base models and the Chinese Alpaca-2 instruction-following models. The models are incrementally pretrained on large-scale Chinese corpora, which improves Chinese semantic understanding and instruction comprehension, and they use FlashAttention-2 for more efficient training. Context lengths of up to 64K tokens are supported. RLHF-tuned variants are further aligned with human preferences, so their responses better reflect human values. Open-source pretraining and fine-tuning scripts are provided, and the models can be deployed on local devices with tools from the LLaMA ecosystem, making them easier to access and interact with.
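As a rough illustration of local use within the LLaMA ecosystem, the sketch below loads an Alpaca-2 checkpoint with the Hugging Face transformers library and runs a single Chinese instruction. The model id hfl/chinese-alpaca-2-7b and the exact prompt template are assumptions for illustration, not details taken from this page.

```python
# Minimal sketch (assumptions): the model id and the Alpaca-2 style prompt
# template below are illustrative; consult the project's own scripts for the
# authoritative loading and prompting code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hfl/chinese-alpaca-2-7b"  # assumed Hugging Face Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
    device_map="auto",          # spread layers across available devices
)

# Llama-2 chat style instruction prompt (system prompt wording is an assumption).
prompt = (
    "[INST] <<SYS>>\nYou are a helpful assistant. 你是一个乐于助人的助手。\n<</SYS>>\n\n"
    "请简要介绍一下大语言模型。 [/INST]"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```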
Project Details