ChatLM-mini-Chinese

A Compact Chinese Generative Model for Low-Resource Training

Product Description

The project trains a compact 0.2B-parameter Chinese generative language model suited to environments with limited computational resources: training is feasible with as little as 4 GB of GPU memory and 16 GB of RAM. The pipeline spans data cleaning, tokenizer training, SFT fine-tuning, and RLHF optimization, all on open-source datasets, and is built on Hugging Face libraries such as transformers and accelerate. Training can be resumed from checkpoints after an interruption, downstream task fine-tuning is supported, and regular updates keep the project useful for researchers working on resource-efficient language model implementations.
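Because the project builds on Hugging Face transformers, a trained checkpoint can be loaded and queried with the standard Auto classes. The sketch below is illustrative only: the Hub id charent/ChatLM-mini-Chinese and the seq2seq (T5-style) architecture are assumptions inferred from the project name, not confirmed by this description; substitute the actual checkpoint id and model class as needed.

```python
# Minimal inference sketch. Assumes the trained checkpoint is published on
# the Hugging Face Hub as "charent/ChatLM-mini-Chinese" (a T5-style seq2seq
# model with custom modeling code); adjust the id if the release differs.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "charent/ChatLM-mini-Chinese"  # assumed Hub id
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id, trust_remote_code=True
).to(device)
model.eval()

prompt = "你好，请介绍一下你自己。"  # "Hello, please introduce yourself."
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Greedy decoding keeps memory use modest, in line with the project's
# low-resource focus; sampling parameters can be tuned as needed.
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A 0.2B-parameter model in fp16 occupies well under 1 GB of GPU memory at inference time, so this sketch also runs comfortably within the 4 GB budget the project targets for training.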
Project Details