llm-action
Explore an open-source project offering detailed guidance on LLM training and fine-tuning with NVIDIA GPUs and Ascend NPUs. The resources cover parameter-efficient fine-tuning methods such as LoRA and QLoRA and introduce distributed training techniques. Practical examples use frameworks such as HuggingFace PEFT, DeepSpeed, and Megatron-LM to adapt large language models. The project also walks through the trade-offs of distributed AI frameworks and strategies for deploying LLMs effectively.
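
To give a flavor of the parameter-efficient approach the project covers, here is a minimal sketch of wrapping a causal language model with a LoRA adapter via HuggingFace PEFT. The base model name and hyperparameters below are illustrative assumptions, not values prescribed by the project:

```python
# Minimal LoRA sketch with HuggingFace PEFT.
# Assumes `transformers` and `peft` are installed; the base model
# "facebook/opt-350m" is an illustrative choice, not from the project.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,               # rank of the low-rank update matrices
    lora_alpha=16,     # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The base model's weights stay frozen; only the small injected LoRA matrices are trained, which is what makes methods like LoRA and QLoRA tractable on a single GPU or NPU.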