Awesome-Knowledge-Distillation-of-LLMs
A curated survey on the knowledge distillation of large language models (LLMs), covering methods for transferring capabilities from proprietary models such as GPT-4 to open-source alternatives such as LLaMA and Mistral. The survey examines techniques for model compression and for data augmentation as a route to self-improvement, and organizes the field into algorithms, skill distillation, and applications across various domains. The repository is updated regularly with recent research advancements.
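For readers new to the topic, the sketch below illustrates the classic white-box distillation objective that many of the surveyed algorithms build on: matching the student's next-token distribution to the teacher's via a temperature-softened KL divergence. (Distilling from API-only teachers like GPT-4 is typically black-box instead, fine-tuning on sampled teacher outputs; logit matching requires access to teacher logits.) This is a minimal, self-contained example with toy tensors; the function name and hyperparameters are illustrative, not taken from any specific paper in this list.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Forward-KL distillation loss over next-token distributions.

    Both distributions are softened with a temperature T, and the loss is
    scaled by T^2 so gradient magnitudes stay comparable to a hard-label loss.
    """
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL(teacher || student), averaged over the token positions in the batch
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t * t)

# Toy example: 4 token positions over a 32-token vocabulary.
teacher_logits = torch.randn(4, 32)
student_logits = torch.randn(4, 32, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(f"distillation loss: {loss.item():.4f}")
```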