# Knowledge Distillation

## Awesome-Efficient-LLM
Discover a curated list of cutting-edge research papers on improving the efficiency of Large Language Models (LLMs) through methods such as network pruning, knowledge distillation, and quantization. This resource provides insights into accelerating inference, optimizing architectures, and enhancing hardware performance, offering valuable information for both academic and industry professionals.

## Knowledge-Distillation-Toolkit
The Knowledge Distillation Toolkit is a solution for compressing machine learning models with knowledge distillation, tailored for use with PyTorch and PyTorch Lightning. The toolkit supports the implementation of teacher and student models, data loaders for both training and validation processes, and an inference pipeline for performance evaluation. Designed to minimize model size while maintaining accuracy, it enables efficient knowledge transfer from a larger, complex model to a smaller student model. The toolkit also offers flexible configuration options such as customizable architectures, optimization methods, and learning rate scheduling to refine the model compression workflow.
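
To give a concrete sense of what such a toolkit automates, here is a minimal PyTorch sketch of one distillation step: a frozen teacher produces soft targets, and the student is trained on a blend of soft-target KL divergence and hard-label cross-entropy. The function names and hyperparameters are illustrative assumptions, not the toolkit's own API.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Classic soft-target distillation loss: KL divergence between
    temperature-softened teacher/student distributions, blended with
    the usual cross-entropy on hard labels. (Illustrative defaults.)"""
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

def train_step(student, teacher, batch, optimizer):
    """One distillation step: the teacher runs without gradients, and the
    student is updated to match both the teacher's outputs and the labels."""
    inputs, labels = batch
    with torch.no_grad():
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```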

## mt-dnn
This PyTorch-based toolkit provides a framework for natural language understanding built on Multi-Task Deep Neural Networks (MT-DNN). It integrates adversarial training for language models and hybrid neural networks, using pretrained encoders such as BERT as the shared backbone. The package supports efficient training on a range of NLP benchmarks, including GLUE, SciTail, and SNLI, and offers robust fine-tuning, domain adaptation, and embedding extraction. It remains actively maintained, with updates tracking changes to pretrained-model hosting. Discover this versatile, open-source resource aimed at improving AI language comprehension.
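
The core MT-DNN idea, a shared pretrained encoder with lightweight task-specific heads trained jointly, can be sketched roughly as follows using Hugging Face `transformers` for the BERT backbone. The class name, task names, and label counts are illustrative assumptions, not mt-dnn's actual modules.

```python
import torch.nn as nn
from transformers import AutoModel

class MultiTaskBert(nn.Module):
    """Shared BERT encoder with one lightweight head per task,
    mirroring the MT-DNN pattern of joint multi-task training."""
    def __init__(self, task_num_labels, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden, n) for task, n in task_num_labels.items()
        })

    def forward(self, task, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] representation
        return self.heads[task](cls)

# Each training step samples a batch from one task and updates the
# shared encoder together with that task's head (task set is illustrative).
model = MultiTaskBert({"mnli": 3, "scitail": 2, "snli": 3})
```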

## Efficient-Computing
Explore methods developed by Huawei Noah's Ark Lab for efficient computing, emphasizing data-efficient model compression and binary networks. The repository includes advancements in pruning (e.g., GAN-pruning), model quantization (e.g., DynamicQuant), and self-supervised learning (e.g., FastMIM). Discover training acceleration techniques and efficient object detection methods like Gold-YOLO. Also, find efficient solutions for low-level vision tasks with models such as IPG. These resources are designed to optimize neural network performance, focusing on minimal training data use.
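
As a rough baseline for the kind of compression these methods target, the sketch below applies stock PyTorch utilities, magnitude pruning and post-training dynamic quantization, to a toy model. It is not the repository's code; GAN-pruning, DynamicQuant, and the other methods use their own training recipes.

```python
import torch
import torch.nn as nn
from torch.nn.utils import prune

# A toy model standing in for a network to be compressed.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Unstructured magnitude pruning: zero out the 50% smallest weights
# in each Linear layer (a simple baseline analogue of learned pruning).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")   # bake the zeroed weights in

# Post-training dynamic quantization: weights stored as int8,
# activations quantized on the fly (baseline analogue of dynamic quantization).
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)
```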

## torchdistill
Explore a modular framework that delivers state-of-the-art knowledge distillation techniques through simple YAML configuration. By describing experiments declaratively, it reduces the Python code each experiment requires and automates the extraction of intermediate representations, making deep learning experiments more reproducible. It also supports teacher-free training and covers a range of tasks, such as image and text classification. Pre-configured examples are provided, and trained models can be loaded conveniently through PyTorch Hub. Ideal for researchers and developers looking for efficient deep learning solutions.
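
Under the hood, the kind of intermediate-representation extraction that torchdistill configures from YAML can be approximated with plain PyTorch forward hooks, as in the sketch below. The layer names, model pairing, and feature-matching loss are illustrative assumptions, not torchdistill's actual configuration schema.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, resnet50

teacher, student = resnet50(weights=None).eval(), resnet18(weights=None)
features = {}

def capture(name):
    # Forward hook that stores a module's output under a given key.
    def hook(module, inputs, output):
        features[name] = output
    return hook

# Capture comparable intermediate representations from both networks
# (layer choice is illustrative; torchdistill wires this up from YAML).
teacher.layer3.register_forward_hook(capture("teacher_feat"))
student.layer3.register_forward_hook(capture("student_feat"))

x = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    teacher(x)
student(x)

# A simple feature-matching (hint) loss on the captured activations;
# a 1x1 projection aligns the student's channel count with the teacher's.
proj = nn.Conv2d(features["student_feat"].shape[1],
                 features["teacher_feat"].shape[1], kernel_size=1)
loss = nn.functional.mse_loss(proj(features["student_feat"]),
                              features["teacher_feat"])
```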

## distill-sd
Examine knowledge-distilled variants of Stable Diffusion that deliver faster inference and smaller model size while largely preserving image quality. The distilled architecture lowers VRAM use and increases throughput. Designed for users who want to adapt the models to specific styles or tasks via fine-tuning or LoRA training, these checkpoints are still under active development and represent a step toward more efficient image generation.
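
Assuming the distilled checkpoints are published as drop-in replacements for a standard Stable Diffusion pipeline, usage with Hugging Face `diffusers` would look roughly like the sketch below. The model identifier is an assumption, so check the repository for the exact checkpoint names.

```python
import torch
from diffusers import StableDiffusionPipeline

# Model id is an assumption: substitute the distilled checkpoint
# actually published by the repository (e.g. a "small" or "tiny" SD variant).
pipe = StableDiffusionPipeline.from_pretrained(
    "segmind/small-sd", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# A distilled U-Net keeps the usual text-to-image interface, so prompts,
# guidance scale, and step counts work as in standard Stable Diffusion.
image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=25,
    guidance_scale=7.5,
).images[0]
image.save("lighthouse.png")
```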