# Efficiency

## LLM-Pruner
Explore LLM-Pruner, an efficient tool for the structural pruning of large language models with minimal data. It supports models such as Llama, Vicuna, and BLOOM, focuses on preserving multi-task ability and performance, and has recently added support for GQA-based models and the Llama 3 series.
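
LLM-Pruner ships its own pruning pipeline; as a generic, hedged illustration of what structural pruning means in practice (not LLM-Pruner's API), stock PyTorch can zero out whole output channels of a layer by norm:

```python
# Generic structured-pruning sketch in plain PyTorch (not LLM-Pruner's own API):
# zero out 30% of a linear layer's output rows by L2 norm, then make it permanent.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(4096, 4096)

# Remove whole output units (rows of the weight matrix) with the smallest L2 norms.
prune.ln_structured(layer, name="weight", amount=0.3, n=2, dim=0)

# Fold the mask into the weight tensor so no pruning buffers remain on the module.
prune.remove(layer, "weight")

zeroed = (layer.weight.abs().sum(dim=1) == 0).sum().item()
print(f"zeroed output rows: {zeroed} / {layer.out_features}")
```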
## hardware-aware-transformers
Explore HAT (Hardware-Aware Transformers), which boosts natural language processing efficiency by searching for architectures tailored to the target hardware. The project offers PyTorch code and 50 pre-trained models that help locate optimized architectures for specific devices, reducing search cost by a factor of more than 10,000. HAT provides up to 3x faster inference and a 3.7x smaller model size with no loss in performance. With latency feedback from hardware such as the Raspberry Pi and Intel Xeon, HAT offers a practical method for optimizing machine translation across a range of devices.
## YOLOv5-Lite
YOLOv5-Lite delivers a streamlined and optimized version of YOLOv5, focusing on reduced computational requirements and faster inference. Ideal for edge devices, its design choices, validated through ablation experiments, lower memory usage and parameter count. Key improvements include channel shuffling and an updated YOLOv5 head, maintaining at least 10 FPS on devices such as the Raspberry Pi. Removing the Focus layer and refining model quantization make deployment more accessible. Comparative analyses show strong inference speed and model efficiency across multiple platforms, making it an effective choice for resource-constrained environments.
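
The channel shuffling mentioned above is the standard operation popularized by ShuffleNet; a minimal PyTorch sketch of it (not code from the YOLOv5-Lite repository) looks like this:

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups so grouped convolutions can mix information."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by the number of groups"
    # (N, C, H, W) -> (N, groups, C//groups, H, W) -> swap the two channel axes -> flatten back
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

x = torch.randn(1, 8, 4, 4)
print(channel_shuffle(x, groups=2).shape)  # torch.Size([1, 8, 4, 4])
```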
## wtpsplit
The wtpsplit project facilitates efficient text segmentation into sentences or semantic units in 85 languages. Leveraging advanced SaT models, it enhances performance while lowering computational needs compared to earlier WtP models. Offering ONNX support for faster processing and LoRA modules for domain-specific or stylistic adjustments, it is well-suited for diverse uses including paragraph segmentation. Seamlessly compatible with platforms like HuggingFace, it is a valuable tool for academic and development settings seeking adaptable text segmentation solutions.
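
A minimal usage sketch, assuming the `SaT` class and the `sat-3l-sm` checkpoint name shown in the project's README:

```python
# Minimal usage sketch, assuming the SaT class and "sat-3l-sm" checkpoint name
# from the wtpsplit README; pick a larger model for higher segmentation accuracy.
from wtpsplit import SaT

sat = SaT("sat-3l-sm")
sentences = sat.split("This is a test This is another test.")
print(sentences)  # e.g. ["This is a test ", "This is another test."]
```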
## LLM_MultiAgents_Survey_Papers
This survey offers an in-depth look at LLM-based multi-agent systems, organizing research into frameworks, orchestration, problem-solving, world simulation, and datasets. It examines their architectures and efficiency, highlighting their potential in fields such as software development and embodied agents. Regular updates ensure the inclusion of the latest studies. The project explains how LLM agents enhance problem-solving and simulate complex systems, serving as a valuable resource for understanding the dynamics of large language models in multi-agent contexts.
## hiera
Hiera is a streamlined hierarchical vision transformer that delivers strong performance on image and video tasks, combining fast inference with MAE pretraining. The model is available via Torch Hub and the Hugging Face Hub, enabling straightforward integration into other projects.
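
A short loading sketch via Torch Hub, assuming the `hiera_base_224` entrypoint and `mae_in1k_ft_in1k` checkpoint name advertised in the repository:

```python
# Loading Hiera via Torch Hub; the entrypoint and checkpoint names below are taken
# from the repository's README and may change between releases.
import torch

model = torch.hub.load(
    "facebookresearch/hiera",
    model="hiera_base_224",
    pretrained=True,
    checkpoint="mae_in1k_ft_in1k",
)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # ImageNet-1k logits for this checkpoint
print(logits.shape)
```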
## Efficient-LLMs-Survey
The survey systematically reviews efficiency challenges and solutions for LLMs, offering a clear taxonomy across model-centric, data-centric, and system-level directions. Recognizing the computational demands of LLMs, it underscores the importance of techniques such as model compression, quantization, parameter pruning, and efficient tuning. The overview aims to help researchers and practitioners advance LLM efficiency.
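
As one concrete example of the model-centric techniques the survey covers, post-training dynamic quantization is available directly in stock PyTorch; the toy sketch below is illustrative and unrelated to the survey's own materials:

```python
# One model-centric efficiency technique (post-training dynamic quantization),
# shown with PyTorch's built-in API on a toy model.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)
quantized = torch.ao.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface, int8 weights inside the Linear layers
```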
## LTSF-Linear
The LTSF-Linear project introduces a set of linear models that enhance time series forecasting, surpassing the performance of traditional Transformers. These include the Linear, NLinear, and DLinear models, designed to handle trend, seasonality, and distribution shift. Offering high efficiency with low memory and parameter requirements, the models are also interpretable via weight visualization. The project includes well-documented Python implementations and benchmarks supporting both univariate and multivariate forecasting. The models train quickly, run inference efficiently, and show notable improvements over existing methods in capturing temporal dynamics.
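
The idea behind DLinear is easy to sketch: split the input window into a moving-average trend and a remainder, map each to the forecast horizon with its own linear layer, and sum the two. The code below is a simplified illustration, not the repository's implementation:

```python
import torch
import torch.nn as nn

class DLinearSketch(nn.Module):
    """Simplified DLinear-style forecaster: trend/remainder decomposition + two linear maps."""

    def __init__(self, seq_len: int, pred_len: int, kernel_size: int = 25):
        super().__init__()
        # Moving average over the time axis approximates the trend component.
        self.avg = nn.AvgPool1d(kernel_size, stride=1, padding=kernel_size // 2,
                                count_include_pad=False)
        self.linear_trend = nn.Linear(seq_len, pred_len)
        self.linear_remainder = nn.Linear(seq_len, pred_len)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, seq_len)
        trend = self.avg(x)
        remainder = x - trend
        return self.linear_trend(trend) + self.linear_remainder(remainder)

model = DLinearSketch(seq_len=96, pred_len=24)
print(model(torch.randn(8, 7, 96)).shape)  # torch.Size([8, 7, 24])
```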
## cognitive-load
Discover how effective cognitive load management can simplify software development and reduce code complexity. Learn about intrinsic and extraneous cognitive load and practical strategies such as meaningful variable naming, favoring composition over inheritance, and balancing deep and shallow modules. Review examples of complex conditionals and the pitfalls of framework dependency, and see why monolithic designs often impose less cognitive burden than microservices. Improve developer efficiency by applying these insights to keep code clear and minimize needless abstraction.
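
One of the tactics above, naming the pieces of a dense conditional, takes only a few lines; the example below is illustrative and not taken from the repository:

```python
# Illustrative sketch (not from the repository): replacing a dense conditional
# with named intermediate conditions to lower extraneous cognitive load.

def can_checkout_hard_to_read(user, cart):
    return user.is_active and not user.is_banned and cart.items and all(i.in_stock for i in cart.items) and (user.has_payment_method or cart.total == 0)

def can_checkout(user, cart):
    is_valid_user = user.is_active and not user.is_banned
    has_available_items = bool(cart.items) and all(i.in_stock for i in cart.items)
    can_pay = user.has_payment_method or cart.total == 0
    return is_valid_user and has_available_items and can_pay
```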
## offsite-tuning
Offsite-Tuning presents an innovative transfer learning framework designed to enhance privacy and computational efficiency. It enables the adaptation of large-scale foundation models to specific tasks without requiring full model access, effectively addressing traditional cost and privacy concerns. A lightweight adapter and a compressed emulator are provided for local fine-tuning, maintaining accuracy while significantly improving speed and reducing memory usage. This approach is validated on various large language and vision models, providing a practical solution for environments prioritizing privacy and resource constraints.
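
Conceptually, a small adapter at the ends of the network stays trainable while the bulk of the model is replaced by a frozen (and, in the paper, compressed) emulator; a rough PyTorch sketch of that parameter split, not the project's actual code, is:

```python
# Rough conceptual sketch (not Offsite-Tuning's actual code): train only the first and
# last few transformer blocks (the "adapter") while the middle blocks stand in as a
# frozen emulator. The real method also compresses the emulator (e.g. layer drop /
# distillation), which is omitted here.
import torch.nn as nn

def split_adapter_and_emulator(blocks: nn.ModuleList, n_adapter: int = 2):
    adapter = list(blocks[:n_adapter]) + list(blocks[-n_adapter:])
    emulator = list(blocks[n_adapter:-n_adapter])
    for block in emulator:
        for p in block.parameters():
            p.requires_grad = False  # only the adapter is fine-tuned locally
    return adapter, emulator

blocks = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True) for _ in range(12)
)
adapter, emulator = split_adapter_and_emulator(blocks)
print(len(adapter), len(emulator))  # 4 8
```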
## TextPruner
Learn efficient techniques to reduce the size and increase the speed of language models without retraining. TextPruner supports models such as BERT and RoBERTa while maintaining performance on NLP tasks. Use it as a Python package or via the CLI, with examples provided. The project is regularly updated and backed by published research covering a range of language applications.
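
TextPruner exposes its own package and CLI interfaces; as a generic illustration of the same idea (shrinking a transformer without retraining), Hugging Face models support structured head pruning out of the box:

```python
# Generic illustration of retraining-free transformer pruning using Hugging Face's
# built-in head pruning; TextPruner's own Python/CLI interfaces differ.
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
print(sum(p.numel() for p in model.parameters()))  # parameter count before pruning

# Drop attention heads 0 and 1 in layer 0, and head 2 in layer 1.
model.prune_heads({0: [0, 1], 1: [2]})
print(sum(p.numel() for p in model.parameters()))  # parameter count after pruning
```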
## ESFT
ESFT improves the performance and efficiency of large language models (LLMs) built on the Mixture-of-Experts (MoE) architecture. By fine-tuning only task-relevant components, it reduces resource and storage needs and enhances model adaptability to different datasets, making it well suited for deployments that require efficient, specialized tuning. Recently accepted at EMNLP 2024, ESFT provides open-source training code for integration and testing on personal models and data, enabling effective model customization with reduced computational demand.
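
The gist of training only the experts relevant to a task can be sketched generically by freezing every other parameter; the toy code below is illustrative, not ESFT's released training code, and the `experts.{i}` naming pattern is only an example:

```python
# Illustrative sketch (not ESFT's released code): freeze an MoE model except for a
# chosen subset of experts. The "experts.{i}" naming pattern here is a toy example;
# real MoE checkpoints name their expert modules differently.
import torch.nn as nn

class ToyMoELayer(nn.Module):
    """Toy stand-in for an MoE layer with named expert submodules."""
    def __init__(self, n_experts: int = 4, dim: int = 8):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))

def freeze_all_but_experts(model: nn.Module, relevant_expert_ids) -> None:
    # Keep gradients only for parameters belonging to the selected experts.
    for name, param in model.named_parameters():
        param.requires_grad = any(f"experts.{i}." in name for i in relevant_expert_ids)

model = ToyMoELayer()
freeze_all_but_experts(model, {1, 3})
print([n for n, p in model.named_parameters() if p.requires_grad])
# ['experts.1.weight', 'experts.1.bias', 'experts.3.weight', 'experts.3.bias']
```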
## VanillaNet
VanillaNet takes a minimalist approach to neural networks, improving efficiency without sacrificing performance. Its architecture avoids deep layer stacks, shortcut connections, and attention mechanisms, which results in faster inference. Achieving 81% Top-1 accuracy with 3.59 ms latency using 11 layers, VanillaNet outperforms models such as ResNet-50 and Swin-S. The approach balances speed, accuracy, and simplicity in tasks such as detection and segmentation.
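
The design principle amounts to a plain stack of convolutions, activations, and pooling with no residual branches and no attention; a toy sketch of that shape (not the released VanillaNet code) is:

```python
# Toy sketch of the "plain stack" principle (not the released VanillaNet code):
# convolution + activation + pooling only, with no shortcuts and no attention.
import torch
import torch.nn as nn

def plain_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=4, stride=4),  # patchify-style stem
    plain_block(64, 128),
    plain_block(128, 256),
    plain_block(256, 512),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(512, 1000),
)
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```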