# Quantization

## text-generation-inference
Text Generation Inference facilitates the efficient deployment of large language models such as Llama and GPT-NeoX. It improves performance through tensor parallelism and token streaming, and supports hardware ranging from Nvidia GPUs to Google TPUs. Key optimizations include Flash Attention, Paged Attention, and quantization via backends such as bitsandbytes, GPTQ, and AWQ. It also offers customization options and distributed tracing for robust production use.
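
As a rough sketch of how a quantized TGI deployment is typically consumed, the `huggingface_hub` client can stream tokens from a running server. The endpoint URL, model id, and the `--quantize bitsandbytes-nf4` launch flag below are illustrative assumptions rather than values from this listing.

```python
# Minimal sketch: stream tokens from a TGI server assumed to be running locally,
# e.g. launched with: text-generation-launcher --model-id <model> --quantize bitsandbytes-nf4
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")  # assumed endpoint

# stream=True yields generated text piece by piece as it is produced
for token in client.text_generation(
    "Explain weight quantization in one sentence.",
    max_new_tokens=64,
    stream=True,
):
    print(token, end="", flush=True)
```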

## Chinese-Mixtral
Discover Mixtral models tailored to Chinese, with long-text processing as a core strength. The collection includes a Chinese base model and an Instruct model for interactive tasks. With a native 32K context length, extendable to 128K, the models suit tasks that need deep context, such as math reasoning and code generation. Open-source training and fine-tuning scripts let users adapt or build custom models, and integrations with transformers and llama.cpp simplify quantization and deployment on local devices.
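
As a hedged illustration of the transformers integration mentioned above, a Mixtral-style checkpoint can be loaded in 4-bit with bitsandbytes; the repository id is an assumption used only for illustration.

```python
# Hedged sketch: load a Mixtral-style checkpoint in 4-bit via transformers + bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "hfl/chinese-mixtral-instruct"  # assumed id; substitute the actual checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("请用一句话介绍量化。", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```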

## mlx-llm
Explore real-time deployment of Large Language Models on Apple Silicon using MLX. Access a broad spectrum of models like LLaMA and Phi3, and leverage model quantization and embedding extraction for enhanced efficiency. Suitable for developers aiming to optimize LLMs on Apple devices or investigate fine-tuning with LoRA and RAG features.

## PINTO_model_zoo
Discover a repository that simplifies converting AI models between TensorFlow, PyTorch, ONNX, and other major frameworks. With support for diverse quantization methods and optimization passes, the project improves model performance on platforms such as EdgeTPU and CoreML. It also welcomes community-contributed sample code and tracks progress in model-conversion techniques for streamlined deployment.
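
As a generic sketch of the first step in such a conversion pipeline (not the repository's own tooling), a PyTorch model can be exported to ONNX, from which converters such as onnx2tf continue toward TensorFlow formats; the model and shapes are placeholders.

```python
# Generic sketch: export a PyTorch model to ONNX as the entry point of a
# PyTorch -> ONNX -> TensorFlow conversion pipeline.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # placeholder input shape

torch.onnx.export(
    model,
    dummy,
    "mobilenet_v2.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```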

## Efficient-LLMs-Survey
The survey systematically reviews efficiency challenges and solutions for LLMs, organizing them into a clear taxonomy spanning model-centric, data-centric, and system-level approaches. Recognizing the computational demands of LLMs, it highlights techniques such as model compression, quantization, parameter pruning, and efficient tuning, offering researchers and practitioners a structured overview for advancing LLM efficiency.

## Awesome-Efficient-LLM
Discover a curated list of cutting-edge research papers on improving the efficiency of Large Language Models (LLMs) through methods such as network pruning, knowledge distillation, and quantization. This resource provides insights into accelerating inference, optimizing architectures, and enhancing hardware performance, offering valuable information for both academic and industry professionals.

## LLM-FineTuning-Large-Language-Models
Discover detailed methodologies and practical techniques for fine-tuning large language models (LLMs), supported by comprehensive notebooks and video guides. The resource covers 4-bit quantization, direct preference optimization, and custom-dataset tuning for models such as LLaMA, Mistral, and Mixtral, demonstrates integration with tools like LangChain and various APIs, and touches on advanced concepts including RoPE embeddings and validation log-perplexity.
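
As an illustrative sketch of the 4-bit (QLoRA-style) setup these notebooks revolve around, a quantized base model can be wrapped with LoRA adapters via peft; the base model id and hyperparameters are assumptions, not values taken from the notebooks.

```python
# Illustrative sketch: 4-bit base model + LoRA adapters (QLoRA-style fine-tuning setup).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # assumed example base model
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # illustrative choice of projection layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```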

## OmniQuant
OmniQuant is a comprehensive quantization technique designed for large language models. It performs well in both weight-only and weight-activation settings, delivering accurate results under configurations such as W4A16 and W3A16 (4-bit or 3-bit weights with 16-bit activations). Users can draw on pre-trained models such as LLaMA and Falcon from the OmniQuant model zoo to generate quantized weights. The release also includes the PrefixQuant and EfficientQAT algorithms, which improve static activation quantization and time-memory efficiency. OmniQuant's weight compression reduces memory requirements and enables efficient inference on GPUs and mobile devices, for example running LLaMA-2-Chat with W3A16g128 quantization. Detailed resources and scripts walk through the quantization process for specific computational settings.

## PaddleSlim
PaddleSlim offers a comprehensive library for compressing deep learning models, utilizing techniques like low-bit quantization, knowledge distillation, pruning, and neural architecture search. These methods help to optimize model size and performance on different hardware such as Nvidia GPUs and ARM chips. Key features include automated compression support for ONNX models and analytical tools for refining strategies. PaddleSlim also provides detailed tutorials and documentation for applying these methods in natural language processing and computer vision fields.

## FBGEMM
FBGEMM is a high-performance library for server-side inference, specializing in low-precision matrix multiplications and convolutions. It is optimized for small batch sizes, uses techniques such as row-wise quantization to limit accuracy loss, and exploits operator fusion to reduce memory-bandwidth overhead. As the backend for PyTorch quantized operators on x86 hardware, it accelerates deep learning inference. Comprehensive documentation covers building, installation, and development.
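
A minimal sketch of how FBGEMM is usually exercised from PyTorch, where it serves as the x86 quantized backend; the toy model is purely illustrative.

```python
# Minimal sketch: dynamic int8 quantization in PyTorch with the FBGEMM backend.
import torch
import torch.nn as nn

torch.backends.quantized.engine = "fbgemm"  # select the x86 server backend explicitly

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).eval()

# Dynamic quantization: weights stored as int8, activations quantized on the fly
qmodel = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(4, 512)
print(qmodel(x).shape)
```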

## lmdeploy
LMDeploy streamlines large language model deployment with efficient inference and quantization, delivering up to 1.8x higher request throughput through features such as persistent batching and tensor parallelism. It supports a wide range of model families and quantization formats, and its compatibility and ease of use make it well suited to developers building multi-model services across different platforms.
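
A hedged sketch of LMDeploy's Python pipeline serving a 4-bit AWQ checkpoint; the model id and engine options are assumptions for illustration.

```python
# Hedged sketch: serve a pre-quantized AWQ model through LMDeploy's pipeline API.
from lmdeploy import pipeline, TurbomindEngineConfig

pipe = pipeline(
    "internlm/internlm2_5-7b-chat-4bit",  # assumed pre-quantized AWQ checkpoint
    backend_config=TurbomindEngineConfig(model_format="awq", tp=1),
)
print(pipe(["Summarize weight-only quantization in one sentence."]))
```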

## qkeras
QKeras enhances Keras by introducing quantized layer replacements, facilitating efficient transition to quantized networks while preserving Keras’s core strengths of modularity and user-friendliness. It aids in designing low-latency models for edge devices and offers tools to estimate energy consumption. Explore how QKeras streamlines model quantization.
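
A minimal sketch of the drop-in replacement pattern: Keras layers swapped for their QKeras counterparts with explicit bit widths; the layer sizes and quantizer settings are arbitrary illustration values.

```python
# Minimal sketch: a small Keras model rebuilt with QKeras quantized layers.
from tensorflow import keras
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    # 4-bit weights and biases instead of a plain Dense layer
    QDense(64,
           kernel_quantizer=quantized_bits(4, 0, 1),
           bias_quantizer=quantized_bits(4, 0, 1)),
    QActivation(quantized_relu(4)),
    QDense(10,
           kernel_quantizer=quantized_bits(4, 0, 1),
           bias_quantizer=quantized_bits(4, 0, 1)),
    keras.layers.Activation("softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```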

## nncf
NNCF (Neural Network Compression Framework) provides a suite of post-training and training-time algorithms for optimizing neural network inference, supporting PyTorch, TensorFlow, ONNX, and OpenVINO. Key features include quantization, sparsity, and pruning, aimed at efficient model optimization with minimal accuracy loss.
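
A hedged sketch of NNCF post-training quantization for a PyTorch model; the model and calibration data are placeholders.

```python
# Hedged sketch: NNCF post-training quantization with a small calibration set.
import nncf
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()

# A tiny calibration set; the second argument maps a dataset item to model inputs
calibration_data = [torch.randn(1, 3, 224, 224) for _ in range(32)]
calibration_dataset = nncf.Dataset(calibration_data, lambda x: x)

quantized_model = nncf.quantize(model, calibration_dataset)
```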

## neural-compressor
Intel Neural Compressor provides model compression techniques, including quantization, pruning, and distillation, for frameworks such as TensorFlow and PyTorch. It supports a range of Intel hardware and, via ONNX Runtime, other platforms as well. The library has been validated on popular LLMs, offers cloud integrations, and includes tools for analyzing and optimizing model performance; recent releases improve performance and add more user-friendly APIs.
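
A hedged sketch of post-training static quantization with Intel Neural Compressor, written against the 2.x-style API (`PostTrainingQuantConfig` and `fit`), which may differ in other releases; the model and dataloader are placeholders.

```python
# Hedged sketch: post-training static quantization of a toy PyTorch model (INC 2.x-style API).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.quantization import fit

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()

# Small random calibration set standing in for real data
calib_loader = DataLoader(
    TensorDataset(torch.randn(64, 128), torch.zeros(64, dtype=torch.long)),
    batch_size=8,
)

q_model = fit(
    model=model,
    conf=PostTrainingQuantConfig(approach="static"),
    calib_dataloader=calib_loader,
)
# q_model wraps the calibrated int8 model ready for export or evaluation
```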

## aphrodite-engine
Aphrodite Engine powers PygmalionAI with efficient model inference and Hugging Face model compatibility. It builds on vLLM's Paged Attention for fast serving and supports continuous batching, efficient K/V cache management, and optimized CUDA kernels. Version 0.6.1 adds FP16 model support and multiple quantization formats, improving throughput and memory efficiency. Deployment is straightforward via Docker, and an OpenAI-compatible API makes it easy to drop into existing clients. Comprehensive documentation covers deployment and optimization.
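
Because the server exposes an OpenAI-compatible API, any OpenAI client can talk to it; in this sketch the port, API key, and model name are assumptions about a local deployment.

```python
# Sketch: query a locally running Aphrodite Engine server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:2242/v1", api_key="sk-empty")  # assumed port/key

response = client.chat.completions.create(
    model="PygmalionAI/pygmalion-2-7b",  # whichever model the server was launched with
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```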

## stable-diffusion.cpp
Discover a minimalist C/C++ system for Stable Diffusion and Flux inference, seamlessly integrating with tools like ggml and supporting a wide range of versions including SD1.x, SD2.x, and SDXL. Inspired by llama.cpp, the project enhances memory efficiency and accelerates CPU and GPU performance via CUDA, Metal, Vulkan, and SYCL. It offers comprehensive support for diverse weights, easy quantization, and intuitive sampling methods, presenting a versatile and optimized solution for developers. With compatibility across Linux, Mac OS, Windows, and Android, this project ensures broad accessibility and integration options.

## whisper.cpp
The whisper.cpp project delivers efficient ASR inference through a plain C/C++ implementation that runs on Apple Silicon, x86, and other architectures. It offers mixed precision and integer quantization, supports macOS, Windows, iOS, and WebAssembly, and can run on both CPU and GPU. Particularly well optimized for Apple Silicon, it enables offline use on mobile devices and in-browser operation, making ASR applications efficient and portable.

## Awesome-Deep-Neural-Network-Compression
Discover an extensive array of papers, summaries, and codes concerning deep neural network compression methods such as quantization, pruning, and distillation. This resource explores network architecture search, adversarial robustness, NLP compression, and efficient model design, providing access to tools like DeepSpeed, ColossalAI, and PocketFlow, along with comprehensive summaries that connect theory with practical applications in model optimization.

## low-bit-optimizers
Explore memory-efficient neural network training with 4-bit optimizers, which reduce optimizer-state bitwidth from 32 bits to 4 bits without sacrificing accuracy on tasks such as natural language processing and image classification. The project supports major optimizers like AdamW and SGD, offering seamless integration and customizable quantization settings.
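
A heavily hedged sketch of the drop-in pattern the project advertises, replacing the stock optimizer with a 4-bit counterpart while keeping the training loop unchanged; the `lpmm` module path and constructor arguments are recollections of the project README and should be treated as assumptions.

```python
# Hedged sketch: swap torch.optim.AdamW for a 4-bit optimizer; training loop stays the same.
import torch
import torch.nn as nn
import lpmm  # low-bit-optimizers package (module name assumed)

model = nn.Linear(512, 512)

# Before: optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
optimizer = lpmm.optim.AdamW(model.parameters(), lr=1e-3)  # 4-bit optimizer states (assumed API)

for _ in range(3):
    loss = model(torch.randn(8, 512)).pow(2).mean()  # dummy objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```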