optimum-intel
Optimum Intel connects Hugging Face libraries with Intel's optimization tooling to improve model performance on Intel hardware. It supports Intel Extension for PyTorch (IPEX) for faster PyTorch execution, Intel Neural Compressor for model compression, and OpenVINO for optimized inference. Quantization and pruning can be applied to optimize workflows, each backend can be installed on demand through pip extras such as pip install optimum[openvino], and the documentation provides usage examples for each integration.
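As an illustration of the OpenVINO path, the sketch below loads a Transformers checkpoint, converts it to the OpenVINO format on the fly, and runs it through a standard pipeline. It assumes a recent optimum-intel release where OVModelForSequenceClassification and its export=True argument are available; the distilbert-base-uncased-finetuned-sst-2-english checkpoint is only an example model.

```python
# Minimal sketch of the OpenVINO export-and-inference workflow, assuming
# optimum-intel is installed with OpenVINO support (e.g. pip install optimum[openvino]).
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint

# export=True converts the original PyTorch checkpoint to OpenVINO IR on the fly,
# so the model runs through the OpenVINO runtime instead of plain PyTorch.
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The OpenVINO model is a drop-in replacement inside a standard Transformers pipeline.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Optimum Intel makes Intel-optimized inference easy to try."))
```

Quantization and pruning follow a similar pattern through the library's quantizer classes (for example the OpenVINO or Neural Compressor quantizers), producing a compressed model that can be saved and reloaded with the same from_pretrained interface.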