
tutel

Enhanced Mixture-of-Experts for Efficient Dynamic Training and Inference

Product Description

Tutel MoE provides an efficient implementation of Mixture-of-Experts, including 'No-penalty Parallelism' for adaptable training and inference. It is compatible with PyTorch and supports CUDA and ROCm GPUs as well as CPU execution in multiple numeric formats. Recent updates add new benchmarks, tensor-core options, and improved communication. Tutel allows configuration changes at run time without extra cost and offers a straightforward installation and testing process. It supports distributed modes across multi-node, multi-GPU setups, making it suitable for developers seeking better performance and scalability in machine-learning workloads.
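As a rough illustration of how the PyTorch integration is used, the sketch below builds a single MoE block with Tutel's moe_layer; the keyword arguments (gate_type, model_dim, experts) follow the project's published examples, but exact names, defaults, and values here are assumptions and may differ between Tutel releases.

```python
# Minimal sketch of wrapping Tutel's MoE layer in a PyTorch module.
# Parameter names follow the project's published examples; treat the
# exact signature and values as illustrative, not authoritative.
import torch
from tutel import moe as tutel_moe

class MoEBlock(torch.nn.Module):
    def __init__(self, model_dim=1024, num_local_experts=2, hidden_size=4096):
        super().__init__()
        self.moe = tutel_moe.moe_layer(
            gate_type={'type': 'top', 'k': 2},        # top-2 token routing
            model_dim=model_dim,                      # token embedding size
            experts={
                'type': 'ffn',                        # feed-forward experts
                'count_per_node': num_local_experts,  # experts hosted locally
                'hidden_size_per_expert': hidden_size,
                'activation_fn': lambda x: torch.nn.functional.relu(x),
            },
        )

    def forward(self, x):
        # x: [batch, seq_len, model_dim]; gating, dispatch, and expert
        # computation all happen inside the moe_layer call.
        return self.moe(x)

# Single-process usage; multi-node / multi-GPU runs would instead launch
# one process per GPU (e.g. via torchrun) so experts are sharded across ranks.
block = MoEBlock()
tokens = torch.randn(4, 16, 1024)
output = block(tokens)
```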
Project Details