ppl.nn

Enhanced AI Model Deployment with Advanced LLM Capabilities

Product Description

PPLNN, short for "PPLNN is a Primitive Library for Neural Network", is a high-performance deep-learning inference engine for efficient AI processing. It can run a wide range of ONNX models and integrates with the OpenMMLab ecosystem. For large language model workloads it provides features such as Flash Attention and Split-k Attention, dynamic batching, tensor parallelism, and INT8 quantization. The project has recently transitioned its LLM model format from PMX to OPMX, and workarounds have been provided for NCCL issues on certain devices, improving stability and performance.
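To illustrate the INT8 quantization idea mentioned above, here is a minimal, self-contained sketch of symmetric per-tensor INT8 quantization in plain Python. This is a conceptual illustration only, not PPLNN's actual implementation or API; the function names are hypothetical.

```python
def quantize_int8(values):
    """Symmetric per-tensor INT8 quantization (illustrative sketch):
    map the largest absolute value onto 127, then round each value
    to the nearest integer step in [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float values from INT8 codes and the scale."""
    return [qi * scale for qi in q]

weights = [0.5, -1.2, 3.4, -0.01]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
# rounding error per element is at most half a quantization step (scale / 2)
```

The appeal for LLM inference is that weights (or KV-cache entries) stored as INT8 take a quarter of the memory of FP32, at the cost of a bounded rounding error per element.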