
TensorRT-YOLO

Boost YOLO Model Performance with NVIDIA TensorRT Optimization

Product Description: The TensorRT-YOLO project accelerates inference for YOLOv3 through YOLO11 and PP-YOLOE models using NVIDIA TensorRT optimization. It combines TensorRT plugins, CUDA kernels, and CUDA Graphs to deliver a fast object detection solution with both C++ and Python APIs. Key features include ONNX export, command-line model export, and Docker deployment.
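
As a rough illustration of the Python inference workflow described above, the sketch below assumes a hypothetical detector class wrapping a serialized TensorRT engine; the import path, class name (`DeployDet`), and method names are placeholders and may differ from the project's actual interface.

```python
# Hypothetical sketch of a TensorRT-YOLO style Python inference flow.
# All names below are illustrative assumptions, not the project's confirmed API;
# consult the repository documentation for the real interface.

import cv2  # OpenCV for image I/O

# Assumed: a detector class that loads a TensorRT engine exported from ONNX.
from tensorrt_yolo.infer import DeployDet  # hypothetical import path


def main() -> None:
    # Load an engine previously exported from an ONNX model via the project's CLI.
    detector = DeployDet("yolo11n.engine")  # hypothetical constructor

    image = cv2.imread("bus.jpg")           # read a test image from disk
    results = detector.predict(image)       # hypothetical inference call

    # Each result is assumed to carry class id, confidence, and box coordinates.
    for det in results:
        print(det)


if __name__ == "__main__":
    main()
```

The general shape of the workflow (export the model to ONNX, build a TensorRT engine, then run detection from C++ or Python) follows the feature list above; only the concrete identifiers are assumed.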
Project Details