YOLOv8-TensorRT

Leverage TensorRT for Enhanced Inference Speeds in YOLOv8 Deployments

Product Description

YOLOv8-TensorRT accelerates YOLOv8 inference by building TensorRT engines with CUDA and C++, and supports exporting ONNX models with NMS baked into the graph. Deployment is flexible: engines can be built and run from Python or with trtexec, on a range of platforms including NVIDIA Jetson. A comprehensive setup guide covers common AI deployment scenarios, making the project an efficient alternative to plain PyTorch inference.
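To make the "NMS integration" concrete: non-maximum suppression keeps the highest-scoring box and discards overlapping lower-scoring ones. Below is a minimal pure-Python sketch of greedy NMS for illustration only; it is not the project's actual implementation, which fuses NMS into the exported ONNX/TensorRT graph.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thres=0.45):
    # Greedy NMS: repeatedly keep the best-scoring box and drop
    # remaining boxes that overlap it above the IoU threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thres]
    return keep

# Two heavily overlapping boxes plus one distant box: the lower-scoring
# overlap is suppressed, the distant box survives.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]
```

Embedding this step in the engine itself avoids a separate post-processing pass on the host, which is part of how the project keeps end-to-end latency low.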
Project Details