tiny-tensorrt
tiny-tensorrt is a user-friendly wrapper around NVIDIA TensorRT for deploying ONNX models from C++ and Python with minimal code. The project is no longer actively maintained, but it remains usable for straightforward deployment work. It depends on CUDA, cuDNN, and TensorRT, which are easiest to set up through NVIDIA's official Docker images, and it supports multiple CUDA and TensorRT versions, so it can be integrated into existing projects without much friction. Documentation and installation guidance are available on the project's GitHub wiki.
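For orientation, here is a minimal C++ sketch of the deployment flow the wrapper is built around: build a TensorRT engine from an ONNX file, copy inputs to the device, run inference, and copy outputs back. The header name, the `Trt` class, and the method names (`BuildEngine`, `CopyFromHostToDevice`, `Forward`, `CopyFromDeviceToHost`) as well as the binding indices are assumptions and should be verified against the project's headers and wiki.

```cpp
// Illustrative sketch only: the header, class, and method names below are
// assumptions about tiny-tensorrt's C++ API; check them against the repo.
#include <vector>
#include "Trt.h"  // assumed tiny-tensorrt public header

int main() {
    Trt net;  // assumed engine wrapper class

    // Build (or load a cached) TensorRT engine from an ONNX model.
    net.BuildEngine("model.onnx", "model.plan");  // assumed method and example file names

    std::vector<float> input(1 * 3 * 224 * 224);  // host input buffer (example shape)
    std::vector<float> output(1 * 1000);          // host output buffer (example shape)

    net.CopyFromHostToDevice(input, 0);   // 0: assumed input binding index
    net.Forward();                        // run inference
    net.CopyFromDeviceToHost(output, 1);  // 1: assumed output binding index
    return 0;
}
```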