onnx-tensorrt
onnx-tensorrt is the TensorRT backend for ONNX: it parses ONNX models so they can be built into and executed as TensorRT engines. Development on the main branch targets TensorRT 10.5, with full-dimensions and dynamic shape support. Models can be converted through the C++ or Python parser APIs, or with command-line tools such as trtexec and polygraphy. The repository also ships an FAQ and changelog that help with setup across CUDA and TensorRT environments.
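
Below is a minimal sketch of the Python parser path, assuming the TensorRT Python bindings are installed and that a local ONNX file exists; the file names `model.onnx` and `model.engine` are illustrative placeholders, not part of this project.

```python
import tensorrt as trt

# Create a logger and builder, then an explicit-batch network
# (explicit batch is the default network type in TensorRT 10.x).
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

# Parse the ONNX model into the TensorRT network definition.
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

# Build and serialize a TensorRT engine from the parsed network.
config = builder.create_builder_config()
serialized_engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(serialized_engine)
```

The same conversion can also be done from the command line, for example with trtexec's `--onnx` and `--saveEngine` options, or inspected and debugged with polygraphy.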