Found 4 repositories (showing 4)
kentaroy47
Benchmarks inference speed of CNNs under various quantization methods in PyTorch + TensorRT on Jetson Nano/Xavier.
somya2703
TensorRT inference optimization pipeline for ResNet50 — benchmarks ONNX Runtime against FP32, FP16, and INT8 TRT engines with a FastAPI serving layer.
Benguerine
Complete PyTorch-to-TensorRT quantization workflow with FP32/FP16/INT8 optimization, performance benchmarking, and model visualization. Achieves a 4x speedup with minimal accuracy loss.
mertcanustun
Comprehensive toolkit for training YOLO models on UAV/drone datasets and optimizing them with TensorRT. Features automated dataset preparation, YOLOv11 training, multi-precision TensorRT conversion (FP32/FP16/INT8), and performance benchmarking with visualization tools.
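Several of these repositories build FP32/FP16/INT8 TensorRT engines. As a rough illustration of what the INT8 precision level involves (a generic sketch of symmetric per-tensor quantization, not code taken from any repository listed above), floats can be mapped onto 8-bit integers through a single scale factor:

```python
def quantize_int8(xs):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    scale = max(abs(v) for v in xs) / 127.0  # one scale for the whole tensor
    qs = [max(-127, min(127, round(v / scale))) for v in xs]
    return qs, scale

def dequantize(qs, scale):
    """Recover an approximation of the original floats."""
    return [q * scale for q in qs]

xs = [-1.0, -0.5, 0.0, 0.25, 1.0]
qs, scale = quantize_int8(xs)
xs_hat = dequantize(qs, scale)
# per-element reconstruction error is at most scale / 2
max_err = max(abs(a - b) for a, b in zip(xs, xs_hat))
```

Real INT8 deployment (e.g. in TensorRT) additionally calibrates scales per layer from representative data, which is why these toolkits bundle benchmarking alongside conversion: the speedup is only worthwhile if the quantization error stays small.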