Found 362 repositories (showing 30)
dusty-nv
Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
dusty-nv
Deep learning inference nodes for ROS / ROS2 with support for NVIDIA Jetson and TensorRT
ceccocats
Deep neural network library and toolkit for high-performance inference on NVIDIA Jetson platforms
dusty-nv
ASR/NLP/TTS deep learning inference library for NVIDIA Jetson using PyTorch and TensorRT
NVIDIA-ISAAC-ROS
NVIDIA-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU
Seeed-Studio
An application suite including an open-source inference server and web UI to deploy any YOLOv8 model to NVIDIA Jetson devices with one line of code and visualize the captured streams.
kentaroy47
Benchmark inference speed of CNNs with various quantization methods in PyTorch + TensorRT on Jetson Nano/Xavier
otamajakusi
Dockerfile for yolov5 inference on NVIDIA Jetson
BouweCeunen
Object detection with SSD MobileNet v2 COCO model optimized with TensorRT on NVIDIA Jetson Nano built upon Jetson Inference of dusty-nv (https://github.com/dusty-nv/jetson-inference).
cap-lab
A deep learning inference acceleration framework targeting Jetson embedded platforms, built on TensorRT
Nyan-SouthKorea
Uses the OpenVoice v2 module for real-time TTS (text-to-speech) in on-device robotics, aiming to run model inference on single-board computers such as Raspberry Pi and Jetson boards.
RichardoMrMu
A Python training and inference implementation of YOLOv5 helmet detection on Jetson Xavier NX and Jetson Nano
Jen-Hung-Ho
Jetbot tools is a set of ROS2 nodes that utilize the Jetson inference DNN vision library for NVIDIA Jetson
teavuihuang
The Jetson Emulator emulates the NVIDIA Jetson AI computer's Inference and Utilities API for image classification, object detection and image segmentation (i.e. imageNet, detectNet and segNet). The intended users are makers, learners, developers and students who are curious about AI computers and AI edge computing but are not ready or able to invest in an actual device such as the NVIDIA Jetson Nano Developer Kit (https://developer.nvidia.com/embedded/jetson-nano-developer-kit). For example, this allows every student in a computer class to have their own personal AI computer to explore and experiment with. The Jetson Emulator presents a pre-configured, ready-to-run kit with 2 virtual HDMI displays and 4 virtual live cameras, enabling familiarisation with the Jetson API and experimentation with AI computer-vision inference. It is a great way to quickly and easily get hands-on with Jetson and experience the power of AI.
HouYanSong
This project shows how to deploy YOLOv8-SAHI with a high-performance Int8 engine on embedded devices such as Jetson. Image slicing and batch inference on a Jetson Orin Nano (8 GB) take only 0.04 seconds, and 1080p video inference with SAHI and ByteTrack reaches nearly 15 FPS.
csvance
ROS TensorRT Inference Nodes for DIGITS on the Jetson
imsanjoykb
CUDA Programming Practices
collincebecky
Porting Jetson inference to x86_64, with IP camera and Qt support
roboflow
Object detection inference with Roboflow Train models on NVIDIA Jetson devices.
dgcnz
Training, optimization and deployment of Object Detection model with dinov2 backbone for efficient inference on NVIDIA Jetson
rockkingjy
Batch-inference version of jetson-inference that runs recognition on several images at once on TX1/TX2 and PC to save time
WhoseAI
Uses 600 images of masked and unmasked people; trained and run for inference on Jetson Nano with YOLOv3-Tiny
surajiitd
This repo contains model compression (using TensorRT) and documentation for running various deep learning models on NVIDIA Jetson Orin and Nano (aarch64 architecture)
SidaWang12
Build and install Paddle Inference GPU 3. from source on NVIDIA Jetson (JetPack 6.x, CUDA 12), with TensorRT support and fixes for known issues.
dusty-nv
Jetson AI Lab - LLM Inference
dlbuilder
How to run YOLOv3-Tiny inference on Jetson Nano with TensorRT and the Jetson Multimedia API
thanhlnbka
Guide to deploying YOLOv10 on NVIDIA Triton Inference Server for Jetson devices with JetPack 5.1.3. Covers exporting YOLOv10 from PyTorch to ONNX, converting to TensorRT, and setting up Triton. Includes client setup for real-time inference. Start by cloning the repo and following the provided steps.
tonylt
This project is a deep-learning medical X-ray image inference demo based on jetson-inference on the NVIDIA Jetson TX2.
robertying
Modified and customized version of "Jetson Nano: Deep Learning Inference Benchmarks Instructions"
TensorRT inference (Python and C++) for Chinese single-line and double-line license plate detection and recognition. Optimized for Jetson Nano.