Found 44 repositories (showing 30)
openvinotoolkit
Pre-trained Deep Learning models and demos (high quality and extremely fast)
dbiir
Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
ZhuiyiTechnology
Open Language Pre-trained Model Zoo
thunlp
Open Chinese Language Pre-trained Model Zoo
luxonis
DepthAI Model Zoo is a collection of open-source neural network models and datasets created and maintained by DepthAI developers and community
Python wrapper library for the Intel distribution of OpenVINO and Open Model Zoo (OMZ) models. Users can run deep learning inference with simple function calls using this library.
rziga
A model zoo for Grounding-DINO-based open-world detection models.
dlstreamer
Repository to store INT8 quantized models derived from open model zoo
smart0303
No description available
silkroading
No description available
openmv
Please do not feed the models.
KMKnation
Controls the computer's mouse pointer with eye gaze, using four pre-trained models provided by Open Model Zoo. The project's main aim is to evaluate the OpenVINO toolkit on different hardware, covering the OpenVINO inference API, OpenVINO Workbench, and the VTune Profiler.
davisclick
OpenVINO eye-gaze estimation to control the computer mouse. Four pre-trained models provided by Open Model Zoo are used.
Computer Pointer Controller application that controls mouse pointer movement using a person's eye gaze and head pose. The project's main aim is to deploy multiple models together using the OpenVINO toolkit. It uses four pre-trained models provided by Open Model Zoo, integrating face detection, head-pose estimation, facial landmarks, and gaze estimation deep learning models.
mandiladitya
Steps

1. Install TensorFlow-GPU 1.5 (skip this step if TensorFlow-GPU 1.5 is already installed). Install TensorFlow-GPU by following the instructions in the YouTube video by Mark Jay. The video is made for TensorFlow-GPU v1.4, but the "pip install --upgrade tensorflow-gpu" command will automatically download version 1.5. Download and install CUDA v9.0 and cuDNN v7.0 (rather than CUDA v8.0 and cuDNN v6.0 as instructed in the video), because they are supported by TensorFlow-GPU v1.5. As future versions of TensorFlow are released, you will likely need to keep updating CUDA and cuDNN to the latest supported versions. Be sure to install Anaconda with Python 3.6 as instructed in the video, as the Anaconda virtual environment will be used for the rest of this tutorial. Visit TensorFlow's website for further installation details, including how to install it on other operating systems (such as Linux). The object detection repository itself also has installation instructions.

2. Set up the TensorFlow directory and Anaconda virtual environment. The TensorFlow Object Detection API requires the specific directory structure provided in its GitHub repository. It also requires several additional Python packages, specific additions to the PATH and PYTHONPATH variables, and a few extra setup commands to get everything ready to run or train an object detection model. This portion of the tutorial covers the full setup. It is fairly meticulous, but follow the instructions closely, because improper setup can cause unwieldy errors down the road.

2a. Download the TensorFlow Object Detection API repository from GitHub. Create a folder directly in C: and name it "tensorflow1". This working directory will contain the full TensorFlow object detection framework, as well as your training images, training data, trained classifier, configuration files, and everything else needed for the object detection classifier. Download the full TensorFlow object detection repository located at https://github.com/tensorflow/models by clicking the "Clone or Download" button and downloading the zip file. Open the downloaded zip file and extract the "models-master" folder directly into the C:\tensorflow1 directory you just created. Rename "models-master" to just "models". (Note: this tutorial was done using this GitHub commit of the TensorFlow Object Detection API. If portions of this tutorial do not work, it may be necessary to download and use that exact commit rather than the most up-to-date version.)

2b. Download the Faster-RCNN-Inception-V2-COCO model from TensorFlow's model zoo. TensorFlow provides several object detection models (pre-trained classifiers with specific neural network architectures) in its model zoo. Some models (such as SSD-MobileNet) have an architecture that allows faster detection but with less accuracy, while others (such as Faster-RCNN) give slower detection but with more accuracy. I initially started with the SSD-MobileNet-V1 model, but it did not do a very good job identifying the cards in my images. I re-trained my detector on the Faster-RCNN-Inception-V2 model, and the detection worked considerably better, though at a noticeably slower speed.
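The directory and PYTHONPATH setup described in step 2 above can be sketched as follows. This is a minimal Unix-shell approximation (the tutorial itself targets Windows and C:\tensorflow1), and the "models" checkout is only stubbed with mkdir here, not an actual clone of https://github.com/tensorflow/models:

```shell
# Minimal sketch of the step-2 working directory, assuming a Unix-like shell.
# The real setup extracts the TensorFlow models repo here; this stub only
# recreates the folder layout the Object Detection API expects.
mkdir -p tensorflow1/models/research/slim
cd tensorflow1

# The Object Detection API needs the research/ and slim/ folders on
# PYTHONPATH in addition to the repo root:
export PYTHONPATH="$PWD/models:$PWD/models/research:$PWD/models/research/slim"
echo "$PYTHONPATH"
```

On Windows the same idea applies with `set PYTHONPATH=...` against C:\tensorflow1\models and its subfolders.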
ZhuiyiAI
Open Language Pre-trained Model Zoo
kimtth
🧑🤝🧑 Incomplete project. This project pairs with the "open-vn-person-count-ui" repository. Built on the Intel inference engine OpenVINO, using CSRNet and a people-counting model from the Intel model zoo.
oatmeelsquares
Project materials for DS 6015: DS Capstone. The goal of this project is to evaluate a model from Intel's Open Model Zoo for potential bias against protected characteristic(s).
miosipof
An open-source library for hardware-aware model optimization. It learns soft gates to prune and fine-tune neural networks for specific GPUs, balancing latency and accuracy. Includes tools for latency-driven training, export with kernel-aligned sizes, and a Hugging Face model-zoo integration.
Phance
open_model_zoo
swe-train
No description available
Ashok-Rawat-Code
No description available
liyong4x
No description available
clearlinux-pkgs
No description available
ohjho
99% vibe-coded Streamlit app offering some data visualization of the various OpenRouter models
openkylin
No description available
abelleeye
No description available
tidy-neuralnetwork
No description available
jellyware
No description available