Found 13 repositories (showing 13)
Shivadharshini-V
This project implements a fingerprint recognition system using ORB for feature extraction and SVM for classification in Python. It preprocesses fingerprint images, trains a machine learning model, and predicts the person ID from new inputs. The system is designed for beginners to learn biometric authentication and image processing.
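The ORB-plus-SVM pipeline described above can be sketched as follows. Since no real fingerprint images are available here, synthetic 32-dimensional vectors stand in for the ORB descriptors (in the real project, something like `cv2.ORB_create().detectAndCompute` would produce them); the classifier step itself is standard scikit-learn:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for aggregated ORB descriptors: one 32-dim
# cluster per person ID (real ORB descriptors are 32 bytes each).
n_per_person, n_people = 20, 3
centers = rng.normal(0, 5, size=(n_people, 32))
X = np.vstack([c + rng.normal(0, 0.5, size=(n_per_person, 32)) for c in centers])
y = np.repeat(np.arange(n_people), n_per_person)

# Train an SVM classifier mapping feature vectors to person IDs.
clf = SVC(kernel="linear").fit(X, y)

# A "new input": a noisy sample near person 1's cluster.
query = centers[1] + rng.normal(0, 0.5, size=32)
predicted_id = int(clf.predict([query])[0])
print(predicted_id)
```

In the real system the preprocessing and descriptor-extraction stages would replace the synthetic data; the train/predict calls stay the same.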
pratishtha-agarwal
It performs facial recognition with high accuracy. This attendance project uses a webcam to detect faces and records attendance live in an Excel sheet. Convolutional neural networks are used to determine the distinctive aspects of faces based on distance measurements. All you need to do is stand in front of the camera: your face is verified in milliseconds, without recording attendance more than once. Facial recognition systems are commonly used for verification and security, but accuracy is still being improved; errors in facial feature detection caused by occlusion, pose, and illumination changes can be compensated for with HOG descriptors. The most reliable way to measure a face is by employing deep learning techniques. The final step is to train a classifier that takes the measurements from a new test image and tells which known person is the closest match. A Python-based application is being developed to recognize faces in all conditions. We study the question of feature sets for robust visual object recognition, adopting linear-SVM-based human detection as a test case. After reviewing existing edge- and gradient-based descriptors, we show experimentally that grids of Histogram of Oriented Gradients (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1,800 annotated human images with a large range of pose variations and backgrounds.
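The fine-orientation-binning step mentioned above can be sketched in plain NumPy. This is a minimal sketch, not the full HOG pipeline: it computes a single cell's 9-bin orientation histogram, with no cell grid or block normalization; the 8x8 cell size and 9 unsigned bins follow the usual HOG defaults:

```python
import numpy as np

def cell_orientation_histogram(cell, n_bins=9):
    """Histogram of gradient orientations for one cell, weighted
    by gradient magnitude -- the core building block of HOG."""
    gy, gx = np.gradient(cell.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientations in [0, 180) degrees.
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((orientation / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    return hist

# A synthetic 8x8 cell with a horizontal intensity ramp: the gradient
# points along x, so all the energy lands in the 0-degree bin.
cell = np.tile(np.arange(8, dtype=float), (8, 1))
hist = cell_orientation_histogram(cell)
print(hist.argmax())
```

A full descriptor would tile the image into such cells, group cells into overlapping blocks, and L2-normalize each block, as the description notes.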
abdullah1772
No description available
This repository implements a License Plate Recognition system in Python using image processing and machine learning techniques. It detects and isolates license plate regions from car images with component region analysis and binarization, before segmenting characters and using an SVM model to predict each character.
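The plate-isolation step by region analysis can be sketched with `scipy.ndimage` on a synthetic binarized image (the image, the noise blob, and the aspect-ratio filter below are illustrative assumptions, not the project's actual logic):

```python
import numpy as np
from scipy import ndimage

# Synthetic binarized image: a small noise blob plus one large
# rectangular region standing in for a license plate.
img = np.zeros((60, 120), dtype=bool)
img[5:8, 5:8] = True          # small noise blob
img[30:45, 20:100] = True     # plate-like rectangle

# Label connected components, then keep the one whose bounding box
# has a plate-like aspect ratio (much wider than tall).
labels, n = ndimage.label(img)
plate_slice = None
for sl in ndimage.find_objects(labels):
    h = sl[0].stop - sl[0].start
    w = sl[1].stop - sl[1].start
    if w / h > 2.0:           # simple aspect-ratio filter
        plate_slice = sl

plate = img[plate_slice]
print(plate.shape)
```

On real photos the binarization threshold and the region filters would need tuning, but the label-then-filter structure is the same.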
vinitsolanki-2004
Developed a robust SVM-based face recognition model using Python, scikit-learn, and the face_recognition library, capable of encoding faces and updating the model with new images. Implemented label encoding, handling unseen labels, and model persistence, demonstrating strong skills in machine learning, computer vision, and software development.
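The label-encoding, unseen-label, and persistence pieces can be sketched with scikit-learn and `pickle`. Synthetic 128-dimensional vectors stand in for the face encodings that `face_recognition.face_encodings` would produce from real images; the names and the "carol" update are hypothetical:

```python
import pickle
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-ins for 128-d face encodings: one cluster per person.
X = np.vstack([rng.normal(i * 3, 0.3, size=(10, 128)) for i in range(2)])
y = ["alice"] * 10 + ["bob"] * 10

le = LabelEncoder().fit(y)
clf = SVC(kernel="linear").fit(X, le.transform(y))

# Model persistence: pickle the classifier together with its encoder.
blob = pickle.dumps({"clf": clf, "encoder": le})
restored = pickle.loads(blob)

# Predict a new encoding near bob's cluster, mapping back to the name.
query = rng.normal(3, 0.3, size=(1, 128))
pred = restored["encoder"].inverse_transform(restored["clf"].predict(query))
print(pred[0])

# Handling an unseen label when updating: extend the encoder's classes
# (keeping them sorted), then retrain with the new person's encodings.
if "carol" not in le.classes_:
    le.classes_ = np.sort(np.append(le.classes_, "carol"))
```

In practice the pickle blob would be written to disk, and retraining would refit the SVM on the extended dataset.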
Code for the SVM, as referenced in the YouTube video by casual_coding.
rakesh-chinta
This is Python code demonstrating "SVM-Image-recognition" using the Matplotlib library in machine learning.
mayanksharma-1
This project implements a License Plate Recognition (LPR) system using classical image processing and machine learning techniques in Python. The system detects license plates in car images, segments individual characters, and recognizes them using a trained Support Vector Machine (SVM) classifier.
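The character-segmentation step can be sketched with the classic column-projection technique on a synthetic binarized plate; whether this project uses projections or contours is not stated, so treat this as one plausible implementation of "segments individual characters":

```python
import numpy as np

# Synthetic binarized plate strip: three "characters" (filled blocks)
# separated by blank columns, standing in for a thresholded plate image.
plate = np.zeros((20, 60), dtype=bool)
for start in (5, 25, 45):
    plate[4:16, start:start + 10] = True

# Column-projection segmentation: a character spans any run of columns
# whose vertical pixel sum is nonzero.
col_sums = plate.sum(axis=0)
in_char = (col_sums > 0).astype(int)
# Pad with zeros and diff to find run start/end column indices.
edges = np.flatnonzero(np.diff(np.concatenate(([0], in_char, [0]))))
segments = [(edges[i], edges[i + 1]) for i in range(0, len(edges), 2)]
chars = [plate[:, a:b] for a, b in segments]
print(len(chars))
```

Each cropped `chars[i]` image would then be resized and flattened before being passed to the trained SVM character classifier.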
tyagih038
An attendance system, implemented in Python, that uses face recognition to mark a student's attendance. It uses a Haar cascade classifier (via OpenCV) to detect faces in images and an SVM (Support Vector Machine) to recognize them.
AuroraRW
Created a Sign Language Recognition system in Python using Jupyter. Two models were trained, a CNN and an SVM, and the results show the CNN model performs better. The demo can recognize images in real time, captured from a camera using the cv2 (OpenCV) library.
Neha-Sharma7
Face Recognition system using PCA (Eigenfaces) for feature extraction and SVM for classification. Processes image datasets, reduces dimensionality, and predicts identities efficiently. Built with Python, OpenCV, and Scikit-learn, with implementation in Google Colab.
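The PCA-plus-SVM structure above maps directly onto a scikit-learn pipeline. Synthetic flattened 16x16 "face images" stand in for a real dataset (which the project would load with OpenCV), and the component count is an illustrative choice:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Synthetic "faces": one 256-dim template per person, plus small noise.
n_people, n_per_person = 4, 15
templates = rng.normal(0, 1, size=(n_people, 256))
X = np.vstack([t + rng.normal(0, 0.2, size=(n_per_person, 256)) for t in templates])
y = np.repeat(np.arange(n_people), n_per_person)

# PCA projects images onto the leading "eigenfaces" (dimensionality
# reduction); the SVM then classifies identities in that subspace.
model = make_pipeline(PCA(n_components=10), SVC(kernel="linear")).fit(X, y)

query = templates[2] + rng.normal(0, 0.2, size=256)
predicted = int(model.predict([query])[0])
print(predicted)
```

The pipeline keeps the PCA projection and the classifier fitted together, so a single `predict` call handles both the dimensionality reduction and the identity prediction.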
rakeshcharybangaroj
In this project we detect depression from users' posts; a user can upload a post as a text file, an image file, or an audio file, and the application helps people suffering from depression by sending them motivational messages. Nowadays people interact with each other through online posts more than face to face, so by analyzing users' posts this application can detect depression and respond with encouragement. The administrator of the application sends motivational messages, links to movies, book suggestions to boost mental health, and songs for refreshment to everyone detected as depressed. To detect depression we use the SVM (Support Vector Machine) algorithm, which analyzes a user's post and classifies it as negative or positive: if a post contains depressive language, the SVM labels it negative; otherwise it labels it positive. For audio files, the project uses the Python SpeechRecognition API to transcribe speech to text, which the SVM then analyzes; for images, the Python Tesseract OCR (Optical Character Recognition) API reads any text in the uploaded image before the SVM classifies it; plain text files are analyzed directly.
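The SVM text-classification core can be sketched with a TF-IDF vectorizer and a linear SVM. The six example posts below are invented for illustration; the project's real training data and feature pipeline are not described:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny illustrative corpus (hypothetical data, not from the project):
# posts labeled "negative" (depressive language) or "positive".
posts = [
    "I feel so hopeless and sad all the time",
    "nothing matters anymore I am worthless",
    "life feels empty and I cannot sleep",
    "what a wonderful day with my friends",
    "I am so happy and excited about the trip",
    "great news today feeling grateful and joyful",
]
labels = ["negative", "negative", "negative", "positive", "positive", "positive"]

# TF-IDF turns each post into a weighted word vector; the linear SVM
# then separates depressive from non-depressive vocabulary.
model = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(posts, labels)

result = model.predict(["I feel sad and hopeless"])[0]
print(result)
```

Text extracted by speech recognition or OCR would be fed to the same `model.predict` call, since by that point every input type has been reduced to plain text.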
sahilraikar
Dependencies: cmake, dlib, opencv-python, face-recognition, numpy. Install Visual Studio with the "Desktop development with C++" workload, because dlib will not install without the C++ build tools. The tool takes two images, compares them, and returns a boolean: if the distance between them is less than 0.6, the two images match. Technology: we use the Histogram of Oriented Gradients (HOG) method. To find faces in an image, we start by converting it to black and white, since color data is not needed to find faces. Then we look at every single pixel in the image, one at a time. For each pixel, we look at the pixels directly surrounding it; the goal is to figure out how dark the current pixel is compared to its neighbors, and then to draw an arrow showing the direction in which the image is getting darker. The neural network learns to reliably generate 128 measurements for each person, so any ten different pictures of the same person should give roughly the same measurements. Machine learning people call the 128 measurements of each face an embedding. Classification can be done with any basic machine learning algorithm; no fancy deep learning tricks are needed. We use a simple linear SVM classifier, but many classification algorithms could work. All we need to do is train a classifier that takes the measurements from a new test image and tells which known person is the closest match. Running this classifier takes milliseconds, and its result is the name of the person.
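The 0.6-threshold comparison described above reduces to a Euclidean distance check between two 128-dimensional embeddings. The sketch below uses synthetic vectors in place of the network's real outputs (in the actual project, `face_recognition.face_encodings` would produce them from images):

```python
import numpy as np

def faces_match(enc_a, enc_b, threshold=0.6):
    """Compare two face embeddings: Euclidean distance below the
    threshold (0.6, as used by the face_recognition library's default)
    means both images show the same person."""
    return bool(np.linalg.norm(np.asarray(enc_a) - np.asarray(enc_b)) < threshold)

rng = np.random.default_rng(3)
# Synthetic 128-d embeddings standing in for real network outputs.
person = rng.normal(0, 1, size=128)
same_person = person + rng.normal(0, 0.02, size=128)   # small perturbation
other_person = rng.normal(0, 1, size=128)              # unrelated embedding

print(faces_match(person, same_person))
print(faces_match(person, other_person))
```

Lowering the threshold makes matching stricter (fewer false accepts, more false rejects); 0.6 is the conventional default for these 128-d encodings.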
All 13 repositories loaded