Found 112 repositories (showing 30)
DevendraPratapYadav
A modular pipeline to extract several facial features from videos such as face landmarks, eye gaze direction, head pose and Action Units
faithoflifedev
Integrates Google Vision features, including image labeling, face, logo, and landmark detection, optical character recognition (OCR), and detection of explicit content, into applications.
shreyamalogi
Real-time face recognition system using HOG encodings and Dlib landmarks. Features a high-speed Flask/OpenCV pipeline for live video processing and automated SQL database logging
renatocastro33
This repository contains the Jupyter and Mathematica notebooks for transfer-learning models on the VGG16 and ResNet-50 architectures with ImageNet weights and FaceNet, as well as the code for the anti-spoofing technique proposed in the paper "Face Liveness Detection Based on Perceptual Image Quality Assessment Features with Multi-scale Analysis". Our project uses a transfer-learning CNN on the VGG16 architecture because it gives the best results, with an accuracy of 93.016% on the CASIA dataset, 97.321% on the MDP dataset, and 91.142% on the NUAA dataset. We also use "real-time" landmark detection; however, the transfer-learning model and the landmark detection do not work together yet (our future work is to combine the two to reduce errors and improve accuracy). This repository also contains the full paper, "Anti Spoofing Face Detection Technique based on Transfer Learning Convolutional Neural Networks and Real-Time Facial Landmark Detection", submitted to the Latinx in AI - ICML Workshop call.
bugkingK
Google Cloud Vision API allows developers to easily integrate vision detection features within applications, including image labeling, face and landmark detection, optical character recognition (OCR), and tagging of explicit content.
Modern facial motion capture systems employ a two-pronged approach for capturing and rendering facial motion. Visual data (2D) is used for tracking the facial features and predicting facial expression, whereas Depth (3D) data is used to build a series of expressions on a 3D face model. An issue with modern research approaches is the use of a single data stream that provides little indication of the 3D facial structure. We compare and analyse the performance of Convolutional Neural Networks (CNN) using visual, Depth and merged data to identify facial features in real-time using a Depth sensor. First, we review the facial landmarking algorithms and their datasets for Depth data. We address the limitation of the current datasets by introducing the Kinect One Expression Dataset (KOED). Then, we propose the use of CNNs for single data streams and merged data streams for facial landmark detection. We contribute to existing work by performing a full evaluation of which streams are the most effective for the field of facial landmarking. Furthermore, we improve upon the existing work by extending neural networks to predict 3D landmarks in real-time, with additional observations on the impact of using 2D landmarks as auxiliary information. We evaluate the performance using Mean Squared Error (MSE) and Mean Absolute Error (MAE). We observe that the single data stream predicts accurate facial landmarks on Depth data when auxiliary information is used to train the network.
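The MSE and MAE metrics named above are the standard landmark-regression losses; a minimal sketch (the example arrays below are illustrative, not data from the paper):

```python
import numpy as np

def mse(pred, true):
    """Mean squared error over all landmark coordinates."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return np.mean((pred - true) ** 2)

def mae(pred, true):
    """Mean absolute error over all landmark coordinates."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return np.mean(np.abs(pred - true))

# Illustrative 3-coordinate example: only the last value is off by 2.
pred = np.array([[1.0, 2.0, 3.0]])
true = np.array([[1.0, 2.0, 5.0]])
print(mse(pred, true), mae(pred, true))  # mse ≈ 1.33, mae ≈ 0.67
```

MSE penalizes large single-landmark misses more heavily than MAE, which is why papers often report both.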
kanugoyal
Fun face filters with Python and OpenCV. It uses the Haar Cascade classifier, Dlib facial landmarks, and MediaPipe to detect face, eye, and nose positions, and uses hands as a virtual mouse. It uses this information to overlay different accessories on faces.
darshilparmar
A facial landmark detection system is a technology capable of detecting a person in a digital image or in a video frame from a video source. There are multiple methods by which facial detection systems work, but in general they compare selected facial features from a given image with faces in a database.
samardhiman007
This repo contains facial landmark detection using OpenCV, NumPy, and dlib. In this project, I implemented a facial landmark (key point) detection system using a Convolutional Neural Network and image-processing techniques. Facial landmark detection is a regression-type task where the output is a set of values representing positions in the image. I used a pre-trained model for landmark detection. For any task that processes facial features in real-time images, the first step is detecting the faces in the image; face detection is done with dlib's implementation. Once faces are detected, they are fed to the trained model to predict the landmarks.
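dlib's pre-trained 68-point predictor follows the iBUG 300-W landmark convention; a minimal sketch of splitting such landmarks into named facial regions (the dummy coordinates below are illustrative):

```python
import numpy as np

# Standard 68-point (iBUG 300-W) index ranges used by dlib's
# pre-trained shape predictor.
LANDMARK_GROUPS = {
    "jaw": slice(0, 17),
    "right_eyebrow": slice(17, 22),
    "left_eyebrow": slice(22, 27),
    "nose": slice(27, 36),
    "right_eye": slice(36, 42),
    "left_eye": slice(42, 48),
    "mouth": slice(48, 68),
}

def split_landmarks(points):
    """Split a (68, 2) array of (x, y) landmark positions into named regions."""
    points = np.asarray(points)
    assert points.shape == (68, 2), "expected 68 (x, y) landmarks"
    return {name: points[idx] for name, idx in LANDMARK_GROUPS.items()}

# Example with dummy coordinates: 68 points laid out on a line.
dummy = np.stack([np.arange(68), np.zeros(68)], axis=1)
regions = split_landmarks(dummy)
print(regions["right_eye"].shape)  # (6, 2)
```

In a real pipeline, the (68, 2) array would come from converting the `dlib.full_object_detection` returned by the shape predictor.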
swiftmg0d
FaceSwap is a Python-based application for swapping faces in images using deep learning techniques. It features face detection, landmarking, and seamless face blending with OpenCV and Dlib. The app supports both sequential and multithreaded processing for optimized performance and includes a Tkinter-based UI. Ideal for image editing and fun projects.
zenUnicorn
Face landmark recognition using PyTorch is used in computer vision and image processing to locate specific facial features, such as the eyes, nose, mouth, and jawline. This technique can be used for a variety of applications, including facial recognition, emotion detection, and head pose estimation.
sonusuman202
The steps of the method are as follows. In the first phase, the method takes the input image and checks for a face region. If a face is detected, it applies image-processing techniques to extract features and passes them to the next step for training the neural network. Facial feature extraction is the process of locating specific regions, points, landmarks, or curves in a 2D or 3D image. This is done with the OpenCV library, which provides a Haar cascade classifier and a pre-trained facial landmark predictor; Haar features help identify features such as the lips, eyes, eyebrows, and nose. CNN architectures are used for facial expression recognition; the input images are 48x48 pixels. The architectures are composed of convolutional layers, pooling layers, and fully connected layers. After each convolutional layer and fully connected layer (except the output layer), an activation function is applied. The output layer consists of 7 neurons corresponding to 7 emotion labels: angry, disgust, fear, happy, sad, surprise, and neutral. We used two different datasets for different stages. The model is first fitted to a training set made from various collected examples. The fitted model is then used to predict the responses of the second dataset, i.e., the validation dataset, which offers an assessment of how well the model fits the training data. For regularization, early stopping based on validation performance may be used. The dataset comprises 35,887 face crops, with 28,821 training and 7,066 validation images, at a resolution of 48x48 pixels in grayscale. Human accuracy on this dataset is about 70%.
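The shape bookkeeping behind a convolutional stack on 48x48 inputs like the one described can be sketched as follows (the layer sizes below are hypothetical, not the repository's exact architecture):

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Output spatial size of a convolution (square input, square kernel)."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel, stride=None):
    """Output spatial size of a max-pooling layer (stride defaults to kernel)."""
    stride = stride or kernel
    return (size - kernel) // stride + 1

# A hypothetical stack on a 48x48 grayscale input:
s = 48
s = conv_out(s, kernel=3, padding=1)   # 48  (3x3 conv, 'same' padding)
s = pool_out(s, kernel=2)              # 24  (2x2 max pool)
s = conv_out(s, kernel=3, padding=1)   # 24
s = pool_out(s, kernel=2)              # 12
print(s)  # 12 -> a flattened 12*12*channels vector feeds the FC layers
```

The final fully connected layer would then map onto the 7 emotion neurons described above.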
mehrnooshnamdar
Computer vision project for extracting numerical facial features from images using MediaPipe Face Mesh.
heryvandoro
Implementation of several Google Vision features like OCR, Landmark Detection, Face Detection, etc
zachlagden
A lightweight Flask API for face detection and facial landmark extraction with interactive web UI. Process images, extract facial features, and visualize results through a simple REST interface.
archanabmachhoya
In the field of computer vision, the problems of traditional attendance systems are addressed using face recognition, with features such as employee in-time, actual time, and late hours with dates, implemented using OpenCV and face-recognition libraries. Histogram of Oriented Gradients (HOG) and face landmark estimation are used for face detection, and a deep convolutional neural network is used for face recognition.
aquib-sh
A Python-based computer vision tool that analyzes facial features from photographs to determine face shape and provide personalized eyewear recommendations. Using OpenCV and dlib, this tool performs facial landmark detection to calculate key facial measurements and ratios, helping users understand their face shape characteristics.
Varad2804
This project uses OpenCV and MediaPipe to detect facial landmarks and estimate head pose in real-time. The extracted data is processed by a neural network model to determine if a person is attentive. Features include real-time face mesh detection, pose estimation, and attention analysis.
KunalDarwesh
Safety of human beings is the major concern in vehicle automation. Statistics show that 20% of all traffic accidents are due to a diminished vigilance level of the driver, so using technology to detect somnolence and alert the driver is of prime importance. A method for drowsiness detection based on multidimensional facial features such as eyelid movement and yawning is proposed. The geometrical features of the mouth and the eyelid movement are processed in parallel to detect drowsiness. Haar classifiers, a shape predictor, and face landmarks are used to detect the eye and mouth regions. Only the position of the lower lip is checked for drowsiness, since during a yawn only the lower lip moves (due to the downward movement of the lower jaw) while the position of the upper lip is fixed. Processing is done on only one eye to analyze the attributes of eyelid movement in drowsiness, increasing speed and reducing false detections. Experimental results show that the algorithm achieves 80% drowsiness-detection performance under varying lighting conditions.
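Eyelid-movement analysis of this kind is commonly implemented with the eye aspect ratio (EAR); a minimal sketch, assuming six eye landmarks in the standard p1..p6 order (the sample coordinates and the ~0.2 threshold are illustrative):

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) for six eye landmarks.

    A low EAR sustained over consecutive frames suggests a closed eye.
    """
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

# Open eye (tall outline): EAR well above a typical ~0.2 threshold.
open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
# Nearly closed eye (flat outline): EAR close to zero.
closed_eye = [(0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1)]

print(round(eye_aspect_ratio(open_eye), 2))    # 1.0
print(round(eye_aspect_ratio(closed_eye), 2))  # 0.05
```

A yawn check can use the same idea on mouth landmarks, watching the lower-lip distance as the description above suggests.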
krishnaperumalla
This project swaps faces by detecting facial landmarks, aligning key features, and blending the swapped face seamlessly onto the target image.
yoavTzipori
This code captures a video stream, detects faces in the stream, extracts facial landmarks using face mesh, blurs the detected faces, and saves the facial features of a person in a JSON file.
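A minimal sketch of the blur-and-save steps, assuming the face boxes and landmark coordinates have already been obtained elsewhere (a crude mean-color fill stands in for a real Gaussian blur, and `save_landmarks` is a hypothetical helper):

```python
import json
import numpy as np

def anonymize_face(image, box):
    """Replace the face bounding box with its mean color (crude pixelation)."""
    x, y, w, h = box
    roi = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = roi.mean(axis=(0, 1)).astype(image.dtype)
    return image

def save_landmarks(path, landmarks):
    """Persist face-mesh landmarks as a JSON list of [x, y] pairs."""
    with open(path, "w") as f:
        json.dump({"landmarks": [list(map(float, p)) for p in landmarks]}, f)

# Illustrative frame: black image with a half-bright "face" region.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[10:15, 10:20] = 255
anonymize_face(frame, (10, 10, 10, 10))  # flatten the box to its mean color
print(frame[15, 15])                     # uniform mean value inside the box
```

In the real pipeline the box would come from a face detector and the landmarks from MediaPipe Face Mesh; OpenCV's `cv2.GaussianBlur` on the ROI is the usual blur step.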
SuryaDataSci
Developed a project for face and hand landmark detection using Convolutional Neural Networks (CNNs) to accurately identify and track key facial and hand features.
kevinlukaixing
A hands-free gestural interface and AR face-tracking engine built with Handsfree.js and HTML5 Canvas. Features real-time landmark detection and vector-based gesture recognition.
This project includes simple codes for extracting and displaying the landmark features of the face and hands at high speed and real-time with the MediaPipe package in Python.
shojaim
This project is a Face Manipulation Tool that allows users to resize and shift various facial features (like the chin) using mouse interactions. The tool utilizes facial landmark detection to identify key points on the face and then applies transformations based on user inputs.
LohiyaH
DrowsinessGuard: A real-time drowsiness detection system that enhances driver safety using computer vision and facial landmark analysis. Features precise eye tracking, face direction monitoring, and yawn detection to alert drivers of fatigue, helping prevent accidents caused by drowsy driving.
akkut47-lab
Key Features & Solution Highlights > Fatigue detection using face landmarks (TensorFlow.js) > Voice assistant with multilingual support (Hindi, Tamil, French, etc.) > Crash detection + SOS WhatsApp + auto emergency call (112) > Women's safety voice command - location sharing, emergency response > Multilingual voice assistant > Hazard reporting
Prince-morya
Real-Time Facial Emotion Recognition. This project demonstrates real-time facial emotion detection using Python, OpenCV, and MediaPipe's Face Mesh. It captures live video from your webcam, detects facial landmarks, and infers the user's emotion based on geometric features, displaying both an animated avatar and an emoji label in real time.
DanielDdungu
A real-time face recognition and classification system is an important security application of image processing, owing to its use in many institutional security settings such as airports, offices, universities, ATMs, banks, and other locations with a security system [1]. Real-time refers to the actual time during which a process takes place or an event occurs. Face recognition is the automated searching of a facial image in a computer database, typically resulting in a group of facial images ranked by computer-evaluated similarity [2]. This recognition is done using biometric technology and forensics. A face recognition system should be able to automatically detect a face in an image. This involves extracting its features and then recognizing it, regardless of lighting, expression, illumination, aging, transformations (translation, rotation, and scaling), and pose. Feature extraction includes extracting landmarks on the face and analyzing them to obtain the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw.
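Relative-geometry features of the kind described can be sketched as follows (the landmark names and coordinates below are illustrative; distances are normalized by inter-ocular distance so the features are scale-invariant):

```python
import numpy as np

def geometry_features(landmarks):
    """Simple relative-geometry features from named landmark points.

    `landmarks` maps names to (x, y) positions; distances are normalized
    by the inter-ocular distance to remove dependence on image scale.
    """
    p = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    iod = np.linalg.norm(p["left_eye"] - p["right_eye"])  # inter-ocular distance
    eye_mid = (p["left_eye"] + p["right_eye"]) / 2.0
    return {
        "eye_to_nose": np.linalg.norm(eye_mid - p["nose"]) / iod,
        "nose_to_jaw": np.linalg.norm(p["nose"] - p["jaw"]) / iod,
    }

# Illustrative landmark positions in pixel coordinates.
pts = {"left_eye": (30, 40), "right_eye": (70, 40),
       "nose": (50, 60), "jaw": (50, 90)}
feats = geometry_features(pts)
print(feats["eye_to_nose"])  # 0.5
```

Because the features are ratios, the same face at a different distance from the camera produces the same feature vector.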
MahmoudElsayedMahmoud
- In this project we draw the 3 pose axes (pitch, yaw, roll) by predicting the 3 angles of each pose, training 3 models to predict each angle. - We use the [AFLW2000](http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/3DDFA/Database/AFLW2000-3D.zip) dataset, which contains 2,000 images and 2,000 MATLAB files containing the 3 labels (angles). - We use the MediaPipe library in both the training and testing phases: - In training: we first detect the face in each image, then use the same library to generate the landmark points of the face. After this phase, the training data (features) contains 1,853 samples with 936 columns (468 for X and 468 for Y); for the labels, we extract the 3 angles from the .mat files. - In testing: we use the MediaPipe library to generate the landmarks as in the training phase, use the trained models to predict the 3 labels, and use them to draw the axes.
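The 936-column feature layout described (468 X values followed by 468 Y values) can be sketched as follows (the ordering is an assumption based on the description; the random array stands in for real MediaPipe output):

```python
import numpy as np

def landmarks_to_features(landmarks):
    """Flatten 468 MediaPipe face-mesh (x, y) landmarks into a 936-vector.

    Layout assumed from the description: 468 x values, then 468 y values.
    """
    pts = np.asarray(landmarks, dtype=float)
    assert pts.shape == (468, 2), "expected 468 (x, y) face-mesh landmarks"
    return np.concatenate([pts[:, 0], pts[:, 1]])

# Stand-in for one frame's face-mesh output.
dummy = np.random.default_rng(0).random((468, 2))
features = landmarks_to_features(dummy)
print(features.shape)  # (936,)
```

Stacking one such vector per image yields the (1853, 936) training matrix described above, with the pitch/yaw/roll labels taken from the .mat files.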