Found 1,812 repositories (showing 30)
kevinjosethomas
✌️ An ASL fingerspell recognition and semantic pose retrieval interface (arXiv, GitHub, YouTube)
snrao310
Online recognition of American Sign Language Finger Spelling from a video (webcam) and interpretation of the gestures.
cortictechnology
ASL Recognition using Hand Landmarks
Juniar-Rakhman
http://www.youtube.com/watch?v=bUrjFGMfwas
simonefinelli
Recognize American Sign Language in a video stream and translate it, word by word.
My undergraduate Final Year Project, awarded Excellent Bachelor's Project. It develops a vision-based sign language recognition system with multiple machine-learning models, and currently recognizes 10 static and 2 dynamic gestures in ASL with a testing accuracy of 99.68%.
rklymentiev
ASL gesture recognition from the webcam using OpenCV & CNN
Build a convolutional neural network to classify images of letters from American Sign Language
chevalierNoir
ASL Fingerspelling recognition in the wild
An Android application that uses gesture recognition to understand the letters of the American Sign Language alphabet
DEV-D-GR8
This repository contains a transformer-based model for real-time American Sign Language (ASL) recognition. The model leverages transformer architecture to interpret ASL gestures and utilizes the Gemini-Pro LLM API for constructing sentences from recognized ASL signs.
aqua1907
ASL language recognition using pre-trained MediaPipe models
chinmoyacharjee
ASL alphabet and digit recognition from human gestures, plus a gesture-controlled calculator, using a CNN built with Keras/TensorFlow.
Estaheri7
A real-time American Sign Language (ASL) recognition system
quangkhai5122
This is a PyTorch implementation for isolated ASL word recognition on Kaggle's GISLR dataset, including an inference demo that turns a stream of recognized words into a simple sentence via Google Generative AI (Gemini).
RhythmusByte
Real-time ASL interpreter using OpenCV and TensorFlow/Keras for hand gesture recognition. Features custom hand tracking, image preprocessing, and gesture classification to translate American Sign Language into text and speech output. Built with accessibility in mind.
raj99-code
Portable sign language (ASL) recognition device that uses real-time, efficient programming to help deaf and mute people by establishing a two-way communication channel with people who have never studied sign language.
abhinuvpitale
Fingerspelling and word prediction using American Sign Language gestures
devAmjad4590
The ASL Hand Gesture Recognition using MediaPipe and CNN project is designed to recognize American Sign Language (ASL) gestures. It involves creating a dataset of hand gestures, preprocessing the images, training a Convolutional Neural Network (CNN), and detecting hand signs in real time.
ezgigungor
Sign language recognition using MS-ASL dataset.
Deaf and mute people use sign language to communicate. Unlike acoustically conveyed sound patterns, sign language uses hand gestures, facial expressions, body language and manual communication to convey thoughts. Because learning sign language takes considerable time, hearing people often struggle to communicate with specially-abled people, creating a communication gap. Moreover, different countries have their own sign languages, which results in non-uniformity: Indian Sign Language, used in India, differs substantially from American Sign Language, used in the US, largely because of differences in cultural, geographical and historical context. Somewhere between 138 and 300 different sign languages are currently in use throughout the world, and sign language structure varies both spatially and temporally.

We have identified these factors as a major barrier to communicating with a significant part of society, and hence we propose to design a system that recognizes different signs and conveys the information to people. The components of any sign language are hand shape, motion and place of articulation; combined with palm orientation, these components uniquely determine the meaning of a manual sign.

For sign language identification, both sensor-based and vision-based methods are used. In vision-based gesture recognition, a camera reads the movements of the human body, typically hand movements, and these gestures are used to interpret sign language; in sensor-based methods, real-time hand and finger movements can be monitored using a leap motion sensor. We aim to develop a scalable project that considers different hand gestures to recognize letters and words, using different deep learning models to predict the sign.

This may be developed as a desktop or mobile application to enable specially-abled people to communicate easily and effectively with others. The project can later be extended to capture the whole vocabulary of ASL (American Sign Language) through manual and non-manual signs.
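To make "different deep learning models to predict the sign" concrete, here is a minimal toy sketch (my own code, not this project's) of training a small two-layer network to classify flattened gesture images into letter classes, using plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: 200 flattened 8x8 "gesture images" in 3 letter classes.
n, d, n_classes = 200, 64, 3
X = rng.normal(size=(n, d))
y = rng.integers(0, n_classes, size=n)
X += np.eye(n_classes)[y] @ rng.normal(size=(n_classes, d))  # class-dependent shift

# Two-layer network: 64 -> 32 -> 3, trained with softmax cross-entropy.
W1 = rng.normal(scale=0.1, size=(d, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, n_classes)); b2 = np.zeros(n_classes)
lr = 0.1
for _ in range(300):
    h = np.maximum(X @ W1 + b1, 0)                    # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                 # softmax probabilities
    grad = (p - np.eye(n_classes)[y]) / n             # cross-entropy gradient
    gW2 = h.T @ grad; gb2 = grad.sum(0)
    gh = grad @ W2.T * (h > 0)                        # backprop through ReLU
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

acc = (p.argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

A real system would replace the toy data with labeled gesture frames and the two-layer network with a CNN, but the training loop has the same shape.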
ishanshrivastava2011
This project contains the code for various feature extraction methods used to extract features from four kinds of sensors (accelerometer, gyroscope, orientation and EMG). The feature extraction methods used were the Discrete Fourier Transform, Discrete Wavelet Transform, Discrete Cosine Transform, Power Spectral Density and Piecewise Aggregation. It also contains code to visualize the extracted features as grouped box plots for a "Gesture" vs. "Not Gesture" comparison, which gives an interesting way to find important features. PCA is also implemented and similarly visualized, to find and understand the meaning of each principal component.
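As a rough illustration of the kinds of features listed above, the sketch below computes a DFT magnitude spectrum, a simple periodogram-style power spectral density, and piecewise aggregation over a synthetic accelerometer window with NumPy (the function and variable names are my own, not from the repository):

```python
import numpy as np

def extract_features(window, n_segments=8):
    """Toy feature extraction for one 1-D sensor window (e.g. an accelerometer axis)."""
    n = len(window)
    # Discrete Fourier Transform: keep magnitudes of the positive frequencies.
    dft_mag = np.abs(np.fft.rfft(window))
    # Power spectral density via the simple periodogram estimate.
    psd = dft_mag ** 2 / n
    # Piecewise aggregation: the mean of each of n_segments equal-length segments.
    paa = window.reshape(n_segments, n // n_segments).mean(axis=1)
    return np.concatenate([dft_mag, psd, paa])

# 64-sample synthetic gesture window: a 5 Hz oscillation plus noise.
t = np.linspace(0, 1, 64, endpoint=False)
window = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(0).normal(size=64)
feats = extract_features(window)
print(feats.shape)  # 33 DFT bins + 33 PSD bins + 8 PAA means = (74,)
```

The resulting fixed-length vector is the kind of representation that can then be compared across "Gesture" vs. "Not Gesture" windows with box plots or fed into PCA.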
s-almeda
ASL Recognition Program using the LeapMotion Controller
KaidAkram
A real-time hand gesture recognition app that translates ASL gestures into spoken words
shivas1432
Real-time ASL gesture recognition web app that converts sign language to text and speech. Built with React, TypeScript, MediaPipe, and TensorFlow.js for accessible communication.
MelihGulum
American Sign Language Alphabet recognition with Deep Learning's CNN architecture
Aminos7
The goal of this model is to enable successful communication between deaf and mute people and everyone else, especially in public places such as hospitals, banks and restaurants.
Muhib-Mehdi
The ASL Recognition System is a real-time American Sign Language (ASL) gesture-recognition application built with Python, TensorFlow Lite, OpenCV, and MediaPipe. It captures hand landmarks from a webcam, processes them through a lightweight neural network, and instantly translates the gestures into alphabet letters (A-Z).
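The general shape of such a landmark pipeline can be sketched as below, assuming the 21 (x, y) hand landmarks that MediaPipe Hands produces (wrist at index 0); the normalization and the downstream classifier are illustrative, not the repository's actual code:

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Make 21 (x, y) hand landmarks translation- and scale-invariant.

    landmarks: array of shape (21, 2), as produced by a hand-landmark
    detector such as MediaPipe Hands (the wrist is landmark 0).
    """
    pts = landmarks - landmarks[0]     # translate so the wrist sits at the origin
    scale = np.abs(pts).max()          # largest coordinate magnitude
    if scale > 0:
        pts = pts / scale              # scale coordinates into [-1, 1]
    return pts.flatten()               # 42-dim feature vector for the classifier

# A lightweight classifier (e.g. a TFLite model) would map this vector to a letter.
rng = np.random.default_rng(0)
fake_landmarks = rng.random((21, 2))   # stand-in for one detected hand
features = normalize_landmarks(fake_landmarks)
print(features.shape)  # (42,)
```

Normalizing relative to the wrist is what lets a small network generalize across hand positions and distances from the camera.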
PierceShahi
Major Project
Sreevarshini-140
Real-time ASL recognition using CNN, MediaPipe and OpenCV