Found 25 repositories (showing 25)
SomyanshAvasthi
This repository implements a sign language detection system built from scratch using MediaPipe, with manually collected and annotated data. Leveraging MediaPipe's hand landmark detection, the system processes video frames to classify and translate sign language gestures in real time.
Real-Time ASL Static Gesture Recognition using MediaPipe & Lightweight ML Models (SVM/MLP) A high-performance ASL translator that runs fully on CPU using MediaPipe hand landmarks and optimized ML models. Includes dataset collection, training pipeline, real-time inference, visualization, and performance metrics.
rishabhshah13
This project builds a real-time sign language recognition system using deep learning (1DCNNs & Transformers) and MediaPipe hand landmarks. It allows users to fingerspell letters/numbers or express signs for real-time translation. Pre-trained models and scripts for training/inference are included.
salahAbdeldaim
A simple AI-based project for recognizing Arabic sign-language letters using Python, OpenCV, MediaPipe, and scikit-learn. The system extracts 3D hand landmarks and classifies them into letters, offering a lightweight, modular, and extendable foundation for building real-time sign-language translation tools.
AbinReji07
This project is a real-time Sign Language Recognition system using MediaPipe and a trained ML model. It captures hand landmarks via webcam, extracts features, and predicts the sign using a classifier. It enables gesture-based communication and can be extended for translation and accessibility tools.
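The capture → extract-features → predict pipeline described above typically flattens MediaPipe's 21 hand landmarks into a fixed-length feature vector before classification. A minimal sketch of that feature-extraction step (the function name is illustrative, not from any of these repos):

```python
def landmarks_to_feature_vector(landmarks):
    """Flatten 21 MediaPipe hand landmarks into a 42-dim (x, y) vector.

    `landmarks` is a sequence of 21 (x, y) pairs in normalized image
    coordinates, as produced by MediaPipe's hand landmark model.
    """
    if len(landmarks) != 21:
        raise ValueError("expected 21 hand landmarks")
    features = []
    for x, y in landmarks:
        features.extend((x, y))
    return features
```

The resulting 42-dimensional vector is what most of these projects feed to a scikit-learn classifier's `predict()`.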
kaushalyamr
A Python-based project for real-time hand pose estimation and gesture recognition using MediaPipe and OpenCV. Detects key hand landmarks to identify gestures like thumbs up or victory signs, with applications in sign language translation and interactive controls. Ideal for computer vision enthusiasts.
Bharani01072007
SilentVoice AI is a real-time sign language recognition system built with Python, TensorFlow, and MediaPipe. The project uses hand landmark detection to recognize gestures and translate them into letters or words, helping bridge communication between signers and non-signers.
SibaniPJ
A machine learning-based sign language alphabet recognition system that uses Mediapipe landmarks and a custom-trained neural network for real-time gesture translation.
kirito1087
A Python-based, real-time American Sign Language recognition system using Mediapipe and machine learning. Extracts hand landmarks from images or webcam, trains a classifier, and predicts plain alphabet letters for accessible sign-to-text translation.
faresfadly1
A real-time sign language translation app using computer vision. It captures webcam video, detects hand landmarks with MediaPipe, and recognizes 9+ gestures (thumbs up, peace sign, OK symbol, etc.). Displays translations, FPS, and gesture history. Press ESC to exit.
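A "gesture history" like the one above is also the usual way these projects stabilize noisy per-frame predictions: keep recent labels in a sliding window and report the majority vote. A hedged sketch (the class name and window size are my own, not from this repo):

```python
from collections import Counter, deque

class GestureSmoother:
    """Stabilize per-frame gesture predictions with a sliding majority vote."""

    def __init__(self, window=15):
        # Oldest predictions fall off automatically once the window is full.
        self.history = deque(maxlen=window)

    def update(self, label):
        """Record one raw prediction and return the smoothed gesture."""
        self.history.append(label)
        return Counter(self.history).most_common(1)[0][0]
```

A single misclassified frame then cannot flip the displayed gesture, at the cost of roughly half a window of latency when the user actually changes signs.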
ChiragRaisingh
A real-time sign language recognizer using MediaPipe and OpenCV that detects hand landmarks and classifies gestures with a custom-trained model. It translates sign language into readable text to support communication for the hearing and speech-impaired.
susmnty
This project translates sign language gestures into text using a machine learning-based hand tracking system. By leveraging Mediapipe for hand landmark detection and Random Forest Classifier for gesture recognition, the system achieves high accuracy in real-time sign translation.
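The MediaPipe-landmarks-plus-Random-Forest combination described above is the most common recipe in this list. A self-contained sketch of the training step, using synthetic stand-in features since none of these datasets are shown here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for flattened 21-landmark (x, y) features: 42 dims.
# Two fake gesture classes separated by an offset; a real dataset would
# come from MediaPipe landmark extraction over labelled video frames.
X_a = rng.normal(0.3, 0.05, size=(100, 42))
X_b = rng.normal(0.7, 0.05, size=(100, 42))
X = np.vstack([X_a, X_b])
y = np.array(["A"] * 100 + ["B"] * 100)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(rng.normal(0.7, 0.05, size=(1, 42)))[0])  # prints "B"
```

Random Forests are a popular choice here because they train in seconds on landmark vectors and run fast enough on CPU for per-frame inference.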
SaiSamarth59
A Python-based project for real-time hand pose estimation and gesture recognition using MediaPipe and OpenCV. Detects key hand landmarks to identify gestures like thumbs up or victory signs, with applications in sign language translation and interactive controls. Ideal for computer vision enthusiasts!
sharmili0707
Developed an AI-powered real-time American Sign Language (ASL) translator that recognizes hand gestures using MediaPipe's hand landmark detection and classifies them using a deep learning model (CNN). The system captures hand movements via webcam, processes the landmarks, and translates the signs into English text and speech.
This project presents a real-time sign language recognition system that translates hand gestures into readable text using computer vision and deep learning techniques. The system utilizes a webcam to capture live video input and detects hand landmarks using Mediapipe.
(Currently Active) AI-powered system that translates sign language into text and speech in real time | Captures video, extracts hand and upper-body landmarks using MediaPipe | Processes with CNN and LSTM/Transformer models | NLP forms sentences | TTS generates speech
I built a real-time sign-language system using MediaPipe to detect hand landmarks, normalized them, padded features, and trained a Random Forest model with 95% accuracy. It recognizes 36 gestures at 30 FPS and includes multilingual translation with text-to-speech for seamless communication.
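The normalization and padding mentioned above usually mean making landmarks translation- and scale-invariant (so hand position and distance from the camera don't matter) and zero-padding so one-hand and two-hand frames have the same width. A hedged sketch under those assumptions (the 84-dim two-hand layout is illustrative):

```python
import numpy as np

def normalize_landmarks(points):
    """Make a hand's landmarks translation- and scale-invariant.

    `points` is a (21, 2) array of (x, y) landmarks; landmark 0 is the
    wrist in MediaPipe's hand model.
    """
    pts = np.asarray(points, dtype=float)
    pts = pts - pts[0]              # wrist-relative translation
    scale = np.abs(pts).max()
    if scale > 0:
        pts /= scale                # scale into [-1, 1]
    return pts.ravel()              # 42-dim feature vector

def pad_two_hands(features, width=84):
    """Zero-pad a one-hand vector so single- and two-hand frames align."""
    out = np.zeros(width)
    out[: len(features)] = features
    return out
```

Without this step a classifier tends to memorize where the hand sat in the frame during data collection rather than the hand shape itself.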
betsy-biji
SL Word Interpreter is a real-time system that translates Indian Sign Language digit gestures into numbers using a webcam. It detects hand landmarks with MediaPipe, analyzes finger states using geometric heuristics, and displays and speaks the detected digit with stable, smooth predictions.
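The "geometric heuristics" approach above needs no trained model at all: since MediaPipe image coordinates grow downward in y, a finger of an upright hand is extended when its tip sits above its PIP joint, and counting extended fingers yields the digit. A minimal sketch (thumb handling assumes an upright right hand):

```python
# MediaPipe hand-landmark indices for each fingertip and its PIP joint.
TIP = {"index": 8, "middle": 12, "ring": 16, "pinky": 20}
PIP = {"index": 6, "middle": 10, "ring": 14, "pinky": 18}

def count_raised_fingers(lm):
    """Count extended fingers from 21 (x, y) landmarks of an upright hand.

    Image y grows downward, so a finger is 'up' when its tip is above
    its PIP joint. The thumb is compared on x instead (right hand assumed).
    """
    up = 0
    for name in TIP:
        if lm[TIP[name]][1] < lm[PIP[name]][1]:
            up += 1
    if lm[4][0] < lm[3][0]:        # thumb tip left of its IP joint
        up += 1
    return up
```

This kind of rule-based detector is brittle to hand rotation, which is why the trained-classifier projects in this list generally generalize better.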
faissssss
A real-time, two-handed sign language translator built with Python, MediaPipe, and Scikit-learn. This project captures hand gestures via webcam, extracts landmarks, and translates them into text using a Random Forest classifier. It includes a Flask-based web application for easy usage.
ThomsonCayley
A high-performance Python tool for real-time American Sign Language (ASL) translation. Using MediaPipe for hand landmark tracking, it captures webcam feed to interpret gestures into English text and speech. Features a streamlined UI for seamless communication between ASL users and non-signers.
subhamsje
👋 SignSpeak is an AI-powered real-time sign language recognition web application that translates hand gestures into text (and optional speech) directly in the browser. Built using MediaPipe hand tracking and TensorFlow-based gesture recognition, the system detects 21 hand landmarks per hand and performs ultra-low-latency inference.
arAkhil019
Built an ML-powered web app that translates YouTube captions into 3D sign language using Mediapipe Hand Landmark detection model, and Three.js for 3D Animations, boosting accessibility for people. Engineered real-time hand sign animation by mapping ML-generated coordinates to 3D hand models rendered in a floating overlay using Three.js.
Sign language detection uses computer vision and AI to recognize hand gestures, facial expressions, and body poses in real time, translating them into text or speech to bridge communication gaps. Key technologies: MediaPipe/OpenCV for tracking hand landmarks and body points; CNNs, which excel at identifying static hand shapes (alphabets).
ayush-h-pawar
Engineered a real-time platform translating Indian Sign Language gestures into text and speech using LSTM neural networks. Utilized MediaPipe for precise hand-landmark detection with 95%+ accuracy. Built an accessible Flask web app supporting dynamic gestures and Text-to-Speech output for improved inclusivity.
Batuk55
A real-time computer vision and deep learning–based system that translates sign language gestures into readable text and speech. Uses MediaPipe for hand landmark detection and CNN + LSTM models to recognize static and dynamic gestures from live video, enabling accessible and sensor-free communication.