Problem Statement
Globally, millions of individuals rely on sign language as their primary mode of communication. However, a significant portion of the hearing population lacks knowledge of sign language, leading to communication barriers that impact everyday interactions, limit access to essential services, and marginalize the Deaf community.
There is a pressing need for affordable, real-time assistive technology that can bridge this communication gap and foster inclusivity in schools, workplaces, healthcare settings, and public spaces.
Idea
Un-Mute is a wearable glove-based system that captures hand and finger gestures used in sign language and translates them in real time into spoken or written text through a mobile or web application. By using a combination of flex sensors, motion tracking, and machine learning models, the system identifies and classifies sign language gestures, enabling seamless communication between signers and non-signers.
Objectives
- To develop a low-cost wearable device that can interpret sign language gestures accurately.
- To leverage flex sensors and motion detection (MPU6050) to capture detailed hand movements.
- To design a machine learning pipeline capable of classifying gestures into alphabets, words, or sentences.
- To build a user-friendly application interface that outputs translated text or audio.
- To foster inclusivity by enabling better communication between the Deaf and hearing communities.
- To make the solution open-source and customizable, encouraging further development and adaptation.
Final Solution
The Un-Mute system consists of a glove embedded with 5 flex sensors (one per finger) and an MPU6050 accelerometer/gyroscope module to detect hand posture and orientation. All sensor data is collected and transmitted via an ESP32 microcontroller to a connected application.
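The wire format the glove uses is not specified here; as an illustration only, the sketch below assumes the ESP32 streams each sensor frame as a comma-separated line of 11 values (five flex readings followed by the MPU6050's three accelerometer and three gyroscope axes) and shows how the receiving application might parse it.

```python
def parse_frame(line: str) -> list[float]:
    """Parse one CSV line of glove sensor data.

    Assumed (hypothetical) format: 5 flex-sensor readings, then
    3 accelerometer and 3 gyroscope values from the MPU6050:
    "f1,f2,f3,f4,f5,ax,ay,az,gx,gy,gz".
    """
    values = [float(v) for v in line.strip().split(",")]
    if len(values) != 11:
        raise ValueError(f"expected 11 values, got {len(values)}")
    return values

# Example frame: five flex readings plus MPU6050 accel/gyro axes.
frame = parse_frame("512,498,601,300,455,0.02,-0.98,0.10,1.5,-0.3,0.0")
```

A fixed-length, validated frame like this gives the downstream classifier a consistent feature vector regardless of whether the data arrives over Wi-Fi, Bluetooth, or serial.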
Hardware Features:
- Flexible, synthetic glove for comfort and sensor integration.
- 3D-printed mounts for accurate sensor placement and modularity.
- Real-time data transmission over Wi-Fi/Bluetooth.
Software Pipeline:
- Arduino IDE: for ESP32 firmware and sensor communication.
- Python (VS Code): for gesture recognition using scikit-learn, TensorFlow, and Keras.
- Flask-based web app: for live gesture translation and output display.
- Database (SQLite/MySQL): for storing gestures, user logs, and models.
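The gesture-recognition step in the pipeline above can be sketched with scikit-learn. The feature layout (11 values per frame, matching five flex sensors plus six MPU6050 axes), the gesture labels, and the random stand-in data are all illustrative assumptions, not the project's actual dataset or model.

```python
# Minimal gesture-classification sketch using scikit-learn.
# Training data is random stand-in data; labels are example gestures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in dataset: 100 frames of 11 sensor values each,
# labelled with one of three hypothetical gestures.
X = rng.random((100, 11))
y = rng.choice(["hello", "yes", "no"], size=100)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

# Classify a new, unseen sensor frame.
pred = clf.predict(rng.random((1, 11)))[0]
```

In practice the training set would come from the gesture database, and a deep model (TensorFlow/Keras) could replace the random forest once enough labelled frames are collected.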
The system currently recognizes a set of predefined gestures and is designed to scale to more complex sign language structures. It aims to be affordable, intuitive, and impactful, bringing voice to the unspoken and making communication inclusive and effortless.
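To show how the pieces could fit together, here is a minimal Flask endpoint that accepts one sensor frame and returns a predicted gesture as JSON. The route name, payload shape, and the placeholder prediction are assumptions for illustration; in the real system a trained model would do the classification.

```python
# Minimal sketch of a Flask translation endpoint. The "/translate"
# route and JSON payload {"frame": [...11 floats...]} are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/translate", methods=["POST"])
def translate():
    frame = request.get_json(silent=True) or {}
    values = frame.get("frame", [])
    if len(values) != 11:
        return jsonify(error="expected 11 sensor values"), 400
    # Placeholder: a trained classifier would map the frame
    # to a gesture label here.
    return jsonify(gesture="hello")
```

The web app would call an endpoint like this for each incoming frame and render the returned label as text, or pass it to a text-to-speech engine for audio output.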