This project uses deep learning to recognize emotions from speech. Trained on the EMO-DB and RAVDESS datasets, it extracts audio features (MFCCs) and classifies emotions such as anger, happiness, and sadness, with applications in voice assistants and mental-health monitoring.
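The repository itself is not shown here, so as a rough illustration of the MFCC feature-extraction step the description mentions, here is a minimal NumPy-only sketch (framing, Hann window, mel filterbank, DCT). The function name, frame sizes, and the synthetic test tone are assumptions for the example; a real pipeline would typically use `librosa.feature.mfcc` on actual EMO-DB/RAVDESS recordings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Compute MFCCs: one row of n_mfcc coefficients per analysis frame."""
    # Frame the signal and apply a Hann window to each frame.
    frames = np.array([signal[s:s + n_fft] * np.hanning(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank spanning 0 .. sr/2.
    hz_pts = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then a DCT-II to decorrelate into cepstral coefficients.
    logmel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1) / (2 * n_mels))
    return logmel @ dct.T

# Example: one second of a synthetic 440 Hz tone stands in for a real utterance.
sr = 16000
t = np.arange(sr) / sr
feats = mfcc(np.sin(2 * np.pi * 440.0 * t), sr=sr)
print(feats.shape)
```

The resulting (frames × 13) matrix is the kind of feature map a downstream classifier (e.g. a CNN or LSTM) would consume to predict the emotion label.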
Stars: 0 · Forks: 0 · Watchers: 0 · Open Issues: 0
Commits: 2