This project uses an LSTM-based neural network to perform speech emotion recognition on the TESS dataset. MFCC features are extracted from each audio clip, and the LSTM processes the resulting frame sequence to capture temporal patterns in speech. The model classifies seven emotions and achieves 72% validation accuracy. It has potential applications in virtual assistants and mental health tools.
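The core idea above, feeding a sequence of MFCC frames through an LSTM, can be sketched with a single NumPy LSTM cell step. This is a minimal illustration only; the layer sizes, weight initialization, and function names are assumptions, not the project's actual code (which would typically use a framework such as Keras).

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM cell step over a feature frame (e.g. one MFCC vector).

    Gate layout in W, U, b: [input, forget, candidate, output],
    stacked along axis 0. All names here are illustrative.
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b           # pre-activations, shape (4H,)
    i = 1.0 / (1.0 + np.exp(-z[:H]))     # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))  # forget gate
    g = np.tanh(z[2*H:3*H])              # candidate cell state
    o = 1.0 / (1.0 + np.exp(-z[3*H:]))   # output gate
    c = f * c_prev + i * g               # new cell state
    h = o * np.tanh(c)                   # new hidden state
    return h, c

# Toy run: 13 MFCC coefficients per frame, hidden size 8, 5 frames
rng = np.random.default_rng(0)
D, H = 13, 8
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for frame in rng.normal(size=(5, D)):    # iterate over the MFCC sequence
    h, c = lstm_step(frame, h, c, W, U, b)
```

After the loop, `h` is the final hidden state summarizing the whole frame sequence; in a full model a dense softmax layer over `h` would produce the seven emotion probabilities.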
Stars: 0 · Forks: 0 · Watchers: 0 · Open Issues: 0
Repository health: no package.json found (likely not a Node.js project).
2 commits · latest: Create Speech_Emotion_Recognition_Sound_Classification_(1) (3) (9528018)