Developed an emotion recognition system that classifies human emotions (e.g., happy, sad, angry) from speech audio using Mel Frequency Cepstral Coefficients (MFCCs) and a deep learning model in PyTorch. Preprocessed .wav audio data, extracted the relevant acoustic features, and trained a fully connected neural network to predict emotion labels.
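The pipeline described above (MFCC feature vectors fed into a fully connected classifier) can be sketched as follows. This is a minimal NumPy stand-in, not the repository's PyTorch code: the emotion set, layer sizes, and randomly initialised weights are all illustrative assumptions, and a real run would extract MFCCs from .wav files (e.g. with librosa) and learn the weights by training.

```python
import numpy as np

# Illustrative sketch of the classifier: a fully connected network
# mapping per-clip MFCC feature vectors to emotion probabilities.
# All sizes and weights below are assumptions, not the repo's values.

EMOTIONS = ["happy", "sad", "angry"]
N_MFCC = 40    # MFCC coefficients averaged over time: one vector per clip
HIDDEN = 64    # hidden layer width (illustrative)

rng = np.random.default_rng(0)

# Randomly initialised weights stand in for trained parameters.
W1 = rng.standard_normal((N_MFCC, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, len(EMOTIONS))) * 0.1
b2 = np.zeros(len(EMOTIONS))

def softmax(z):
    # Subtract the row max for numerical stability before exponentiating.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict(mfcc_batch):
    """Forward pass: (batch, N_MFCC) MFCC vectors -> emotion probabilities."""
    h = np.maximum(mfcc_batch @ W1 + b1, 0.0)  # ReLU hidden layer
    return softmax(h @ W2 + b2)

# One fake clip's averaged MFCC vector standing in for real audio features.
features = rng.standard_normal((1, N_MFCC))
probs = predict(features)
label = EMOTIONS[int(probs.argmax())]
```

In the actual project the same forward pass would be expressed with `torch.nn.Linear` layers and trained with a cross-entropy loss over the labelled clips.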
Stars: 0 · Forks: 0 · Watchers: 0 · Open Issues: 0
Overall repository health assessment: no package.json found (this might not be a Node.js project).
Commits: 2