Multimodal emotion recognition is a challenging task because emotions can be expressed through several modalities at once. It has applications in many fields, such as human-computer interaction, crime detection, healthcare, and multimedia retrieval. In recent years, neural networks have achieved remarkable success in recognizing emotional states. Motivated by these advances, we present a multimodal emotion recognition system based on body language, facial expressions, and speech. This repository presents the techniques used in the Multimodal Emotion Recognition in Polish challenge. To detect the emotional state in a video, the data is first preprocessed and robust features are then extracted: facial landmark detection for the facial-expression modality and MFCC features for the speech modality.
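As an illustration of the speech pipeline, below is a minimal NumPy-only sketch of MFCC extraction (framing, windowing, power spectrum, mel filterbank, log, DCT-II). The frame length, hop size, and filter counts here are common defaults chosen for illustration, not necessarily the parameters used in the challenge submission.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                      # rising slope
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                      # falling slope
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_coeffs=13):
    """Return an (n_frames, n_coeffs) MFCC matrix for a 1-D signal."""
    # Split the signal into overlapping frames and apply a Hamming window.
    n_frames = 1 + max(0, len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum of each frame.
    spec = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Log mel-filterbank energies.
    energies = np.log(spec @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # DCT-II over the filterbank axis yields the cepstral coefficients.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs),
                                  (2 * n + 1) / (2.0 * n_filters)))
    return energies @ dct.T
```

A production system would normally use a library such as `librosa` for this step; the sketch above only shows what those features are made of.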
Stars: 10 · Forks: 3 · Watchers: 10 · Open Issues: 0
7 commits, most recent:
- 147166a — Updated the file to change dataset and created dataset using numpy array
- ff6b562 — Added the Kinect Motion Capture, Facial Expressions and the Descriptors cleaning dataset