Deep Neural Networks based Music System using Facial Emotion
Abstract
In recent years, facial emotion recognition has become an active area of interest, attracting many developers and practitioners with its wide range of applications. Recognizing the emotions of human individuals remains difficult, however, because emotions are diverse and often ambiguous. This paper presents a model designed to overcome the drawbacks of conventional facial emotion recognition using deep learning. The model employs convolutional neural networks (CNNs) to learn spatial hierarchies and patterns, extracting facial features from images captured in real time and then classifying the emotion. Pre-trained CNN architectures such as VGG, ResNet, and MobileNet are utilized to enhance the model's performance. The dataset used for training and testing contains more than 35,000 images labeled with seven emotion categories: anger, disgust, fear, happy, neutral, surprise, and sad. Each color image is converted to grayscale, and data augmentation is applied to enlarge the dataset and improve the accuracy rate. Beyond facial emotion recognition itself, this paper integrates the model with a multimedia system for a real-time application: an automatic music player driven by facial emotion recognition using deep learning. The music player maintains a list of music assigned to each emotion and dynamically plays music matching the user's detected emotional state. Collectively, the fusion of facial emotion recognition with a music player offers a promising foundation for future breakthroughs in technology.
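The emotion-to-music mapping described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the seven emotion labels come from the abstract, but the playlist contents, function names, and the argmax-over-scores selection (standing in for the CNN's softmax output) are assumptions for illustration only.

```python
import random

# The seven emotion categories named in the abstract.
EMOTIONS = ["anger", "disgust", "fear", "happy", "neutral", "surprise", "sad"]

# Hypothetical emotion -> playlist mapping; real track lists would be
# configured by the system or the user.
PLAYLISTS = {
    "anger":    ["calm_piano.mp3", "slow_strings.mp3"],
    "disgust":  ["ambient_01.mp3"],
    "fear":     ["soothing_guitar.mp3"],
    "happy":    ["upbeat_pop.mp3", "dance_mix.mp3"],
    "neutral":  ["lofi_beats.mp3"],
    "surprise": ["energetic_rock.mp3"],
    "sad":      ["uplifting_acoustic.mp3"],
}

def predicted_emotion(scores):
    """Return the label with the highest classifier score (argmax),
    as would be read off a CNN's softmax output layer."""
    return max(scores, key=scores.get)

def pick_track(emotion, rng=random):
    """Choose a track from the playlist assigned to the detected emotion."""
    return rng.choice(PLAYLISTS[emotion])

# Example with fake softmax-like scores over the seven classes.
scores = {"anger": 0.05, "disgust": 0.02, "fear": 0.03, "happy": 0.70,
          "neutral": 0.10, "surprise": 0.06, "sad": 0.04}
emotion = predicted_emotion(scores)
track = pick_track(emotion)
print(emotion, track)
```

In a full system, `scores` would be produced by the CNN classifier on each captured frame, and the player would switch playlists only when the detected emotion changes persistently, to avoid rapid track switching.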