This document describes an emotion-based music player that automatically generates playlists matched to a user's detected mood. The system comprises three main modules: an emotion extraction module that analyzes facial expressions from webcam images to determine the user's mood, an audio feature extraction module that extracts acoustic features from songs, and an emotion-audio recognition module that maps the facial and audio features onto one another to select songs for the playlist. By classifying both facial expressions and songs into categories such as happy, sad, and angry, the system removes the effort of assembling playlists by hand and produces ones that match, or are intended to influence, the user's current emotional state.
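The final mapping step can be illustrated with a minimal sketch. This is not the document's implementation: the `Song` type, the emotion labels, and the `build_playlist` function are all hypothetical stand-ins, and the real system would supply `detected_emotion` from the facial-expression classifier and each song's emotion label from the audio feature extraction module.

```python
from dataclasses import dataclass

# Assumed emotion categories, following the examples in the text.
EMOTIONS = ("happy", "sad", "angry")


@dataclass
class Song:
    title: str
    emotion: str  # hypothetical label produced by audio feature extraction


def build_playlist(detected_emotion: str, library: list[Song], size: int = 10) -> list[Song]:
    """Select songs whose audio-derived emotion matches the detected mood.

    detected_emotion would come from the facial-expression module in the
    real system; here it is passed in directly.
    """
    if detected_emotion not in EMOTIONS:
        raise ValueError(f"unknown emotion: {detected_emotion!r}")
    matches = [song for song in library if song.emotion == detected_emotion]
    return matches[:size]


# Toy library standing in for the analyzed song collection.
library = [
    Song("Sunny Day", "happy"),
    Song("Blue Rain", "sad"),
    Song("Storm Front", "angry"),
    Song("Bright Morning", "happy"),
]

playlist = build_playlist("happy", library)
```

A production version would replace the exact-match filter with a similarity score between the facial-feature vector and each song's audio-feature vector, but the matching structure is the same.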