Browsing by Author "Dissanayaka, D.M.M.T."

Now showing 1 - 2 of 2
  • Item
    An Emotion-Aware Music Playlist Generator for Music Therapy
    (Faculty of Computing and Technology, University of Kelaniya, Sri Lanka, 2016) Dissanayaka, D.M.M.T.; Liyanage, S.R.
    Music has the ability to influence both mental and physical health. Music therapy is the application of music to rehabilitate brain activity and to maintain both mental and physical health. Music therapy comes in two forms: active and receptive. In receptive therapy, the patient listens to suitable music tracks. Music therapy is normally used by people who suffer from disabilities or mental ailments, but its healing benefits can be experienced by anyone, at any age. This research proposes an Android music application with a playlist auto-generated according to the user's emotional state, which can be used in telemedicine as well as in day-to-day life. Three emotional conditions were considered in this study: happy, sad, and angry. Live images of the user are captured from an Android device. The face detection API available in the Android platform is used to detect human faces and eye positions. After a face is detected, the face area is cropped. The image is grey-scaled and converted to a standard size in order to reduce noise and compress the image. The image is then sent to the MATLAB-based image-recognition sub-system over a client-server socket connection. A Gaussian filter is used to reduce noise further in order to maintain high accuracy. Edges of the image are detected using Canny edge detection to capture the details of the facial features; the resulting images appear as sets of connected curves that indicate the surface boundaries. Emotion recognition is carried out using training datasets of happy, sad, and angry images that are input to the emotion-recognition sub-system implemented in MATLAB, using Eigenface-based pattern recognition. To create the Eigenfaces, an average face for each of the three categories is created by averaging the database images in that category pixel by pixel. The average image is subtracted from each database image to obtain the differences between the images in the dataset and the average face, and each image is formed into a column vector. The covariance matrix is calculated to find the eigenvectors and their associated eigenvalues, and the weights of the Eigenfaces are then calculated. To find the matching emotional label, the Euclidean distance between the weights of the input image and those of each category is calculated; the class with the lowest distance is identified. The identified label (sad, angry, or happy) is sent back to the Android application. Songs that are pre-categorised as happy, sad, or angry are stored in the Android application; when the emotional label of the perceived face image is received, songs relevant to that label are loaded into the Android music player. 200 face images were collected at the University of Kelaniya for validation. Another 100 happy, 100 sad, and 100 angry images were collected for testing. Out of the 100 test cases with happy faces, 70 were detected as happy; out of the 100 sad faces, 61 were detected as sad; and out of the 100 angry faces, 67 were successfully detected. The overall accuracy of the developed system over the 300 test cases was 66%. This concept can be extended for use in telemedicine, and the system has to be made more robust to noise, different poses, and structural components. The system can also be extended to include other emotions that are recognizable via facial expressions.
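The preprocessing chain described above (grey-scaling, resizing to a standard size, Gaussian filtering, Canny edge detection) can be illustrated with a minimal Python/OpenCV sketch; the image size, kernel size, and thresholds below are illustrative assumptions, not values reported in the abstract.

```python
# Minimal sketch of the described preprocessing chain, assuming OpenCV.
# All numeric parameters are illustrative, not taken from the paper.
import cv2

def preprocess_face(face_path, size=(64, 64)):
    img = cv2.imread(face_path, cv2.IMREAD_GRAYSCALE)  # grey-scale the cropped face
    img = cv2.resize(img, size)                        # standard size reduces noise and image size
    img = cv2.GaussianBlur(img, (5, 5), 0)             # Gaussian filter for further noise reduction
    return cv2.Canny(img, 50, 150)                     # edges as connected curves on surface boundaries
```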
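The Eigenface steps in the abstract (average face, difference images, eigenvectors of the covariance matrix, weight projection, nearest Euclidean distance) map onto a short NumPy sketch like the one below. Note that the abstract describes per-category average faces, whereas this sketch uses the more common single mean-face formulation; the function names and the number of retained components are assumptions.

```python
# Eigenface-style training and nearest-distance classification, assuming
# `images` is an (N, H*W) float array of flattened grey-scale faces and
# `labels` is a length-N list of emotion labels. A sketch, not the
# authors' implementation.
import numpy as np

def train_eigenfaces(images, n_components=20):
    mean_face = images.mean(axis=0)             # average face, pixel by pixel
    diffs = images - mean_face                  # differences from the average face
    # Eigenvectors of the small (N, N) matrix yield the Eigenfaces without
    # forming the full (H*W, H*W) covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(diffs @ diffs.T)
    top = np.argsort(eigvals)[::-1][:n_components]
    eigenfaces = diffs.T @ eigvecs[:, top]      # (H*W, n_components) Eigenface basis
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    weights = diffs @ eigenfaces                # Eigenface weights per training image
    return mean_face, eigenfaces, weights

def classify(face, mean_face, eigenfaces, weights, labels):
    w = (face - mean_face) @ eigenfaces         # project the input image
    dists = np.linalg.norm(weights - w, axis=1) # Euclidean distance to each weight vector
    return labels[int(np.argmin(dists))]        # class with the lowest distance
```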
  • Item
    Real Time Emotion Based Music Player for Android
    (Faculty of Graduate Studies, University of Kelaniya, 2015) Dissanayaka, D.M.M.T.; Liyanage, S.R.
    Listening to music has been found to affect human brain activity. Emotion-based music players with automated playlists can help users maintain a selected emotional state. This research proposes an emotion-based music player that creates playlists based on real-time photos of the user. Two emotional states, happy and not-happy, were considered in this study. The user's images were captured in real time using an Android device camera. Grey-scaled images were used to compress the image files. The eye and lip areas were cropped and sent to the MATLAB backend via a client-server socket connection. Gaussian filtering was applied to reduce noise, and the Canny edge detection algorithm was used for edge detection. Eigenface-based pattern recognition was used for emotion recognition: PCA eigenvectors were learnt from the dataset via unsupervised training to obtain the Eigenface models. The dissimilarity between pairs of face images projected into the Eigen space was measured using the Euclidean distance, and the matched image was the one with the lowest dissimilarity. The identified label, happy or not-happy, was transmitted back to the Android music player via the client-server socket connection. Songs pre-categorised as happy or not-happy are stored in the Android application; when the emotional label of the perceived face image is received, songs relevant to that label are loaded into the Android music player. 120 face images were collected at the Department of Statistics & Computer Science, University of Kelaniya for validation. Another 100 happy and 100 not-happy images were collected for testing. Out of the 100 test cases with happy faces, 75 were detected as happy, and out of the 100 not-happy faces, 66 were classified as not-happy. The overall accuracy of the developed system over the 200 test cases was 70.5%. This concept can be extended from a single face to multiple faces, and the system has to be made more robust to noise, different poses, and structural components. The system can also be extended to include other emotions that are recognizable via facial expressions.
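The Android-to-backend exchange is described in both abstracts only as a client-server socket connection; a minimal client-side stand-in in Python might look as follows, where the host, port, and length-prefixed framing are assumptions made for the sketch, not the paper's protocol.

```python
# Client-side sketch of the socket exchange: send one preprocessed image,
# receive one emotion label ("happy" / "not-happy"). Host, port, and the
# 4-byte length prefix are assumptions.
import socket

def request_emotion_label(image_bytes, host="127.0.0.1", port=5000):
    with socket.create_connection((host, port)) as s:
        s.sendall(len(image_bytes).to_bytes(4, "big") + image_bytes)
        return s.recv(32).decode("utf-8").strip()
```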
