Multimedia Affective Analysis Group
Our work focuses on emotional understanding of multimedia content and automatic emotion recognition. Emotional understanding of multimedia content involves developing models that can automatically predict the emotion expressed by that content; for example, the emotion expressed in a song can be estimated from its acoustic content. We are also interested in multimodal emotion recognition from facial expressions and physiological responses. Our team is also affiliated with the CVML Lab at the Computer Science department.
Automatic recognition of visual interest and interestingness
This project’s aims are twofold: first, studying knowledge emotions in multimedia search and browsing; second, developing tools for the automatic recognition of knowledge emotions. We study the underlying attributes that constitute visual interest, e.g., novelty, coping potential, and quality, and then learn these sub-components from both the visual content and users’ spontaneous reactions. Our analyses of image and GIF interestingness demonstrated the feasibility of predicting overall interestingness from visual content.
Funded by: Swiss National Science Foundation
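As a rough illustration of the idea of combining sub-components into an overall score (the attribute names and the linear model here are our illustrative assumptions, not the project's actual pipeline), one can fit a least-squares model that maps attribute scores such as novelty, coping potential, and quality to an overall interestingness rating:

```python
import numpy as np

# Illustrative sketch only: combine hypothetical sub-component scores
# (novelty, coping potential, quality) into an overall interestingness
# score with a linear least-squares model.

def fit_interestingness(attributes, ratings):
    """Fit weights mapping attribute scores to overall interestingness.

    attributes: (n_samples, n_attributes) array of sub-component scores.
    ratings:    (n_samples,) array of overall interestingness ratings.
    Returns a weight vector with the intercept appended last.
    """
    X = np.hstack([attributes, np.ones((attributes.shape[0], 1))])
    w, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    return w

def predict_interestingness(attributes, w):
    X = np.hstack([attributes, np.ones((attributes.shape[0], 1))])
    return X @ w

# Toy data: interestingness depends mostly on novelty and quality.
rng = np.random.default_rng(0)
attrs = rng.uniform(0.0, 1.0, size=(200, 3))   # novelty, coping, quality
y = 0.6 * attrs[:, 0] + 0.1 * attrs[:, 1] + 0.3 * attrs[:, 2]

w = fit_interestingness(attrs, y)
preds = predict_interestingness(attrs, w)
```

In practice the sub-component scores would themselves be predicted from visual features or from users' spontaneous reactions; the linear combination above only sketches the final fusion step.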
Inter-modality interaction between EEG signals and facial expressions for emotion recognition
The goal of this project is to identify the spatio-temporal patterns of EEG artifacts caused by facial expressions. We record EEG signals and facial expressions from participants under different conditions, and then use signal processing and machine learning models to automatically learn and separate muscular from cerebral activities. The muscular activity captured in the EEG signals can then be discarded for EEG analysis and used separately for emotion recognition.
Funded by: Hasler Stiftung
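A minimal sketch of the separation idea (not the project's actual method, which learns the separation from data): muscular activity recorded on EEG electrodes is concentrated at higher frequencies than most cerebral rhythms, so a crude first pass splits each channel into low- and high-frequency parts by masking the spectrum:

```python
import numpy as np

# Toy illustration: separate a "cerebral" low-frequency part from a
# "muscular" high-frequency part by zeroing FFT bins above a cutoff.
# The project itself uses learned source-separation models instead.

def split_bands(signal, fs, cutoff_hz=20.0):
    """Split a 1-D signal into low- and high-frequency components."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    low = spectrum.copy()
    low[freqs > cutoff_hz] = 0.0
    cerebral = np.fft.irfft(low, n=signal.size)   # kept for EEG analysis
    muscular = signal - cerebral                  # reused for emotion recognition
    return cerebral, muscular

# Toy mixture: a 10 Hz alpha-like rhythm plus 40 Hz muscle-like activity.
fs = 256
t = np.arange(fs) / fs
mixed = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
cerebral, muscular = split_bands(mixed, fs)
```

Real EMG overlaps the EEG bands, which is exactly why the project resorts to learned spatio-temporal models rather than a fixed frequency cutoff.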
Emotional analysis in music
Emotional expression is a central component of music and of music listening habits within a context; automatic music emotion recognition is therefore key to successful music recommendation and playlist generation. Like broad concepts such as genre or world music styles, musical emotions are influenced by every element of the musical audio. In this project we focus on the computational extraction of what we call "mid-level features", so named because they are situated between low-level timbres, chords, and beats on one side, and high-level styles and emotions on the other. These elements, such as melodiousness, rhythmic complexity, harmoniousness, and atonality, are created through musical structure, both vertical (harmonic structure) and horizontal (repetition). We are working on methods to extract these mid-level features from Western music.
Funded by: Swiss excellence scholarship awarded to A. Aljanaki
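To make the notion of a mid-level feature concrete, here is a toy proxy of our own devising (the definition below is illustrative, not the project's actual feature): "rhythmic complexity" estimated as the entropy of the distribution of inter-onset intervals, so a metronomic beat scores low and an irregular rhythm scores higher:

```python
import math
from collections import Counter

# Illustrative proxy for a mid-level feature: Shannon entropy (bits)
# of quantized inter-onset intervals. A steady pulse uses one interval
# (entropy 0); irregular rhythms spread over many intervals.

def rhythmic_complexity(onset_times, resolution=0.05):
    """Entropy of inter-onset intervals, quantized to `resolution` seconds."""
    intervals = [round((b - a) / resolution)
                 for a, b in zip(onset_times, onset_times[1:])]
    counts = Counter(intervals)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

steady = [0.0, 0.5, 1.0, 1.5, 2.0]      # metronomic onsets, low complexity
irregular = [0.0, 0.3, 1.0, 1.2, 2.1]   # uneven onsets, higher complexity
```

In the project, mid-level features like this are instead learned from audio and listener annotations rather than hand-defined, but the example shows how such a feature sits above raw onsets and below a style or emotion label.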
- A. Aljanaki, Y.-H. Yang, M. Soleymani. Developing a Benchmark for Emotional Analysis in Music, PLOS ONE, to appear, 2017.
- M. Soleymani, F. Villaro-Dixon, T. Pun, G. Chanel. Toolbox for Emotional fEAture extraction from Physiological signals (TEAP), Frontiers in ICT - Human-Media Interaction, 2017.
- M. Soleymani, S. Asghari-Esfeden, Y. Fu, M. Pantic. Analysis of EEG signals and facial expressions for continuous emotion detection, IEEE Transactions on Affective Computing, 7(1): pp. 17-28, 2016.
- M. Gygli, M. Soleymani. Analyzing and Predicting GIF Interestingness, ACM International Conference on Multimedia (MM), Amsterdam, the Netherlands, 2016.
- M. Soleymani. The Quest for Visual Interest, ACM International Conference on Multimedia (MM), Brisbane, Australia, 2015.
- M. Soleymani, A. Aljanaki, F. Wiering, R.C. Veltkamp. Content-based music recommendation using underlying music preference structure, IEEE International Conference on Multimedia and Expo (ICME), Torino, Italy, 2015.