A musical system for emotional expression

Abstract:

The automatic control of emotional expression in music is a challenge that is far from solved. This paper describes research conducted with the aim of developing a system with such capabilities. The system works with standard MIDI files and operates in two stages: the first offline, the second online. In the first stage, MIDI files are partitioned into segments with uniform emotional content. These segments undergo feature extraction, are classified according to emotional values of valence and arousal, and are stored in a music base. In the second stage, segments are selected and transformed according to the desired emotion and then arranged into song-like structures. The system uses a knowledge base grounded in empirical results from Music Psychology and refined with data obtained from questionnaires; we also plan to use data obtained with other methods of emotion recognition in the near future. For the experimental setup, we prepared web-based questionnaires with musical segments of different emotional content. After listening to each segment, subjects classified it with values for valence and arousal. The modularity, adaptability, and flexibility of our system's architecture make it applicable in various contexts such as video games, theater, film, and healthcare.
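To make the two-stage pipeline concrete, the sketch below traces the flow the abstract describes: offline feature extraction and valence/arousal classification into a music base, then online selection and transformation toward a target emotion. Everything here is a hypothetical illustration: the segment features, the linear classification rules, and the tempo-rescaling transformation are stand-ins for the paper's knowledge base and actual methods, not a reconstruction of them.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """A MIDI segment reduced to a few illustrative extracted features."""
    name: str
    tempo_bpm: float    # average tempo
    mode_major: bool    # major (True) or minor mode
    mean_pitch: float   # average MIDI pitch

def classify(seg: Segment) -> tuple[float, float]:
    """Offline stage: map features to (valence, arousal) in [-1, 1].

    The real system derives this mapping from Music Psychology findings
    refined with listener questionnaires; these linear rules only
    illustrate the idea (major mode and higher pitch raise valence,
    faster tempo raises arousal).
    """
    clamp = lambda x: max(-1.0, min(1.0, x))
    valence = (0.6 if seg.mode_major else -0.6) + (seg.mean_pitch - 60) / 60
    arousal = (seg.tempo_bpm - 100) / 80
    return clamp(valence), clamp(arousal)

def select(base: list[tuple[Segment, tuple[float, float]]],
           target: tuple[float, float]) -> Segment:
    """Online stage: pick the stored segment closest to the desired emotion."""
    def dist(va: tuple[float, float]) -> float:
        return (va[0] - target[0]) ** 2 + (va[1] - target[1]) ** 2
    return min(base, key=lambda entry: dist(entry[1]))[0]

def transform(seg: Segment, target: tuple[float, float]) -> Segment:
    """Online stage: nudge the segment toward the target arousal by
    rescaling tempo (one of several transformations a system like this
    could apply before arranging segments into song-like structures)."""
    desired_bpm = 100 + target[1] * 80
    return Segment(seg.name, 0.5 * (seg.tempo_bpm + desired_bpm),
                   seg.mode_major, seg.mean_pitch)

if __name__ == "__main__":
    # Offline: classify segments and store them in the music base.
    segments = [Segment("lullaby", 70, True, 64),
                Segment("march", 140, True, 67),
                Segment("dirge", 60, False, 55)]
    music_base = [(s, classify(s)) for s in segments]

    # Online: request a calm, positive emotion (high valence, low arousal).
    target = (0.7, -0.5)
    chosen = transform(select(music_base, target), target)
    print(f"Selected and transformed segment: {chosen}")
```

Run as-is, the driver selects the "lullaby" segment (closest to the calm, positive target) and halves the gap between its tempo and the tempo implied by the target arousal, mirroring the select-then-transform order the paper's online stage follows.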

Keywords: Knowledge-based system, Automatic music production, Expression of emotions, Music and emotions, Real-time system

Article history: Received 9 March 2010, Revised 10 May 2010, Accepted 14 July 2010, Available online 18 July 2010.

DOI: https://doi.org/10.1016/j.knosys.2010.06.006