#CFP

12th International Conference on Digital Libraries for Musicology

📍 Sogang University, Seoul
📅 26/09/2025

In association with #ISMIR2025, DLfM invites papers on digital libraries, MIR, music encodings, and cultural heritage. Hybrid format; in-person encouraged.

Deadline (EXTENDED): 16/05/2025

https://dlfm.web.ox.ac.uk/

#DigitalHumanities #Musicology #MIR #ISMIR #DigitalMusicology #ComputationalMusicology #ComputationalMusicProcessing

#CFP

26th International Society for Music Information Retrieval Conference (#ISMIR2025)

🗓️ 21–25 September 2025 | 📍 Daejeon, Korea & Online

ISMIR 2025 invites contributions in all areas of #MusicInformationRetrieval, including computational music analysis, algorithms, and applications. Topics include #MIRfundamentals, #MachineLearning, #ComputationalEthnomusicology, and #MusicAI.

Deadline: 21/03/2025

https://ismir2025.ismir.net/

#Musicology #CognitiveScience #MusicTechnology #DataScience #ISMIR

📰 “Emotion-driven Piano Music Generation via Two-stage Disentanglement and Functional Representation”

🔗 https://arxiv.org/abs/2407.20955

#ISMIR #MER #MIR #Generative #Music

Managing the emotional aspect remains a challenge in automatic music generation. Prior works aim to learn various emotions at once, leading to inadequate modeling. This paper explores the disentanglement of emotions in piano performance generation through a two-stage framework. The first stage focuses on valence modeling of the lead sheet, and the second stage addresses arousal modeling by introducing performance-level attributes. To further capture features that shape valence, an aspect less explored by previous approaches, we introduce a novel functional representation of symbolic music. This representation aims to capture the emotional impact of major–minor tonality, as well as the interactions among notes, chords, and key signatures. Objective and subjective experiments validate the effectiveness of our framework in both emotional valence and arousal modeling. We further leverage our framework in a novel application of emotional controls, showing broad potential in emotion-driven music generation.
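To make the two-stage split concrete, here is a deliberately toy sketch of the idea: stage one conditions the lead sheet (mode and chords) on valence, stage two conditions performance-level attributes (tempo, dynamics) on arousal. All function names, data shapes, and rules below are hypothetical illustrations of the decomposition, not the paper's actual models.

```python
# Toy sketch of a two-stage valence/arousal decomposition.
# Everything here is a hypothetical illustration, not the paper's code.

def generate_lead_sheet(valence: float) -> dict:
    """Stage 1: valence (0..1) conditions tonality of the lead sheet."""
    mode = "major" if valence >= 0.5 else "minor"
    chords = ["I", "IV", "V", "I"] if mode == "major" else ["i", "iv", "v", "i"]
    return {"mode": mode, "chords": chords}

def add_performance_attributes(lead_sheet: dict, arousal: float) -> dict:
    """Stage 2: arousal (0..1) conditions performance-level attributes."""
    performed = dict(lead_sheet)
    performed["tempo_bpm"] = int(60 + 80 * arousal)  # higher arousal -> faster
    performed["velocity"] = int(40 + 60 * arousal)   # higher arousal -> louder
    return performed

def generate(valence: float, arousal: float) -> dict:
    """Compose the two stages: valence first, then arousal on top."""
    return add_performance_attributes(generate_lead_sheet(valence), arousal)
```

For example, `generate(0.2, 0.9)` yields a minor-mode lead sheet rendered fast and loud; the point is only that valence and arousal are handled by separate, composable stages.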

New journal paper by Kristina Matrossova et al., published in #TISMIR #ISMIR: https://transactions.ismir.net/articles/10.5334/tismir.158. It is about the music that sets us apart, and the music that describes us most. Joint work by #Deezer Research and #CNRS as part of the French ANR RECORDS project.

"Decoding and Visualising Intended Emotion in an Expressive Piano Performance"

presented at #ISMIR 2022 Late-breaking demo session

https://arxiv.org/abs/2303.01875

Decoding and Visualising Intended Emotion in an Expressive Piano Performance

Expert musicians can mould a musical piece to convey specific emotions that they intend to communicate. In this paper, we place a mid-level-feature-based music emotion model in this performer-to-listener communication scenario, and demonstrate real-time music emotion decoding via a small visualisation. We also extend the existing set of mid-level features using analogues of perceptual speed and perceived dynamics.
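One simple way to picture "decoding" emotion from mid-level features is a linear map from feature values to the valence–arousal plane, applied frame by frame. The sketch below is an assumption-laden toy: the feature list echoes common mid-level descriptors, but the weights are invented for illustration and are not the paper's trained model.

```python
# Toy sketch: linear decoding of (valence, arousal) from mid-level features.
# Feature names are loosely based on common mid-level descriptors; the
# weights are hypothetical, not the paper's learned parameters.

MIDLEVEL_FEATURES = ["melodiousness", "dissonance", "tonal_stability",
                     "perceptual_speed", "perceived_dynamics"]

VALENCE_W = {"melodiousness": 0.6, "dissonance": -0.5, "tonal_stability": 0.4,
             "perceptual_speed": 0.0, "perceived_dynamics": -0.1}
AROUSAL_W = {"melodiousness": 0.0, "dissonance": 0.3, "tonal_stability": -0.1,
             "perceptual_speed": 0.7, "perceived_dynamics": 0.6}

def decode_emotion(frame: dict) -> tuple:
    """Map one frame of mid-level feature values (each 0..1) to (valence, arousal)."""
    valence = sum(VALENCE_W[k] * frame[k] for k in MIDLEVEL_FEATURES)
    arousal = sum(AROUSAL_W[k] * frame[k] for k in MIDLEVEL_FEATURES)
    return valence, arousal
```

In a real-time setting, each incoming audio frame's feature vector would be decoded this way and the resulting point animated on a valence–arousal visualisation.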
