#CFP

MHDW 2026 – 15th Mining Humanistic Data Workshop

Interdisciplinary workshop on data mining, AI, and computational methods applied to humanistic data (linguistic, historical, musical, social, educational). Topics include machine learning, knowledge discovery, visualisation, and music information retrieval.

πŸ“ Chania, Greece & Online
πŸ“… 16–19 July 2026

Deadline: 19/04/2026

https://conferences.cmodlab-iu.edu.gr/mhdw2026/

#DigitalHumanities #DataMining #MusicInformationRetrieval #ComputationalMusicology

Mining Humanistic Data Workshop 2026

#CFP

26th International Society for Music Information Retrieval Conference (#ISMIR2025)

πŸ—“οΈ 21–25 September 2025 | πŸ“ Daejeon, Korea & Online

ISMIR 2025 invites contributions in all areas of #MusicInformationRetrieval, including computational music analysis, algorithms, and applications. Topics include #MIRfundamentals, #MachineLearning, #ComputationalEthnomusicology, and #MusicAI.

Deadline: 21/03/2025

https://ismir2025.ismir.net/

#Musicology #CognitiveScience #MusicTechnology #DataScience #ISMIR

ISMIR 2025

Steinberg Media Technologies just previewed SpectraLayers 11 and it blew my mind 🤯 The new unmix options can separate a Sax & Brass section, and even split Lead vocals from Backing vocals! And it all sounds so clean.

https://www.youtube.com/live/2BoEgBGiafM?feature=shared

#spectralayers #steinberg #audioplugin #machinelearning #stemseparation #mir #musicinformationretrieval #musictech #vst

SpectraLayers 11 World Premiere

Join Dom Sigalas at this streamed preview event to discover the exceptional new features in SpectraLayers 11. From its spectacular integration of AI and prec...

It looks like "foundation models" for music audio are here, or almost here: "LLark: A Multimodal Foundation Model for Music" https://arxiv.org/abs/2310.07160 This is impressive work from Spotify, especially since it appears to be fully open-source. #deeplearning #musicinformationretrieval #LLM #audio

LLark: A Multimodal Instruction-Following Language Model for Music

Music has a unique and complex structure which is challenging for both expert humans and existing AI systems to understand, and presents unique challenges relative to other forms of audio. We present LLark, an instruction-tuned multimodal model for *music* understanding. We detail our process for dataset creation, which involves augmenting the annotations of diverse open-source music datasets and converting them to a unified instruction-tuning format. We propose a multimodal architecture for LLark, integrating a pretrained generative model for music with a pretrained language model. In evaluations on three types of tasks (music understanding, captioning, reasoning), we show that LLark matches or outperforms existing baselines in music understanding, and that humans show a high degree of agreement with its responses in captioning and reasoning tasks. LLark is trained entirely from open-source music data and models, and we make our training code available along with the release of this paper. Additional results and audio examples are at https://bit.ly/llark, and our source code is available at https://github.com/spotify-research/llark.
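
To picture the "unified instruction-tuning format" step the abstract mentions, here's a toy converter. All field names, the questions, and the example annotation are my own invention for illustration, not LLark's actual schema:

```python
# Toy sketch: convert a music-dataset annotation into instruction-tuning
# records, loosely in the spirit of the dataset-unification step LLark's
# abstract describes. Field names and questions are hypothetical.

def to_instruction_records(annotation: dict) -> list:
    """Turn raw track metadata into (instruction, response) pairs."""
    records = []
    if "tempo_bpm" in annotation:
        records.append({
            "audio": annotation["audio_path"],
            "instruction": "What is the tempo of this piece?",
            "response": f"The tempo is about {annotation['tempo_bpm']} BPM.",
        })
    if "key" in annotation:
        records.append({
            "audio": annotation["audio_path"],
            "instruction": "What key is this piece in?",
            "response": f"The piece is in {annotation['key']}.",
        })
    return records

# Hypothetical annotation from some open-source dataset:
example = {"audio_path": "track_001.wav", "tempo_bpm": 112, "key": "E minor"}
for rec in to_instruction_records(example):
    print(rec["instruction"], "->", rec["response"])
```

The point is just that heterogeneous per-dataset metadata gets flattened into one question/answer shape a language model can be tuned on.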

Just dropping the term #MusicInformationRetrieval here for more visibility in German #Musikwissenschaft, #Musikforschung, and at #gfmsaar23.

Finally, if you're interested in or curious about how I used this in my research, check out our latest preprint on quantifying the evolution of harmony and innovation in Western classical music!

https://arxiv.org/abs/2308.03224

#scientificresearch #datascience #musicevolution #musicinformationretrieval

Quantifying the evolution of harmony and novelty in western classical music

Music is a complex socio-cultural construct, which fascinates researchers in diverse fields, as well as the general public. Understanding the historical development of music may help us understand perception and cognition, while also yielding insight into the processes of cultural transmission, creativity, and innovation. Here, we present a study of musical features related to harmony, and we document how they evolved over 400 years in western classical music. We developed a variant of the center of effect algorithm to find the most likely key for a given set of notes, to represent a musical piece as a sequence of local keys computed measure by measure. We develop measures to quantify key uncertainty, and diversity and novelty in key transitions. We provide specific examples to demonstrate the features represented by these concepts, and we argue how they are related to harmonic complexity and can be used to study the evolution of harmony. We confirm several observations and trends previously reported by musicologists and scientists, with some discrepancies during the Classical period. We report a decline in innovation in harmonic transitions in the early Classical period followed by a steep increase in the late Classical; and we give an explanation for this finding that is consistent with accounts by music theorists. Finally, we discuss the limitations of this approach for cross-cultural studies and the need for more expressive but still tractable representations of musical scores, as well as a large and reliable musical corpus, for future study.

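
As a rough illustration of what "diversity in key transitions" can mean once a piece is represented as a sequence of local keys: one generic option is the Shannon entropy of the observed transition distribution. This is a toy measure of my own for illustration, not the measure defined in the paper:

```python
# Toy sketch: score the diversity of key transitions in a piece
# represented as a sequence of local keys (one per measure), using
# Shannon entropy of the observed consecutive-key-pair distribution.
# A generic illustration, not the paper's actual measure.
from collections import Counter
from math import log2

def transition_diversity(keys: list) -> float:
    """Entropy (bits) of the distribution of consecutive key pairs."""
    pairs = list(zip(keys, keys[1:]))
    if not pairs:
        return 0.0
    counts = Counter(pairs)
    total = len(pairs)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A piece that never leaves its key has zero transition diversity...
print(transition_diversity(["C", "C", "C", "C"]))   # 0.0
# ...while four distinct modulations give log2(4) = 2 bits.
print(transition_diversity(["C", "G", "e", "C", "F"]))  # 2.0
```

A measure like this says nothing about *which* keys are involved, only how varied the movement between them is; the paper's notions of uncertainty and novelty are richer than this sketch.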

In what key is 'Hey Joe' by Jimi Hendrix?

Inspired by Adam Neely's video, I made a post with an implementation of the Center of Effect algorithm to find an answer in a more quantitative way.

https://spiralizing.github.io/DSEntries/CenterOfEffect/

#musicinformationretrieval #music #datascience #julia

What Key is 'Hey Joe' in?

Hey #Musikwissenschaft bubble, is anyone here working on #MusicInformationRetrieval, #MusInterpretationsforschung, or #DigitaleMusikwissenschaft? Always happy to nerd out about tools & co. :)