It’s Friday and, in an attempt to remember what it was like when I did music cognition research, here is possibly the article I am proudest of. It was a huge challenge to work on, but I think it was a pretty good stab at better understanding a complex human behaviour AND testing some algorithmic approaches to solving a musical problem. Hank and I nearly came to blows because of our radically different backgrounds and personality types, but all was well in the end. Fortunately Peter was good at keeping us on track ;-)

https://link.springer.com/article/10.3758/BF03200827#preview

(no paywall)

#music #InformationProcessing #PatternMatching #algorithms #modeling #CognitiveScience #MusicTechnology #memories

Data processing in music performance research: Using structural information to improve score-performance matching - Behavior Research Methods

In order to study aspects of music performance, one has to find correspondences between the performance data and a score. Locating the corresponding score note for every performance note, called matching, is therefore a common task. An algorithm that automates this procedure is called a matcher. Automated matching is difficult because performers make errors, performers use expressive timing, and scores are frequently underspecified. To find the best match, most matchers use information about pitch, temporal order, and the number of matched notes. We show that adding information about the musical structure of the score gives better results. However, we found that even this information was insufficient to identify some types of performance errors and that a definition of best match based only on the number of matched notes is sometimes problematic. We provide some suggestions about how to achieve greater improvements.
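To make the abstract concrete: a minimal toy sketch of what a matcher does, under my own assumptions and NOT the paper's algorithm. It greedily aligns performance notes to score notes by pitch within a small look-ahead window to tolerate wrong notes, using the naive "number of matched notes" criterion the abstract says is sometimes problematic. All names here (`greedy_match`, the `window` parameter) are hypothetical.

```python
# Toy score-performance matcher (illustrative only, not the published method).
# Notes are (onset_time_seconds, midi_pitch) tuples, sorted by onset.

def greedy_match(score, performance, window=3):
    """Greedily pair each performance note with the next score note of the
    same pitch, looking at most `window` score notes ahead so a wrong note
    or ornament doesn't derail the whole alignment.
    Returns a list of (performance_index, score_index) pairs."""
    matches = []
    s = 0  # index of the next unmatched score note
    for p, (_, p_pitch) in enumerate(performance):
        for offset in range(window):
            i = s + offset
            if i < len(score) and score[i][1] == p_pitch:
                matches.append((p, i))
                s = i + 1
                break
        # If no pitch match in the window, treat the note as a
        # performance error and leave it unmatched.
    return matches

score = [(0.0, 60), (0.5, 62), (1.0, 64), (1.5, 65)]
perf  = [(0.02, 60), (0.48, 61), (1.03, 64), (1.49, 65)]  # wrong note at 0.48
print(greedy_match(score, perf))  # → [(0, 0), (2, 2), (3, 3)]
```

The expressive-timing offsets (0.02, 0.48, …) are ignored here entirely, which is exactly the kind of shortcut the paper improves on by bringing in musical structure.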


@wlukewindsor I now wonder how much the underlying technology has changed since that was written.

Also, I have a "a music psychologist, a software engineer and an AI expert walk into a pub" joke starting in my head for the author team.

@NatalyaD haha

Some things have changed but some have stayed the same! The software platform we all used (developed by Peter and his colleague Henkjan Honing) is pretty much gone now as a thing, along with Macintosh Common Lisp…

@wlukewindsor Have you had genAI foisted on you for this sort of thing yet?

Software really has changed since 2000. Except for university back end registry and HR systems, they're still in 1993.

@NatalyaD The problem with LLM approaches is they tend to be brute force and non-transparent - they don’t answer the kinds of questions we were interested in; you need other AI techniques for that (we had a paper on that too, which didn’t address LLMs in particular as they weren’t a thing, but did address similar approaches)… You can use corpus techniques to get insights using much more sophisticated methods than dumb LLMs, but I am not an expert in that stuff. In the end it’s advanced statistics with a big dataset…

LLMs and similar are good for sentiment analysis of qualitative data and for comparing datasets though.

@NatalyaD in other words you could train an LLM style neural net model to do the kind of matching we did but you wouldn’t know how it had done it ;-)
@NatalyaD And I’m sure someone has done it - I’m very behind on the state of the art, to my shame.

@wlukewindsor Yeah, that's the thing the techbros don't say... it's what gen-AI is weakest at, you end up with unknown unknowns, and of course there's the questionable accuracy and the hallucinations.

I looked at your uni webpage and just your list of expertise was longer than my memory stack! So no surprise you can't keep track of it all, and have A Life TM. You have important cats/children/families to enjoy.