"Lyrics Transcription for Humans: A Readability-Aware Benchmark", accepted to #ISMIR2024, is now online:
https://arxiv.org/abs/2408.06370
We evaluated more models (Whisper v3, OWSM v3.1, AudioShake v3) on our benchmark and included plots detailing what kinds of errors different models make on lyrics transcription.

Writing down lyrics for human consumption involves not only accurately capturing word sequences, but also incorporating punctuation and formatting for clarity and contextual information. This includes song structure, emotional emphasis, and contrast between lead and background vocals. While automatic lyrics transcription (ALT) systems have advanced beyond producing unstructured strings of words and are able to draw on wider context, ALT benchmarks have not kept pace and continue to focus exclusively on words. To address this gap, we introduce Jam-ALT, a comprehensive lyrics transcription benchmark. The benchmark features a complete revision of the JamendoLyrics dataset, in adherence to industry standards for lyrics transcription and formatting, along with evaluation metrics designed to capture and assess lyric-specific nuances, laying the foundation for improving the readability of lyrics. We apply the benchmark to recent transcription systems and present additional error analysis, as well as an experimental comparison with a classical music dataset.

For comparison, we also evaluated on the original JamendoLyrics dataset, showing that our revision of the reference transcripts consistently improves results across models. On average, word errors were reduced by 5.3% overall and by 17.4% on Spanish.
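For readers unfamiliar with how such reduction figures are typically computed, here is a minimal sketch: plain word-level edit distance divided by reference length, plus a relative-reduction calculation. The benchmark's actual metrics are more elaborate and lyric-aware (punctuation, casing, line breaks); the numbers in the example are made up for illustration, not taken from the paper.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Single-row dynamic-programming table for Levenshtein distance over words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i  # prev holds the previous row's diagonal cell
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,          # deletion
                                   d[j - 1] + 1,      # insertion
                                   prev + (r != h))   # substitution (free on match)
    return d[-1] / len(ref)

# One substitution out of three reference words -> WER of 1/3.
print(wer("a b c", "a x c"))

# Relative reduction between two WERs (illustrative numbers, not from the paper):
old_wer, new_wer = 0.20, 0.19
print((old_wer - new_wer) / old_wer)  # ~0.05, i.e. a 5% relative reduction
```

In practice one would use an established scoring library rather than hand-rolled edit distance, but the arithmetic behind "word errors reduced by X%" is just this relative-reduction formula.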
To test our metrics on something other than Western popular music, we also evaluated on the Schubert Winterreise Dataset (SWD). All models make far more errors here (the 19th-century German spelling in the references is a likely cause), but their relative ranking stays similar.