Ondřej Cífka

31 Followers
42 Following
12 Posts
#ML/#AI researcher with interests in #NLProc, #music and #audio processing, #Transformers and #generative models. Research Scientist at AudioShake.
Website: https://ondrej.cifka.com
GitHub: https://cifkao.github.io
Twitter: https://twitter.com/cifkao

🚀 We’re looking for a Master’s student to join our research team @ audioshake.ai for a 6-month internship!

Deep dive into PyTorch, optimize our SOTA audio models, and help make ML sound better (and faster) 🎶

Based in Paris or remote 🇫🇷 → https://audioshake.notion.site/Internship-ML-Optimization-205b133ddefe8025a9f2de74d30d4d38 #AudioML #Internship

Internship: ML Optimization | Notion

Location: Paris preferred (remote within France/EU possible)

To test our metrics on something other than Western popular music, we also evaluated on the Schubert Winterreise Dataset (SWD). All models make many more errors here (the 19th-century German spelling in the references is a likely cause), but their relative ranking stays similar.
For comparison, we also evaluated on the original JamendoLyrics dataset, showing that our revisions of the reference transcripts consistently improved the results across models. Word errors were reduced by 5.3% on average, and by 17.4% on Spanish.
"Lyrics Transcription for Humans: A Readability-Aware Benchmark", accepted to #ISMIR2024, is now online:
https://arxiv.org/abs/2408.06370
We evaluated more models (Whisper v3, OWSM v3.1, AudioShake v3) on our benchmark and included plots detailing what kinds of errors different models make on lyrics transcription.
Lyrics Transcription for Humans: A Readability-Aware Benchmark

Writing down lyrics for human consumption involves not only accurately capturing word sequences, but also incorporating punctuation and formatting for clarity and to convey contextual information. This includes song structure, emotional emphasis, and contrast between lead and background vocals. While automatic lyrics transcription (ALT) systems have advanced beyond producing unstructured strings of words and are able to draw on wider context, ALT benchmarks have not kept pace and continue to focus exclusively on words. To address this gap, we introduce Jam-ALT, a comprehensive lyrics transcription benchmark. The benchmark features a complete revision of the JamendoLyrics dataset, in adherence to industry standards for lyrics transcription and formatting, along with evaluation metrics designed to capture and assess the lyric-specific nuances, laying the foundation for improving the readability of lyrics. We apply the benchmark to recent transcription systems and present additional error analysis, as well as an experimental comparison with a classical music dataset.

Looking forward to presenting our lyrics transcription benchmark at #ISMIR2024 in San Francisco! Our paper grew from last year's LBD to a full paper with more results, which has now been accepted, so stay tuned!
https://audioshake.github.io/jam-alt/
Jam-ALT: A Formatting-Aware Lyrics Transcription Benchmark
Had a lot of fun with deep learning for geospatial data and also released this library along the way:
gps2var: Fast reading of geospatial variables by GPS coordinates
https://github.com/cifkao/gps2var
GitHub - cifkao/gps2var: 🌍 Fast reading of geospatial variables by GPS coordinates 📍

So excited to finally share this really interesting project I worked on last year! 🐾 We trained a Transformer on animal trajectories (world-wide GPS location data for >50 species 🦆🦅🦓🦬🐢) and studied how movement history and environmental variables affect predictions.

https://doi.org/10.1101/2023.03.05.531080
https://github.com/cifkao/moveformer

Specifically, to compute the output distributions for all positions in a text of length N and all context lengths up to a maximum length C, we just need to run inference along a sliding window of length C, i.e., run N forward passes on segments of length ≤ C. (see the illustration in my previous post)

Notice that this is a lot like generating a new sequence from the model (the naïve way)! 🧵4/4
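The sliding-window procedure above can be sketched in plain NumPy. This is a minimal illustration under my own assumptions, not the paper's implementation: `toy_causal_lm` is a hypothetical stand-in for a real causal LM forward pass, returning one next-token log-probability distribution per input position.

```python
import numpy as np

VOCAB_SIZE = 16

def toy_causal_lm(segment):
    """Hypothetical stand-in for a causal LM forward pass: returns one
    next-token log-probability distribution per input position."""
    rng = np.random.default_rng(sum(segment))  # deterministic toy logits
    logits = rng.normal(size=(len(segment), VOCAB_SIZE))
    # Log-softmax over the vocabulary axis.
    return logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))

def sliding_window_logprobs(tokens, max_ctx):
    """Collect log p(tokens[i] | tokens[i - c : i]) for every position i
    and every context length c <= max_ctx, using one forward pass per
    window start on segments of length <= max_ctx."""
    n = len(tokens)
    out = np.full((n, max_ctx), np.nan)  # out[i, c - 1]; NaN where c > i
    for start in range(n - 1):
        segment = tokens[start:start + max_ctx]
        logprobs = toy_causal_lm(segment)  # shape (len(segment), VOCAB_SIZE)
        # The output at offset m predicts the token at absolute position
        # start + m + 1, conditioned on a context of length m + 1.
        for m in range(len(segment)):
            pos = start + m + 1
            if pos < n:
                out[pos, m] = logprobs[m, tokens[pos]]
    return out

tokens = [3, 1, 4, 1, 5, 9, 2, 6]
table = sliding_window_logprobs(tokens, max_ctx=4)
print(table.shape)  # (8, 4)
```

Each row of the resulting table shows how the model's log-probability of one token evolves as the context grows, which is the kind of per-position, per-context-length data the metrics in this thread are computed from.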

The technique works with any causal LM, as long as it was trained to accept arbitrary text fragments (not necessarily starting at a sentence or document boundary), which happens to be how large #GPT-like models (#GPT2, #GPT3, #GPTJ, ...) are usually trained.

The main trick is in realizing that the necessary probabilities can be computed efficiently by running the model along a sliding window. 🧵3/4

In this plot, we use an example to show how two different metrics (LM loss and a metric based on KL divergence) change as the context length increases (from right to left). Some context tokens cause abrupt changes; we suggest interpreting these as tokens that bring important information not already covered by shorter contexts. 🧵2/4
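For concreteness, here is one way such a KL-based signal could be computed from per-context-length predictions. This is a hedged sketch, not necessarily the paper's exact metric (both the function names and the direction of the divergence are my assumptions): for one text position, it measures how much the next-token distribution shifts each time one more context token is added, so a large jump flags an informative context token.

```python
import numpy as np

def kl_divergence(logp, logq):
    """KL(p || q) for two next-token log-probability distributions."""
    p = np.exp(logp)
    return float(np.sum(p * (logp - logq)))

def context_jumps(dists_by_ctx):
    """Given log-prob distributions over the vocabulary for context
    lengths 1..C (a list of 1-D arrays), return the KL divergence
    between each pair of consecutive context lengths. An abrupt jump
    suggests the newly added context token carries information that the
    shorter context lacked."""
    return [kl_divergence(longer, shorter)
            for shorter, longer in zip(dists_by_ctx, dists_by_ctx[1:])]

# Toy check: identical distributions yield zero divergence.
uniform = np.full(8, -np.log(8.0))
print(context_jumps([uniform, uniform]))  # [0.0]
```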