Deezer Research

125 Followers
18 Following
35 Posts
Researchers and engineers from Deezer. Sharing news about recent studies, papers and conference attendances.
Website: https://research.deezer.com
Publications | Deezer Research

From Sunday on, the annual #ISMIR2025 conference is taking place in Daejeon. #Deezer has been a proud sponsor of the event since 2018, and this year again there will be many opportunities to meet and talk with Deezer researchers: Benjamin Martin, Gabriel Meseguer Brocal, Kamil Akesbi and Yuexuan Kong.

Things kick off with the Tutorial session T2, Sunday 21/09 09:00, with the "Self-supervised Learning for Music - An Overview and New Horizons" course by Julien Guinot, Alain Riou, Yuexuan Kong, Marco Pasini, Gabriel Meseguer Brocal and Stefan Lattner.

Then on Monday, no less than three papers will be presented:
* "AI-Generated Song Detection via Lyrics Transcripts": session 1, Monday 22/09 09:00, by Markus Frohmann, Elena E., Gabriel Meseguer Brocal, Markus Schedl and Romain H.
* "PeakNetFP: Peak-based Neural Audio Fingerprinting Robust to Extreme Time Stretching": session 2, Monday 22/09 14:30, by Guillem C., Benjamin Martin, Emilio Molina Martínez, Xavier Serra and Romain H.
* "Emergent musical properties of a transformer under contrastive self-supervised learning": session 2, Monday 22/09 14:30, by Yuexuan Kong, Gabriel Meseguer Brocal, Vincent Lostanlen, Mathieu Lagrange and Romain H.

And don't miss the musical session, curated by our own Gabriel Meseguer Brocal and our good friend Harin Lee.

Our fourth paper, "A Fourier Explanation of AI-music Artifacts" by Darius A., Gabriel Meseguer Brocal, Kamil Akesbi and Romain H., will be presented in session 7, Thursday 25/09 09:00. Later in the day, you'll find our LBD poster "STOMP! Self-supervised beat induction by matching pulses" by Yuexuan Kong, Vincent Lostanlen and Gabriel Meseguer Brocal.

As always, you'll find the papers and links to code and data on our website: https://lnkd.in/eBk3FJeJ


With ACM RecSys25 right around the corner, we look forward to reconnecting with friends and old colleagues in Prague! This year, several Deezer researchers will be presenting their latest work:

Tuesday – Paper Session 1
Bruno Sguerra will kick things off with "Biases in LLM-Generated Musical Taste Profiles for Recommendation", a study of fairness issues that arise when LLMs are used to generate user profiles. This work was carried out in collaboration with Elena E., Harin Lee, and Manuel M.

Tuesday – Posters Session
1. "Beyond the Past: Leveraging Audio and Human Memory for Sequential Music Recommendation", by Viet-Anh Tran, Bruno Sguerra, Gabriel Meseguer Brocal, Léa Briand, and Manuel M. This work explores how to move beyond the limitations of past-only cognitive models by extrapolating music activation from audio similarity.
2. "Just Ask for Music (JAM): Multimodal and Personalized Natural Language Music Recommendation", by Elena E., our friend Alessandro B. Melchiorre and his colleagues from JKU - Institute of Computational Perception. This work uses user preferences to translate query projections for more personalized music recommendation.

Wednesday – Industry Symposium
Léa Briand will join the session "The Perils of Production: Building Robust Learning Systems in the Wild."

Friday – EARL Workshop
Clémence V. will present a poster on "Text2Playlist: Generating Personalized Playlists from Text on a Music Streaming Platform", a framework for query-specific, personalized playlist generation deployed at Deezer! Work with Mathieu Delcluze and colleagues.

Hope to catch up in Prague! ✨

New paper acceptance announcement. This time in #NLP and for the prestigious #NAACL25 conference. https://www.linkedin.com/feed/update/urn:li:activity:7293575268433055744

Happy to announce that our paper "Evaluating LLMs for Quotation Attribution in Literary Texts: A Case Study of LLaMa3" by Gaspard Michel, Elena E., Romain H. and Christophe Cerisara has been accepted for publication at the #NAACL25 Conference, to be held in Albuquerque in April. The preprint is available on arXiv: https://lnkd.in/eHXt2HxN as well as code to reproduce all experiments: https://lnkd.in/ePbh_F6B #nlp

Deezer Research on LinkedIn: #ismir2024 #wimir

Get ready for one of the year’s biggest events in music science! Starting Sunday, November 10th, San Francisco will host @ismir #ISMIR2024, the annual…

Deezer Research on LinkedIn: #deezer #recsys2024 #bari

Next week, #Deezer research team members will be attending #RecSys2024 in #Bari with many exciting presentations and talks planned

Deezer Research on LinkedIn: Transformers Meet ACT-R: Repeat-Aware and Sequential Listening Session…

Greetings, This year, we have two papers accepted for the ACM RecSys 2024 Conference https://lnkd.in/gwiqZs2e The first one, "Transformers Meet ACT-R:…

Second one:

STONE: Self-supervised Tonality Estimator by Yuexuan Kong, Vincent Lostanlen, Gabriel Meseguer Brocal, Stella Wong, Mathieu Lagrange and Romain Hennequin https://arxiv.org/abs/2407.07408 STONE is the first self-supervised tonality estimator, using the ChromaNet architecture.

STONE: Self-supervised Tonality Estimator

Although deep neural networks can estimate the key of a musical piece, their supervision incurs a massive annotation effort. Against this shortcoming, we present STONE, the first self-supervised tonality estimator. The architecture behind STONE, named ChromaNet, is a convnet with octave equivalence which outputs a key signature profile (KSP) of 12 structured logits. First, we train ChromaNet to regress artificial pitch transpositions between any two unlabeled musical excerpts from the same audio track, as measured by cross-power spectral density (CPSD) within the circle of fifths (CoF). We observe that this self-supervised pretext task leads KSP to correlate with tonal key signature. Based on this observation, we extend STONE to output a structured KSP of 24 logits, and introduce supervision so as to disambiguate major versus minor keys sharing the same key signature. Applying different amounts of supervision yields semi-supervised and fully supervised tonality estimators: i.e., Semi-TONEs and Sup-TONEs. We evaluate these estimators on FMAK, a new dataset of 5489 real-world musical recordings with expert annotation of 24 major and minor keys. We find that Semi-TONE matches the classification accuracy of Sup-TONE with reduced supervision and outperforms it with equal supervision.
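The pretext task above rests on two simple facts: a pitch transposition acts on 12-bin chroma as a circular shift, and the circle of fifths is just a reindexing of those bins. Here is a toy numpy sketch of that geometry (illustrative only, not the paper's code; the function names are ours):

```python
import numpy as np

def transpose_chroma(chroma, k):
    """Simulate a pitch transposition of k semitones on a 12-bin chroma vector."""
    return np.roll(chroma, k)

def to_circle_of_fifths(chroma):
    """Reorder chroma bins (C, C#, ..., B) along the circle of fifths,
    since a perfect fifth spans 7 semitones."""
    return chroma[(7 * np.arange(12)) % 12]

chroma = np.zeros(12)
chroma[0] = 1.0  # energy concentrated on pitch class C

# A transposition by k semitones becomes a rotation by (7 * k) % 12 steps
# on the CoF axis, because n -> 7n mod 12 is a bijection on Z/12Z.
k = 2
shifted = to_circle_of_fifths(transpose_chroma(chroma, k))
```

This rotation equivariance is what lets the cross-power spectral density on the CoF measure transpositions between excerpts without any key labels.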


Paper news! Two submissions by #deezer team members have been accepted at the ISMIR Conference:

From Real to Cloned Singer Identification by Dorian Desblancs, Gabriel Meseguer Brocal, Romain Hennequin and Manuel Moussallam: https://arxiv.org/abs/2407.08647 where we investigate ways to identify artists' cloned voices in mixtures.

From Real to Cloned Singer Identification

Cloned voices of popular singers sound increasingly realistic and have gained popularity over the past few years. They however pose a threat to the industry due to personality rights concerns. As such, methods to identify the original singer in synthetic voices are needed. In this paper, we investigate how singer identification methods could be used for such a task. We present three embedding models that are trained using a singer-level contrastive learning scheme, where positive pairs consist of segments with vocals from the same singers. These segments can be mixtures for the first model, vocals for the second, and both for the third. We demonstrate that all three models are highly capable of identifying real singers. However, their performance deteriorates when classifying cloned versions of singers in our evaluation set. This is especially true for models that use mixtures as an input. These findings highlight the need to understand the biases that exist within singer identification systems, and how they can influence the identification of voice deepfakes in music.
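The singer-level contrastive scheme described above can be sketched as an InfoNCE-style objective where segments sharing a singer are pulled together and all other segments act as negatives. A minimal numpy sketch, assuming normalized embeddings and integer singer ids (our own simplification, not the paper's implementation):

```python
import numpy as np

def singer_contrastive_loss(embeddings, singer_ids, temperature=0.1):
    """InfoNCE-style loss where positive pairs share a singer id."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    # log-softmax over each row (each anchor's similarities to all others)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = (singer_ids[:, None] == singer_ids[None, :]) & ~np.eye(len(z), dtype=bool)
    # average negative log-probability over each anchor's positives
    return -np.mean([log_prob[i, same[i]].mean()
                     for i in range(len(z)) if same[i].any()])
```

With embeddings clustered by singer the loss is near zero, while mislabeled pairs drive it up, which is exactly the signal that trains the three models (on mixtures, vocals, or both).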

Our Head of Research Romain Hennequin will be at @MusicTechBE to talk about #AImusic and the latest work from the team on the topic. https://www.linkedin.com/posts/deezer_deezer-livethemusic-wallifornia-activity-7214977611179900929-8-RB?utm_source=share&utm_medium=member_desktop
Deezer on LinkedIn: #deezer #livethemusic #wallifornia

Deezer is heading to Wallifornia to share our expertise and discuss the latest developments in music & tech. Romain H., Head of Research at Deezer, will be…

The deadline to submit your research to the NLP4MusA #ISMIR2024 companion workshop has been extended to July 12th: https://sites.google.com/view/nlp4musa-2024/home . Both academic and industry submissions at the crossroads of #NLP and #MIR are welcome.
NLP4MusA 2024

Program tba!