Alba Márquez-Rodríguez

@GrunCrow
17 Followers
14 Following
18 Posts
Computer Scientist or so I try.
AI for Ecology and Conservation.
Pronouns: She/They
Webpage: https://gruncrow.github.io/
Today is the 2026 International Day of Women and Girls in Science. Let me introduce you to some of the colleagues that I work with :) #womeninscience

🆕 New paper out in Engineering Applications of Artificial Intelligence!

We propose iterative deep learning for detecting cetacean whistles in one of the noisiest marine regions: the Strait of Gibraltar 🐋🔊

While baseline models collapsed under noise, we reached an F1 of 0.88.

We combine:
- Transfer learning from bioacoustic models (BirdNET, Perch)
- Iterative, model-assisted annotation with expert validation
- Confidence-threshold calibration
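The confidence-threshold calibration step can be sketched as a simple sweep: try candidate thresholds on a validation set and keep the one that maximizes F1. This is a minimal toy illustration in plain Python; the scores, labels, and candidate grid below are invented, not the paper's data or exact procedure.

```python
# Toy sketch of confidence-threshold calibration: sweep candidate thresholds
# over model confidence scores and keep the one that maximizes validation F1.
# All data below is invented for illustration.

def f1_at_threshold(scores, labels, threshold):
    """Compute F1 when detections with score >= threshold count as positive."""
    preds = [s >= threshold for s in scores]
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def calibrate_threshold(scores, labels, candidates):
    """Return the candidate threshold with the highest validation F1."""
    return max(candidates, key=lambda t: f1_at_threshold(scores, labels, t))

# Toy validation set: confidence per segment, and whether a whistle is truly present.
scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [True, True, True, False, True, False, False, False]
best = calibrate_threshold(scores, labels, [i / 10 for i in range(1, 10)])
```

Picking the threshold on held-out data rather than using a model's default is what makes the reported F1 meaningful under heavy noise.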

🔗 https://www.sciencedirect.com/science/article/pii/S0952197626000370

Since I finally switched to another instance, here is my #introduction for the new beginning.

I am an acoustical engineer and field recordist. My main interests now are #bioacoustic, #ecoacoustic, and #fieldrecording.

I joined Mastodon in 2023.

I have been really happy with the community and resources I have found here over time.

One of the best things was probably learning about #faircamp and creating my own site. (thanks to @freebliss )

(1/2)

A Bird Song Detector for improving bird identification through Deep Learning: a case study from Doñana

Passive Acoustic Monitoring is a key tool for biodiversity conservation, but the large volumes of unsupervised audio it generates present major challenges for extracting meaningful information. Deep Learning offers promising solutions. BirdNET, a widely used bird identification model, has shown success in many study systems but is limited at local scale due to biases in its training data, which focus on specific locations and target sounds rather than entire soundscapes. A key challenge in bird species identification is that many recordings either lack target species or contain overlapping vocalizations, complicating automatic identification. To address these problems, we developed a multi-stage pipeline for automatic bird vocalization identification in Doñana National Park (SW Spain), a wetland of high conservation concern. We deployed AudioMoth recorders in three main habitats across nine locations and manually annotated 461 minutes of audio, resulting in 3749 labeled segments spanning 34 classes. We first applied a Bird Song Detector to isolate bird vocalizations using spectrogram-based image processing. Then, species were classified using custom models trained at the local scale. Applying the Bird Song Detector before classification improved species identification, as all models performed better when analyzing only the segments where birds were detected. Specifically, the combination of detector and fine-tuned BirdNET outperformed the baseline without detection. This approach demonstrates the effectiveness of integrating a Bird Song Detector with local classification models. These findings highlight the need to adapt general-purpose tools to specific ecological challenges. Automatically detecting bird species helps track the health of this threatened ecosystem, given birds' sensitivity to environmental change, and supports conservation planning to reduce biodiversity loss.

arXiv.org

We also compared BirdNET as a detector against our Bird Song Detector: ours detected more birds with fewer false positives.

🔻 FNs dropped 67%
🔻 FPs stayed under 5%
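For readers unfamiliar with these two figures, here is how such numbers are typically computed from detector error counts. The counts below are invented for illustration only; they are not the paper's data.

```python
# Toy illustration of the two reported comparison metrics (counts invented):
# - FN reduction: relative drop in false negatives vs. the baseline detector
# - FP rate: false positives as a share of all detections emitted

def fn_reduction(fn_baseline, fn_ours):
    """Relative drop in false negatives compared with the baseline detector."""
    return (fn_baseline - fn_ours) / fn_baseline

def fp_rate(fp, total_detections):
    """Share of emitted detections that are false positives."""
    return fp / total_detections

# Invented counts: baseline misses 300 vocalizations, ours misses 99;
# ours emits 100 detections, 4 of which are wrong.
reduction = fn_reduction(300, 99)   # relative FN drop
rate = fp_rate(4, 100)              # FP share of detections
```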

General models like BirdNET are powerful, but local adaptation is key; sometimes even fine-tuning or custom classifiers are not enough on their own.
By separating detection and classification, we boost precision and reduce noise in PAM workflows.

#BirdNET #EcoAI #Bioacoustics

The main innovation is that we trained a YOLOv8-based Bird Song Detector to filter only segments with bird vocalizations before running species classifiers.
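The detect-then-classify idea can be sketched in a few lines: the classifier only ever sees segments the detector has flagged. The detector and classifier below are toy stand-ins (simple callables on dicts), not the paper's YOLOv8 or BirdNET models; names and fields are hypothetical.

```python
# Hedged sketch of a two-stage pipeline: stage 1 filters segments with a
# detector, stage 2 classifies only the surviving segments. Toy stand-ins
# replace the real YOLOv8 detector and BirdNET classifier.

def two_stage_pipeline(segments, detect, classify):
    """Classify only the segments the detector flags as containing birds."""
    results = {}
    for seg_id, segment in segments.items():
        if detect(segment):                       # stage 1: Bird Song Detector
            results[seg_id] = classify(segment)   # stage 2: species classifier
        # segments with no detected bird sound are skipped entirely,
        # so the classifier never sees noise-only audio
    return results

# Hypothetical stand-ins: a segment is a dict with 'energy' and 'species' fields.
detect = lambda seg: seg["energy"] > 0.5
classify = lambda seg: seg["species"]

segments = {
    "a": {"energy": 0.9, "species": "Ardea purpurea"},
    "b": {"energy": 0.1, "species": None},  # noise-only segment
}
filtered = two_stage_pipeline(segments, detect, classify)
```

The design point is separation of concerns: the detector absorbs the noise problem, so the classifier is evaluated only where a bird is plausibly present.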

The result: for all classifiers, metrics improve when the Bird Song Detector is applied first:

#YOLOv8 #BirdSongDetector #BirdDetector

Data details:
🎙️ 461 minutes manually annotated
📄 3,749 annotated vocalizations (Frequency + Time of vocalization)
🐦 34 classes (29 species-specific)

It took 224 hours to annotate this data. For our Bird Song Detector we only used temporal annotations.

💾 Dataset is available: https://huggingface.co/datasets/GrunCrow/BIRDeep_AudioAnnotations

🌍 Doñana is a biodiversity hotspot and a key stopover for millions of migratory birds. Monitoring this ecosystem is critical — but analyzing thousands of hours of audio isn’t easy. That’s where the intersection of Ecology and AI happens.

While BirdNET works well globally, it struggles with local soundscapes, especially in noisy, unfocused recordings — leading to many false positives.

💡We propose a pipeline that starts with a Bird Song Detector to isolate vocalizations before classification.

We just updated our preprint "A Bird Song Detector for improving bird identification through Deep Learning: a case study from Doñana" to its final version 🐦🎶. It has just been accepted in Ecological Informatics!

📍 Doñana National Park, Spain
📊 Passive Acoustic Monitoring
🤖 YOLOv8 + BirdNET

📄 Read it here 👉 https://arxiv.org/abs/2503.15576

#PAM #Ecoacoustics #DeepLearning #AIforEcology

I'm so surprised: for years my GitHub barely got any attention, and suddenly in the past few months some of my old repositories from my Bachelor's and Master's have started getting stars and forks. And now the repo from my latest work is also picking up!