Having a great time at #NLDL2024! Some really interesting talks on topics ranging from image segmentation (uncertainty quantification) to new types of image-processing layers (spiking neural networks), and more!

I'm learning a lot - so glad I came! So many diverse perspectives and experiences have come together at this conference. A full write-up will follow afterwards, as I don't have time to sit down and write a blog post right now.

#conference #ai #ml #NLDL

NLDL 2024: My rainfall radar paper is out!

Stardust | Starbeamrainbowlabs' Blog

New blog post: I've submitted a paper on my rainfall radar research to #NLDL 2024!

Title: Towards AI for approximating hydrodynamic simulations as a 2D segmentation task

https://starbeamrainbowlabs.com/blog/article.php?article=posts%2F532-nldl-submission.html

#paper

I've submitted a paper on my rainfall radar research to NLDL 2024!


I've submitted a paper to #nldl2024! This is my first real #conference submission.

Title: Towards AI for approximating hydrodynamic simulations as a 2D segmentation task

Alternate title: Abusing image segmentation to approximate physics-based simulations /lh

https://www.nldl.org/

#AI #imagesegmentation #hydrodynamic #deeplabv3plus #nldl

NLDL 2024

Deep learning is an emerging subfield in machine learning that has in recent years achieved state-of-the-art performance in image classification, object detection, segmentation, time series prediction and speech recognition, to name a few. This conference will gather researchers both on a national

I've got a 6-page #ai / #deeplearning conference paper I'm putting the finishing touches on, which I plan to submit to #nldl2024!

If accepted, this will be my first full academic conference!

It's about a proof of concept that uses rainfall radar data to predict water depth in 2D. Followers of my PhD update blog post series will remember it as a recurring theme.
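To give a flavour of the core idea, here is a minimal sketch of how a continuous water depth map could be turned into a per-pixel class map, so a segmentation model can predict it. Everything here is invented for illustration (shapes, bin thresholds, the synthetic data) and is not the paper's actual pipeline:

```python
import numpy as np

# Hypothetical stand-in for a simulated water depth map from a
# hydrodynamic model (values in metres; purely synthetic).
height, width = 128, 128
rng = np.random.default_rng(0)
water_depth = rng.gamma(shape=2.0, scale=0.1, size=(height, width))

# Discretise continuous depth into class bins so a 2D segmentation
# model (e.g. DeepLabV3+) can predict a per-pixel class label
# instead of doing pixel-wise regression.
bin_edges = np.array([0.05, 0.25, 0.5, 1.0])  # metres; thresholds are made up
depth_classes = np.digitize(water_depth, bin_edges)

print(depth_classes.shape)  # (128, 128), integer classes in 0..4
```

With the target framed this way, standard segmentation losses and architectures apply directly, which is presumably what makes the "segmentation task" framing attractive.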

Submission deadline is 2023-09-01: https://www.nldl.org/call-for-papers

#nldl

NLDL 2024 - Call for Papers

Important dates:
Submission: September 1st, 2023 (23:59 AoE)
Review period starts: September 25, 2023
Review period ends: October 15, 2023
Rebuttal for authors: October 16 to 23, 2023
Post-rebuttal discussion (final decisions): October 24 to 31, 2023
Author notification: November 6, 2023

#PublicationAlert 📢

#SelfSupervision with ~10k parameters & < 10 min training?

Check out our latest work "#Efficient Self-Supervision using Patch-based Contrastive Learning for #Histopathology #Image #Segmentation", to be presented at the #NorthernLights #DeepLearning Conference this week.

The first author, Nicklas Boserup, currently a first-year MSc student at UCPH, will give an oral presentation at #NLDL this week.

Paper: https://arxiv.org/abs/2208.10779
Code: https://github.com/nickeopti/bach-contrastive-segmentation

Efficient Self-Supervision using Patch-based Contrastive Learning for Histopathology Image Segmentation

Learning discriminative representations of unlabelled data is a challenging task. Contrastive self-supervised learning provides a framework to learn meaningful representations using learned notions of similarity measures from simple pretext tasks. In this work, we propose a simple and efficient framework for self-supervised image segmentation using contrastive learning on image patches, without using explicit pretext tasks or any further labeled fine-tuning. A fully convolutional neural network (FCNN) is trained in a self-supervised manner to discern features in the input images and obtain confidence maps which capture the network's belief about the objects belonging to the same class. Positive and negative patches are sampled based on the average entropy in the confidence maps for contrastive learning. Convergence is assumed when the information separation between positive patches is small, and between positive-negative pairs is large. The proposed model consists only of a simple FCNN with 10.8k parameters and requires about 5 minutes to converge on the high-resolution microscopy datasets, which is orders of magnitude smaller than the relevant self-supervised methods needed to attain similar performance. We evaluate the proposed method on the task of segmenting nuclei from two histopathology datasets, and show comparable performance with relevant self-supervised and supervised methods.
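The entropy-based patch sampling the abstract mentions can be sketched roughly as follows. This is one plausible reading of the idea, not the authors' implementation: the map shapes, patch size, and median-split sampling rule are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical FCNN confidence map (per-pixel belief in [0, 1]).
rng = np.random.default_rng(1)
confidence = rng.uniform(0.01, 0.99, size=(64, 64))

# Per-pixel binary entropy: low where the network is confident,
# high where it is uncertain.
entropy = -(confidence * np.log(confidence)
            + (1 - confidence) * np.log(1 - confidence))

# Average entropy over non-overlapping 8x8 patches.
patch = 8
patch_entropy = entropy.reshape(
    64 // patch, patch, 64 // patch, patch
).mean(axis=(1, 3))

# Split patches by average entropy to feed the contrastive objective
# (the median threshold here is an arbitrary illustrative choice).
threshold = np.median(patch_entropy)
positives = np.argwhere(patch_entropy < threshold)
negatives = np.argwhere(patch_entropy >= threshold)
print(len(positives), len(negatives))  # roughly balanced split of 64 patches
```

Sampling by confidence-map entropy is appealing because it needs no labels at all: the network's own uncertainty decides which patches form the contrastive pairs.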
