Maybe a #HotTake. Increasingly, #PhD advisors seem to delegate their advising and mentoring roles to reviewers in #MachineLearning venues.
QC, please?!
Running #ChatGPT in the Linux terminal using a wrapper, and it works quite well [1].
Ofc, I am still using my existing #OpenAI account, so no bypassing that.
[1] https://www.linuxuprising.com/2023/01/use-chatgpt-from-command-line-with-this.html
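The wrapper in [1] is one way to do it; for reference, calling the #OpenAI chat completions endpoint directly from a terminal script looks roughly like the sketch below. The endpoint URL and payload shape follow the public API; the script itself is my illustration, not the wrapper's code, and assumes an `OPENAI_API_KEY` environment variable.

```python
import json
import os
import sys
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, model="gpt-3.5-turbo"):
    """Build an HTTP request for the OpenAI chat completions endpoint."""
    key = os.environ.get("OPENAI_API_KEY", "")  # assumed to be set in the shell
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {key}",
        },
    )

if __name__ == "__main__":
    # Usage: python chat.py "your prompt here"
    req = build_request(" ".join(sys.argv[1:]) or "Hello!")
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
        print(body["choices"][0]["message"]["content"])
```

Nothing fancy, but it shows there is no magic in these wrappers: one authenticated POST per prompt.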
My #University has a policy of using #Microsoft products, unfortunately! #Teams, for instance, is used extensively for teaching and is the worst product out there.
One hack I have had to invent to use multiple organization logins (without switching accounts) is to run multiple instances. And that's just eww.
#SelfSupervision with ~10k parameters & < 10 min training?
Check out our latest work "#Efficient Self-Supervision using Patch-based Contrastive Learning for #Histopathology #Image #Segmentation", to be presented at the #NorthernLights #DeepLearning Conference this week.
The first author, Nicklas Boserup, currently a first-year MSc student at UCPH, will give an oral presentation at #NLDL this week.
Paper: https://arxiv.org/abs/2208.10779
Code: https://github.com/nickeopti/bach-contrastive-segmentation
Learning discriminative representations of unlabelled data is a challenging task. Contrastive self-supervised learning provides a framework to learn meaningful representations using learned notions of similarity from simple pretext tasks. In this work, we propose a simple and efficient framework for self-supervised image segmentation using contrastive learning on image patches, without explicit pretext tasks or any further labelled fine-tuning. A fully convolutional neural network (FCNN) is trained in a self-supervised manner to discern features in the input images and obtain confidence maps which capture the network's belief about the objects belonging to the same class. Positive and negative patches are sampled based on the average entropy in the confidence maps for contrastive learning. Convergence is assumed when the information separation between positive patches is small and that between positive-negative pairs is large. The proposed model consists only of a simple FCNN with 10.8k parameters and requires about 5 minutes to converge on high-resolution microscopy datasets, which is orders of magnitude smaller than relevant self-supervised methods attaining similar performance. We evaluate the proposed method on the task of segmenting nuclei from two histopathology datasets, and show comparable performance with relevant self-supervised and supervised methods.
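To make the entropy-based sampling step concrete, here is a rough NumPy sketch of how patches might be scored and selected from a confidence map. This is my illustration of the idea described in the abstract, not the authors' implementation (see the linked repo for that); the foreground/background selection heuristic and the pool size are assumptions.

```python
import numpy as np

def average_patch_entropy(conf_map, patch_size):
    """Mean binary entropy of each non-overlapping patch of a confidence map."""
    eps = 1e-8
    h = -(conf_map * np.log(conf_map + eps)
          + (1 - conf_map) * np.log(1 - conf_map + eps))
    H, W = conf_map.shape
    p = patch_size
    # Crop to a multiple of the patch size, then average entropy per patch.
    patches = h[:H - H % p, :W - W % p].reshape(H // p, p, W // p, p)
    return patches.mean(axis=(1, 3))

def sample_patch_indices(conf_map, patch_size, n_pos, n_neg):
    """Pick low-entropy (confident) patches; split the most confident pool
    into foreground positives and background negatives by mean confidence.
    Returns flat indices into the patch grid. Illustrative heuristic only."""
    ent = average_patch_entropy(conf_map, patch_size)
    gh, gw = ent.shape
    p = patch_size
    mean_conf = (conf_map[:gh * p, :gw * p]
                 .reshape(gh, p, gw, p).mean(axis=(1, 3)))
    flat = np.argsort(ent, axis=None)            # most confident patches first
    pool = flat[: (n_pos + n_neg) * 4]           # pool of low-entropy patches
    order = np.argsort(mean_conf.ravel()[pool])
    neg = pool[order[:n_neg]]                    # confidently background
    pos = pool[order[-n_pos:]]                   # confidently foreground
    return pos, neg
```

In the actual method these sampled patches feed a contrastive objective, pulling positive-pair representations together and pushing positive-negative pairs apart; the sketch only covers the sampling side.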