Mert R Sabuncu

AI for medical imaging. Prof at Cornell Tech, New York City.
@msabuncu great post! One extension based on my experience: try to make sure your #1 university is roughly your third interview. Don't start with it, because you'll need to get into the flow and learn the basic logistics of how these interviews run. Universities are much more flexible on dates than you might think (they really want _you_ to come interview!). After one or two interviews you're in great shape, and not yet exhausted, so that's a good time to nail your dream university's interview!

Stages of getting into a new research thing

1. No one has thought of this
2. Oh, some people have, but it’s not exactly the same
3. Oh, it’s exactly the same
4. OMG, everyone already knows about this, but they all call it something different

Happy holidays 🤣 https://youtu.be/Rh1Z1jhk6nY
@feixia great question. First and foremost, I recommend applying broadly and independently. I also suggest not bringing this up during interviews and staying hyper-focused on getting an offer. Once you have offer(s), you should immediately raise the partner situation. Most top places have mechanisms to help with partner recruitment. It varies a lot by institution, but if the partner could fit several positions (e.g., in different departments), I'd try to pursue all options.
If you’re on the academic job market, I recommend taking the earliest possible interview date, assuming you can be prepared. Several advantages:
1. Other places will hear about you interviewing and that’s a good thing,
2. Your hosts will be less jaded and more enthusiastic to see you (particularly at those places that do a lot of interviews),
3. You can set the benchmark that other candidates get compared against, and
4. Places that can make parallel offers will often not wait to see other candidates.
Just dropped on arXiv. In this preprint, we present an empirical analysis of a simple Nadaraya-Watson head that can be attached to any neural network architecture. It produces highly competitive, well-calibrated classifications. The inference time is surprisingly not bad. We also present a simple clustering strategy to increase inference efficiency. Check it out: https://arxiv.org/abs/2212.03411
A Simple Nadaraya-Watson Head can offer Explainable and Calibrated Classification

In this paper, we empirically analyze a simple, non-learnable, and nonparametric Nadaraya-Watson (NW) prediction head that can be used with any neural network architecture. In the NW head, the prediction is a weighted average of labels from a support set. The weights are computed from distances between the query feature and support features. This is in contrast to the dominant approach of using a learnable classification head (e.g., a fully-connected layer) on the features, which can be challenging to interpret and can yield poorly calibrated predictions. Our empirical results on an array of computer vision tasks demonstrate that the NW head can yield better calibration than its parametric counterpart, while having comparable accuracy and with minimal computational overhead. To further increase inference-time efficiency, we propose a simple approach that involves a clustering step run on the training set to create a relatively small distilled support set. In addition to using the weights as a means of interpreting model predictions, we further present an easy-to-compute "support influence function," which quantifies the influence of a support element on the prediction for a given query. As we demonstrate in our experiments, the influence function can allow the user to debug a trained model. We believe that the NW head is a flexible, interpretable, and highly useful building block that can be used in a range of applications.

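To make the abstract concrete, here is a minimal, hypothetical PyTorch sketch of the three ideas it describes: the NW head, the clustered (distilled) support set, and a support influence score. All names here (nw_head, distill_support, support_influence, temperature, k_per_class) are illustrative rather than from the paper, the kernel choice (softmax over negative squared Euclidean distances) is an assumption, and the leave-one-out influence below is my reading of the "support influence function" idea; see the preprint for the authors' actual formulation.

```python
import torch
import torch.nn.functional as F

def nw_head(query_feats, support_feats, support_labels, temperature=1.0):
    """Nadaraya-Watson prediction: a distance-weighted average of support labels.
    query_feats: (B, D); support_feats: (N, D); support_labels: (N, C) one-hot.
    Returns (B, C) class probabilities."""
    # Squared Euclidean distance between each query and each support feature.
    dists = torch.cdist(query_feats, support_feats).pow(2)   # (B, N)
    # Softmax over negative distances: closer support items get more weight.
    weights = F.softmax(-dists / temperature, dim=1)         # (B, N)
    # A convex combination of one-hot labels is already a distribution.
    return weights @ support_labels                          # (B, C)

def distill_support(feats, labels, k_per_class=10, iters=10):
    """Toy stand-in for the paper's clustering step: per-class k-means centroids
    form a small distilled support set. Assumes each class has >= k_per_class items."""
    centers_all, classes_all = [], []
    for c in labels.unique():
        x = feats[labels == c]                                # (Nc, D)
        centers = x[torch.randperm(len(x))[:k_per_class]].clone()
        for _ in range(iters):
            assign = torch.cdist(x, centers).argmin(dim=1)    # nearest centroid
            for j in range(k_per_class):
                if (assign == j).any():
                    centers[j] = x[assign == j].mean(dim=0)
        centers_all.append(centers)
        classes_all.append(torch.full((k_per_class,), int(c)))
    support_feats = torch.cat(centers_all)
    support_labels = F.one_hot(torch.cat(classes_all)).float()
    return support_feats, support_labels

def support_influence(query_feat, support_feats, support_labels, true_class,
                      temperature=1.0):
    """Leave-one-out reading of the 'support influence' idea: how much does
    dropping support item j change the query's true-class probability?
    Positive values mean item j helped this prediction."""
    n = len(support_feats)
    full = nw_head(query_feat[None], support_feats, support_labels,
                   temperature)[0, true_class]
    influence = torch.empty(n)
    for j in range(n):
        keep = torch.arange(n) != j
        loo = nw_head(query_feat[None], support_feats[keep],
                      support_labels[keep], temperature)[0, true_class]
        influence[j] = full - loo
    return influence
```

The appeal of this construction is that the output is a convex combination of one-hot labels, so it is a proper probability distribution by design, which is where the calibration and interpretability claims come from.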
Mini-rant: being at a conference right now, I find it a bit disheartening to see (good, solid) work that still addresses research questions that were being solved ~10 years ago. We collectively need to slow down on publication rates; not being able to keep up with related work actively hurts the science. It is such a waste of human talent and work.
I read this elsewhere: if trusted news outlets started offering #Mastodon instances to their staff, that would bring some sort of verification and visibility. Think social.nytimes.com, social.heise.de, etc. I'd love to read more from trusted journalists on the #fediverse, and this could help a great deal. (Again, not my idea, but I couldn't find the original author.) Boosts for visibility would be great. Reminder: favorites don't help with visibility here the way they do on that other platform.
Finally, there is always uncertainty about how things will turn out once you actually get somewhere. Be agile, adaptive, and proactive. Best of luck! Fin.
Once you have concrete offers, I recommend doing a lot of research to make an informed decision. Try visiting these places if you can. Emails to profs and graduate students are more likely to receive a response when they come from a “prospective student who has been offered admission.” 16/