31 Followers
6 Following
16 Posts
Researcher at https://five.ai
Previously a University of Oxford TVG student with Prof. Phil Torr
Intern at Meta
Cofounder https://girlswhoml.com
Sometimes I feel like I'm unearthing a treasure trove of information from the past, then I just realise I'm reading a CVPR paper from 2008

How robust are unsupervised representation learning methods (e.g. SSL) to distribution shift compared to supervised learning?

𝐒𝐡𝐨𝐫𝐭 𝐚𝐧𝐬𝐰𝐞𝐫: Quite!
𝐋𝐨𝐧𝐠 𝐚𝐧𝐬𝐰𝐞𝐫: Our #ICLR2023 paper http://arxiv.org/pdf/2206.08871.pdf

Joint work with Imant Daunhawer & Amartya Sanyal @amartya

Just finalising reviews for CVPR; what are people's thoughts on seeing MNIST in the main paper? Red flag imo
It's almost that time of year when everyone starts mentioning how many papers they're going to read over Christmas, how many are you aiming for? I'm going to be reading a big fat 0

RT @[email protected]

Is having multiple modalities a blessing or a curse? What is a good representation? Let's find out together!
We are proud to announce the 1st hybrid workshop on Multimodal Representation Learning at ICLR2023 🚀

More info: https://mrl-workshop.github.io/iclr-2023/
Organisers: Miguel Vasco, Adrian Javaloy, Imant Daunhawer, Petra Poklukar, Isabel Valera, Danica Kragic, Yuge Shi


First Workshop on Multimodal Representation Learning (ICLR 2023)

What's your favourite seed? 👀
Do you ever come back to code you wrote years ago and think, how did I ever make it this hard to read? Sometimes I think I couldn't make it this obfuscated even if I wanted to
Stable Diffusion 2.0 Release — Stability.Ai

The open source release of Stable Diffusion version 2.


Super pleased to have my paper on Adaptive Temperature Scaling accepted at #AAAI

Want to improve calibration beyond temperature scaling? Then predicting per data-point temperature estimates is the way to go!

https://arxiv.org/abs/2207.06211

Sample-dependent Adaptive Temperature Scaling for Improved Calibration

It is now well known that neural networks can be wrong with high confidence in their predictions, leading to poor calibration. The most common post-hoc remedy is temperature scaling, which adjusts the confidence of every prediction by dividing the logits by a single fixed value. Whilst this typically improves the average calibration across the whole test dataset, it reduces the individual confidences of the predictions irrespective of whether a given input is classified correctly or incorrectly.

With this insight, we base our method on the observation that different samples contribute to the calibration error by varying amounts: some need their confidence increased and others need it decreased. We therefore propose to predict a different temperature value for each input, allowing us to adjust the mismatch between confidence and accuracy at a finer granularity. We also observe improved OOD detection results, and the predicted temperatures yield a notion of hardness for individual data points.

Our method is applied post-hoc to off-the-shelf pre-trained classifiers, using very little computation time and a negligible memory footprint. We test it on the ResNet50 and WideResNet28-10 architectures with the CIFAR10/100 and Tiny-ImageNet datasets, showing that producing per-data-point temperatures also benefits the expected calibration error across the whole test set. Code is available at: https://github.com/thwjoy/adats.
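The interface the abstract describes can be sketched with a toy example: classic fixed-temperature scaling next to a per-sample variant, where a predictor maps each logit vector to its own temperature. The predictor below is purely hypothetical (a margin-based heuristic in numpy) and is only meant to illustrate the shape of the idea; the actual method trains a network for this, see the linked repo.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fixed_temperature_scale(logits, T):
    """Classic post-hoc temperature scaling: one scalar T for every input."""
    return softmax(logits / T)

def adaptive_temperature_scale(logits, predict_T):
    """Per-sample scaling: predict_T maps each logit vector to its own T > 0."""
    T = predict_T(logits)              # shape (N, 1), one temperature per input
    return softmax(logits / T)

def toy_predict_T(logits):
    # Hypothetical stand-in for the learned predictor: soften (larger T) when
    # the top-2 logit margin is small, sharpen (smaller T) when it is large.
    top2 = np.sort(logits, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]
    return (2.0 / (1.0 + margin)).reshape(-1, 1).clip(0.5, 2.0)
```

The point of the per-sample signature is exactly the observation in the abstract: a single global T must move every prediction's confidence the same way, while a per-input T can raise confidence on some samples and lower it on others.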
