Johannes Gasteiger


né Klicpera.

Safety for ML. ML for graphs.
Research Scientist at Google Research. Previously TUM, DeepMind, and FAIR.
Opinions my own.

Having difficulty keeping up with the latest AI safety research?

Great news: My new blog will help with just that!

"AI Safety at the Frontier" covers each month's (subjectively) best papers.

In July '24, I discuss Safetywashing, SAD, AgentDojo and much more: https://open.substack.com/pub/aisafetyfrontier/p/paper-highlights-july-24?r=1v4l78&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true

Paper Highlights, July '24

Safetywashing in benchmarks, SAD, AgentDojo, vision-language model attacks, latent adversarial training, brittle steering vectors, the 2-dimensional truth, legible LLM solutions, and more LLM debate.

This results in very efficient batches and a speedup of up to 130x over the baseline.

These plots show the accuracy vs. speed trade-off for 3 datasets, 3 GNNs, and multiple mini-batching methods when varying their hyperparameters. Note the logarithmic x-axis.
4/8

IBMB uses influence scores to select the most important neighbors, instead of a random set.

It works in 2 steps:
1. Partition the output nodes (for which we want predictions) into batches.
2. For each mini-batch, select the auxiliary nodes that help most with predictions.
2/8
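
The two steps can be sketched roughly like this (a minimal illustration with NumPy; `make_batches` and `select_auxiliary` are hypothetical names, and the real method derives influence scores from the GNN rather than taking an arbitrary matrix):

```python
import numpy as np

def make_batches(output_nodes, batch_size):
    """Step 1: partition the output nodes into mini-batches (here: naive chunks)."""
    return [output_nodes[i:i + batch_size]
            for i in range(0, len(output_nodes), batch_size)]

def select_auxiliary(batch, influence, k):
    """Step 2: pick the k auxiliary nodes with the highest total influence
    on the batch. influence[i, j] = influence of node j on output node i."""
    scores = influence[batch].sum(axis=0)  # aggregate influence on this batch
    scores[batch] = -np.inf                # output nodes are already included
    aux = np.argsort(scores)[::-1][:k]     # top-k most influential neighbors
    return np.concatenate([batch, aux])
```

The point of step 2 is that each batch only loads the auxiliary nodes that actually matter for its outputs, instead of a random or exhaustive neighborhood.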

Luckily, the influence scores simplify to personalized PageRank (PPR) if we make some assumptions.

Step 2 then becomes an application of PPR or topic-sensitive PageRank.

Step 1 is trickier and requires falling back on heuristics like PPR or graph partitioning.
3/8

You can sample nodes for scalable #GNN #training. But how do you do #scalable #inference?

In our latest paper (Oral at #LogConference) we introduce influence-based mini-batching (#IBMB) for both fast inference and training, achieving up to 130x and 17x speedups, respectively!

1/8 in 🧵

I’m thrilled to share that I’m starting a new position as Research Scientist in Bryan Perozzi's team at #Google Research this week!

Looking forward to all the exciting research and opportunities for positive impact!

Almost 180k new users joined #mastodon yesterday, a new record. This third #twitterMigration wave happened after Musk's Twitter 2.0 ultimatum to #Twitter workers. Each wave is stronger than the previous one. Here is my updated plot showing the three consecutive waves.

Watch a 2-layer neural network learn to separate two classes to the left and right.

#MachineLearning #math #MathGIF #CreativeCoding
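
Such a network fits in a few lines of NumPy (a minimal sketch, not the code behind the animation; all names, sizes, and hyperparameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two 1-D classes: points around -2 (label 0, "left") and +2 (label 1, "right")
X = np.concatenate([rng.normal(-2, 0.5, (50, 1)), rng.normal(2, 0.5, (50, 1))])
y = np.concatenate([np.zeros(50), np.ones(50)])

W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)   # hidden layer (8 units)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer
lr = 0.1

for _ in range(500):
    h = np.tanh(X @ W1 + b1)                          # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2))).ravel()      # predicted P(class 1)
    # full-batch gradient descent on binary cross-entropy
    dz2 = (p - y)[:, None] / len(y)
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dh = dz2 @ W2.T * (1 - h**2)                      # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

acc = ((p > 0.5) == y).mean()
```

Since the two clusters are well separated, the decision boundary settles between them within a few hundred steps.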

#introduction

Hi all,

I'm a PhD student in #MachineLearning at the Technical University of Munich #TUM. I'm currently working on machine learning on graphs and machine-learning-driven computational chemistry.
#ml #GraphNeuralNetworks #GNNs #compchem

#introduction

I am a Ph.D. student in #MachineLearning. My research interests cover #uncertainty / #robustness in machine learning, #hierarchical / #causal inference, and #efficient machine learning :)