Mathias Niepert

Professor @ University of Stuttgart and Scientific Advisor (Researcher) @ NEC Labs Europe. Geometric Deep Learning, NLP, and ML for science.

Two-stage pretraining for chemicals:

1. Masked language model
2. Predict chemical properties

@omendezlucio Nicolaou, @bertonearnshaw

https://arxiv.org/abs/2211.0265
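The two-stage recipe above could be sketched roughly as follows. This is a minimal, hypothetical illustration (not the paper's code), assuming a character-level SMILES tokenizer; the actual work may use a different vocabulary and model:

```python
import random

# Toy character-level tokenizer for SMILES strings (an assumption;
# the paper may use atom-level or subword tokens).
def tokenize(smiles):
    return list(smiles)

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    """Stage 1 data prep: randomly mask tokens for masked-LM pretraining.

    Returns (corrupted, labels), where labels hold the original token at
    masked positions and None elsewhere.
    """
    rng = rng or random.Random(0)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            corrupted.append(MASK)
            labels.append(tok)
        else:
            corrupted.append(tok)
            labels.append(None)
    return corrupted, labels

# Stage 2 then trains the same encoder to predict per-molecule chemical
# properties; here just the shape of the supervised data (values illustrative).
property_dataset = [("CCO", {"logP": -0.31}), ("c1ccccc1", {"logP": 1.69})]

toks = tokenize("CCO")
corrupted, labels = mask_tokens(toks, mask_prob=0.5)
assert len(corrupted) == len(toks) == len(labels)
```

The point of the sketch is only the data flow: the masked-reconstruction objective shapes the encoder first, and property prediction reuses it.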

Interpolated polynomial multiple zeta values of fixed weight, depth, and height

We define interpolated polynomial multiple zeta values as a common generalization of multiple zeta values, multiple zeta-star values, interpolated multiple zeta values, symmetric multiple zeta values, and polynomial multiple zeta values. We then compute the generating function of the sum of interpolated polynomial multiple zeta values of fixed weight, depth, and height.
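For readers unfamiliar with the objects being generalized, the two base cases (standard definitions, not specific to this paper) differ only in strict versus weak inequalities between the summation indices; the interpolated versions connect the two:

```latex
% Multiple zeta values: strict inequalities, admissible when k_1 >= 2.
\zeta(k_1,\dots,k_r) = \sum_{n_1 > n_2 > \cdots > n_r \ge 1}
  \frac{1}{n_1^{k_1} n_2^{k_2} \cdots n_r^{k_r}}

% Multiple zeta-star values: weak inequalities.
\zeta^{\star}(k_1,\dots,k_r) = \sum_{n_1 \ge n_2 \ge \cdots \ge n_r \ge 1}
  \frac{1}{n_1^{k_1} n_2^{k_2} \cdots n_r^{k_r}}
```

Here the weight is $k_1 + \cdots + k_r$, the depth is $r$, and the height is the number of indices $k_i \ge 2$, which are the three parameters fixed in the generating function.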


We're happy to announce that our paper "Towards high-accuracy deep learning inference of compressible turbulent flows over aerofoils" is now published in "Computers & Fluids" and can be enjoyed at https://authors.elsevier.com/a/1g3shAQO4pqSu. Congrats, Liwei!

The source code is also available at: https://github.com/tum-pbs/coord-trans-encoding

The trained neural network resolves all relevant flow structures, such as shocks, with an average error below 0.3% for turbulent transonic cases.
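As a rough illustration of what a sub-0.3% average error means, here is a minimal sketch (not the paper's evaluation code; the exact normalization used there may differ) of a mean relative error over a flow field:

```python
def mean_relative_error(pred, ref):
    """Average |pred - ref| over the field, normalized by the largest
    reference magnitude so the result is a dimensionless fraction."""
    scale = max(abs(r) for r in ref)
    return sum(abs(p - r) for p, r in zip(pred, ref)) / (len(ref) * scale)

# Hypothetical samples of a field quantity (e.g. pressure coefficient)
# from a reference solver and a neural-network prediction.
reference = [1.00, 0.85, 1.20, 0.95]
predicted = [1.001, 0.849, 1.202, 0.951]

err = mean_relative_error(predicted, reference)
assert err < 0.003  # i.e. below 0.3%
```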

Our paper on adversarially-robust regression was accepted to SaTML 2023 (https://satml.org) -- the first ever IEEE Conference on Secure and Trustworthy Machine Learning!

I'm really excited about this conference and hoping to see it take off. There's so much important work to do in this area.
#SaTML #AdversarialML


We recently put out a position paper titled "Neurosymbolic Programming for Science"
https://arxiv.org/abs/2210.05050

This position is informed by our experience collaborating with scientists: science is an iterative process of analyzing data, proposing hypotheses, and conducting experiments. Because scientists reason more readily in symbolic terms, it is important to develop frameworks that natively combine both the flexibility of neural networks and the rich semantics of symbolic models.

Title: "Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation"

Key ideas:
💡 Training attacks are *highly influential* to their targets
💡 Targets have *anomalous influence distributions*
💡 Attacks are the targets’ *top influences*

In other words: Stopping training set attacks is an influence estimation problem!
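The three key ideas above could be sketched as follows. This is a deliberately toy, hypothetical illustration (not the paper's implementation): raw influence scores of training points on a suspected target are renormalized so that points with large raw scores cannot dominate merely by magnitude, and the top renormalized influences are flagged as attack candidates:

```python
def renormalize(influences):
    """Scale raw influence scores so their absolute values sum to 1,
    making influence distributions for different targets comparable."""
    total = sum(abs(v) for v in influences.values()) or 1.0
    return {k: v / total for k, v in influences.items()}

def top_influences(influences, k=2):
    """Return the k training points with the largest renormalized influence
    on the suspected target -- per the idea above, likely attack points."""
    norm = renormalize(influences)
    return sorted(norm, key=lambda i: norm[i], reverse=True)[:k]

# Illustrative raw influence of each training point on one target prediction.
raw = {"clean_1": 0.05, "clean_2": 0.04, "poison_a": 0.90, "poison_b": 0.70}
suspects = top_influences(raw, k=2)
assert set(suspects) == {"poison_a", "poison_b"}
```

The toy captures only the detection logic; computing the influence scores themselves (e.g. via gradient-based influence estimation) is the expensive part in practice.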

#Introduction I am hoping to join a welcoming and diverse community here. What I liked about Twitter was the convos about ML, the new papers, ideas, and feeling connected to other researchers across the globe. Looking forward to rebuilding this on an open platform without the volatile billionaire bit.

I’m a professor of CS and will occasionally post something about geometric (graph) deep learning, attempts to bridge discrete and continuous learning, and applications in the sciences.