antonio vergari

@nolovedeeplearning
human being | assistant prof in #ML #AI @ancAtEd
@EdinburghUni | prev @UCLA @MPI_IS | #probabilistic #inference #tractable #generative #neuro #symbolic | he/him
nolovedeeplearning.com

About to fly to #Germany for a short #research tour of Saarland University, Max Planck Institute for Intelligent Systems, University of Tübingen, University of Stuttgart and TU Darmstadt!

Super happy to meet old friends and colleagues @IValeraM Steffen Staab @mniepert @bschoelkopf @PhilippHennig @Matthiasbetghe @kerstingAIML @javaloyML and more!

Today our own Diego Oyarzún spoke at the #ANC #Seminar on

"Opportunities and challenges for #deep #learning in Biotechnology"

and previewed the upcoming Alan Turing Institute Workshop on #AI #Engineering #Biology and Beyond, to be held at the School of Informatics in #Edinburgh in March 2023!

We have a Lecturer/Reader (Asst/Assoc Prof US) position in #ML at @InfAtEd!

Join us in #Edinburgh to do world leading research in #ML as well as foundational #AI, #CS and #NLP.

👉https://edin.ac/3XjVfmK
🗓️ Deadline: 10 Jan 2023
💬 Chris Williams, Amos Storkey and myself

The city of #Edinburgh is one of the best places to live in the world!

Happy to chat at #NeurIPS2022 with anyone interested!

Pls Share!

Lecturer or Reader in Machine Learning

Applications are invited for an academic position in machine learning in the School of Informatics at the University of Edinburgh, as part of a continuing expansion in Machine Learning and Artificial Intelligence.

University of Edinburgh Jobs

@zoubin I find the cube iconic!

It also inspired me to try to better understand and systematize several other topics!

A couple more #lectures next week for the #MLPR course on probabilistic #ML at the School of #Informatics in #Edinburgh

While going through the recordings of past lectures, I found your #cube materializing on my #slides, @zoubin!

From #routing to #hierarchical multi-label classification and user #preference learning, SPLs outperform other baselines that relax constraints or use problem-specific architectures.

Even when they predict the wrong labels, they still form a valid configuration!

Join Kareem Ahmed, Stefano Teso, Kai-Wei Chang, @guy and me at #NeurIPS2022 to talk about #SPLs and how to get #neural #nets to behave the way we #expect them to!

📜https://openreview.net/forum?id=o-mxIWAY1T8
🖥️https://github.com/KareemYousrii/SPL

6/6

Semantic Probabilistic Layers for Neuro-Symbolic Learning

We design a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network guaranteeing its predictions are consistent with a set of predefined symbolic...

OpenReview

How?

SPLs realize a #tractable product of #experts via 2 #circuits.

One encodes an #expressive distribution over the labels, the other compactly #compiles the #symbolic #constraint!

We can compute #exact #gradients because the product can be normalized exactly in a single feedforward pass!

This can be of interest to many #probabilistic folks!

cc @nbranchini @avehtari @PhilippHennig

5/
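The product-of-experts idea above can be sketched in a few lines. This is a toy illustration, not the SPL codebase: brute-force enumeration over a handful of binary labels stands in for the tractable circuit operations, and `q`, `c`, and the "exactly one label" constraint are all invented for the example.

```python
import itertools
import math

n = 3  # number of binary labels

def q(y):
    # unnormalized "expressive" score over labels
    # (stands in for the expressive circuit)
    return math.exp(sum(y))

def c(y):
    # symbolic constraint as a 0/1 indicator: exactly one active label
    # (stands in for the compiled constraint circuit)
    return 1.0 if sum(y) == 1 else 0.0

configs = list(itertools.product([0, 1], repeat=n))

# normalizer of the product of the two "experts"
Z = sum(q(y) * c(y) for y in configs)

def p(y):
    # the product distribution: only constraint-satisfying
    # configurations get nonzero mass
    return q(y) * c(y) / Z

# every configuration with nonzero probability satisfies the constraint
assert all(c(y) == 1.0 for y in configs if p(y) > 0)
print(round(sum(p(y) for y in configs), 6))  # → 1.0
```

With real circuits the normalizer is computed by a single feedforward pass over the circuit instead of this exponential enumeration; the point here is only that the product assigns zero mass to every invalid configuration by construction.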

Our #Semantic #Probabilistic #Layers (#SPLs) instead guarantee 100% of the time that predictions satisfy the injected constraints!

They can be readily used in deep nets as they can be trained by #backprop and #maximum #likelihood #estimation.

4/

The usual recipe to inject constraints is to #relax them to make them #differentiable, or to enforce them during training by encoding them into auxiliary #losses...

This, however, does not guarantee that predictions will satisfy the constraints at test and #deployment time...

3/
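A minimal sketch of why the auxiliary-loss recipe above gives no test-time guarantee. The penalty, the constraint ("exactly one active label"), and the prediction values are all made up for illustration, not taken from the paper:

```python
# Relax the symbolic constraint sum(y) == 1 into a differentiable
# penalty on predicted probabilities, as in the usual recipe.
def exactly_one_penalty(probs):
    return (sum(probs) - 1.0) ** 2

# A hypothetical trained net's output probabilities at test time:
test_pred = [0.9, 0.8, 0.1]

# Decoding to hard labels by thresholding:
hard = [1 if p > 0.5 else 0 for p in test_pred]

print(hard)            # [1, 1, 0]: two active labels
print(sum(hard) == 1)  # False: the constraint is violated anyway
```

The penalty only pushes the model towards satisfying the constraint on average during training; nothing stops an individual test-time prediction from violating it.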

We deal with #constraints over #structured outputs.

E.g., when predicting edges in a grid for #routing, they need to form #valid paths.

When predicting user #preferences, the output should form valid #rankings, and in multi-label classification it should respect the label #hierarchy!

2/
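The hierarchy constraint mentioned above can be sketched as a simple validity check; the labels and parent/child edges below are invented for illustration:

```python
# Toy label hierarchy as child -> parent edges: a multi-label
# prediction is valid only if every predicted label's ancestors
# are also predicted.
parents = {"dog": "mammal", "cat": "mammal", "mammal": "animal"}

def respects_hierarchy(labels):
    # every predicted label that has a parent must have it predicted too
    return all(parents[l] in labels for l in labels if l in parents)

print(respects_hierarchy({"animal", "mammal", "dog"}))  # True
print(respects_hierarchy({"dog"}))  # False: ancestors are missing
```

An SPL-style layer would compile such a check into a circuit so that only hierarchy-respecting label sets receive probability mass, rather than filtering predictions after the fact.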