#Neural nets struggle to #guarantee that their predictions always conform to our prior knowledge!

We devise a drop-in replacement layer that injects given #symbolic #constraints while retaining #exact and #efficient gradient optimization in our #NeurIPS paper:

https://openreview.net/forum?id=o-mxIWAY1T8

1/🧵

Semantic Probabilistic Layers for Neuro-Symbolic Learning

We design a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network guaranteeing its predictions are consistent with a set of predefined symbolic...

OpenReview

We deal with #constraints over #structured outputs.

E.g., when predicting edges in a grid for #routing, they need to form #valid paths.

When predicting user #preferences, the output should be a valid #ranking, and multi-label classification should respect the label #hierarchy!

2/
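The structured constraints above can be made concrete with tiny validity checks. A hypothetical illustration (the encodings and function names are mine, not the paper's):

```python
# Toy checks for two of the constraints mentioned in the thread
# (illustrative encodings, not the paper's actual circuit representation).

def respects_hierarchy(y, parent):
    """Multi-label: a child label may only be on if its parent label is on."""
    return all(y[parent[i]] for i, on in enumerate(y)
               if on and parent[i] is not None)

def is_ranking(perm):
    """Preferences as a permutation matrix: exactly one 1 per row and column."""
    n = len(perm)
    rows_ok = all(sum(row) == 1 for row in perm)
    cols_ok = all(sum(row[j] for row in perm) == 1 for j in range(n))
    return rows_ok and cols_ok

# "cat" (label 1) requires "animal" (label 0); label 2 is a root.
assert respects_hierarchy((1, 1, 0), parent=[None, 0, None])
assert not respects_hierarchy((0, 1, 0), parent=[None, 0, None])
assert is_ranking([[0, 1], [1, 0]])
```

Relaxed losses penalize violations of checks like these on average; SPLs instead rule invalid outputs out entirely.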

The usual recipe to inject constraints is to #relax them to make them #differentiable, or to enforce them during training by encoding them into auxiliary #losses...

This, however, does not guarantee that predictions will satisfy the constraints at test and #deployment time...

3/

Our #Semantic #Probabilistic #Layers (#SPLs) instead guarantee that predictions satisfy the injected constraints, 100% of the time!

They can be readily used in deep nets as they can be trained by #backprop and #maximum #likelihood #estimation.

4/

How?

SPLs realize a #tractable product of #experts via 2 #circuits.

One encodes an #expressive distribution over the labels, the other compactly #compiles the #symbolic #constraint!

We can compute #exact #gradients because the product can be normalized in a single feedforward pass!

This can be of interest to many #probabilistic folks!

cc @nbranchini @avehtari @PhilippHennig

5/
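The product-of-experts idea can be sketched on a toy label space. A minimal sketch, assuming brute-force enumeration over a tiny support (real SPLs compile the constraint into a circuit so normalization stays tractable):

```python
import itertools
import math

# q(y | x)  ∝  p(y | x) * c(y), where c(y) = 1 iff y satisfies the constraint.
# Toy version: p is a fully factorized distribution over binary labels,
# c is a Python predicate, and we normalize by explicit enumeration.

def spl_toy(logits, constraint):
    """logits: per-label scores from a neural net; constraint: y -> bool."""
    probs = [1.0 / (1.0 + math.exp(-l)) for l in logits]

    def p(y):  # independent-Bernoulli p(y | x)
        return math.prod(pi if yi else 1 - pi for pi, yi in zip(probs, y))

    # Product of experts: mask out invalid assignments, then renormalize.
    scores = {y: p(y)
              for y in itertools.product([0, 1], repeat=len(logits))
              if constraint(y)}
    Z = sum(scores.values())  # exact normalization in one pass over the support
    return {y: s / Z for y, s in scores.items()}

# Hierarchy constraint: label 1 ("dog") implies label 0 ("animal").
q = spl_toy([0.2, 1.5], lambda y: y[0] >= y[1])
assert all(y[0] >= y[1] for y in q)        # every prediction is valid
assert abs(sum(q.values()) - 1.0) < 1e-9   # exactly normalized
```

Because Z is computed exactly, the loss -log q(y|x) is an exact likelihood and its gradients flow through by ordinary #backprop.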

From #routing to #hierarchical multi-label classification and user #preference learning, SPLs outperform other baselines that relax constraints or use problem-specific architectures.

Even when SPLs predict the wrong labels, the predictions still form a valid configuration!

Join Kareem Ahmed, Stefano Teso, Kai-wei Chang, @guy
and me at
#NeurIPS2022 to talk about #SPLs and how to get #neural #nets to behave the way we #expect them to!

📜https://openreview.net/forum?id=o-mxIWAY1T8
🖥️https://github.com/KareemYousrii/SPL

6/6
