Now Chris Williams is presenting a principled probabilistic way to train #capsule networks (originally by @geoffreyhinton) as generative models.

A variational approach to training part-based generative models.

#ANC #Seminar #ML #AI #probabilistic #generative #modeling

@nolovedeeplearning Cool! I thought the research on capsule networks had died out since apparently they are expensive to train and don't provide enough benefits compared to CNNs.

@mathieualain Not true; the biggest benefit is having a clear part-based decomposition.

The more structure, the fewer examples you need.

@nolovedeeplearning Interesting! I need to learn more about them then šŸ˜…
@mathieualain @nolovedeeplearning We need models that ā€œcompartmentalizeā€ the knowledge locally, while keeping it distributed for robustness reasons. It’s a tough problem, and little is known about how and whether this happens in existing networks.

@emtiyaz @mathieualain

Totally agree!

Looking forward to hearing from you @emtiyaz on this recent paper, where you can have a #differentiable #layer that encodes #logical #constraints and provably #guarantees that the predictions of a #neural net satisfy the constraints!

By making everything modular and compartmentalized (plug&play), you can easily integrate #symbolic and #neural #reasoning.

https://openreview.net/forum?id=o-mxIWAY1T8

Semantic Probabilistic Layers for Neuro-Symbolic Learning

We design a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network guaranteeing its predictions are consistent with a set of predefined symbolic...


@nolovedeeplearning @mathieualain Thanks for sharing. It’s a beautifully written paper. Congratulations!

I wish I could understand Sec. 3 better, specifically how PCs enable tractable inference compared to, e.g., decomposable PGMs (I used to work on those earlier). I hope someday I get time to do it. Is there a tutorial somewhere for people like me who know PGMs well?

@emtiyaz @mathieualain

tl;dr: classical #PGMs can be compiled into #circuits #PCs and then #overparameterized to increase #expressiveness while retaining #tractability.

A gentle introduction for #probabilistic and #PGM folks is this paper https://web.cs.ucla.edu/~guyvdb/papers/ProbCirc20.pdf

A companion #video #tutorial is here https://www.youtube.com/watch?v=2RAG5-L9R70

I will do a new version of this tutorial with YooJung Choi and Robert Peharz at #NeurIPS2022 in a month!

Showing latest advancements in #reliable #inference with #PCs!

@nolovedeeplearning @emtiyaz Do they have known limitations compared to PGMs?
@nolovedeeplearning @emtiyaz Also, what is belief propagation on circuits? Is it a lot different?

@mathieualain @emtiyaz Computing marginals can be done in a single #feedforward pass of the #circuit!

Another way to see a circuit compiled from a #PGM: as a data #structure that compactly encodes the trace of the sum-product algorithm. #dynamic #programming
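To make the "one feedforward pass" point concrete, here is a minimal toy sketch (my own example, not from the thread or the paper): a tiny smooth and decomposable circuit over two binary variables, where marginalising a variable just means setting both of its input indicators to 1 before the bottom-up pass.

```python
# Toy probabilistic circuit over two binary variables X1, X2.
# Hypothetical weights chosen for illustration only.

def leaf(var, state, evidence):
    """Input indicator unit: evaluates to 1.0 if the variable is
    marginalised out (evidence is None), otherwise 1.0 iff the
    observed value matches this indicator's state."""
    return 1.0 if evidence[var] is None else float(evidence[var] == state)

def circuit(evidence):
    # Leaves: indicators [X1=0], [X1=1], [X2=0], [X2=1]
    x1_0 = leaf("X1", 0, evidence)
    x1_1 = leaf("X1", 1, evidence)
    x2_0 = leaf("X2", 0, evidence)
    x2_1 = leaf("X2", 1, evidence)
    # Sum units: mixtures over each variable (weights sum to 1 -> smooth)
    s1 = 0.3 * x1_0 + 0.7 * x1_1
    s2 = 0.6 * x2_0 + 0.4 * x2_1
    # Product unit over disjoint scopes {X1} and {X2} -> decomposable
    return s1 * s2

# Full evidence: p(X1=1, X2=0)
p_joint = circuit({"X1": 1, "X2": 0})    # 0.7 * 0.6 = 0.42
# Marginal p(X1=1): sum over X2 in the SAME single feedforward pass
p_marg = circuit({"X1": 1, "X2": None})  # 0.7 * 1.0 = 0.7
```

The same bottom-up evaluation answers both queries; no message passing or re-compilation is needed, which is exactly what distinguishes circuits from raw PGM inference.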

@mathieualain @emtiyaz Lots of them!

For example, integrating over arbitrary polytopes can be hard.

And even compiling a #PGM might result in a circuit whose size is worst-case exponential in the treewidth of the #PGM.

Note that you can still bound the treewidth and increase expressiveness via a richer parametrization (more #discrete #latent #variables).

@nolovedeeplearning @mathieualain Thanks. Just read the intro and it sounds magical. I had always thought it should be possible to convert intractable PGMs into much larger tree-like graphs where linear-time inference is possible. This sounds like exactly that, and I want to learn more.

If you have time, please come visit Japan and teach us more. I think using PCs as posterior candidates is a straightforward idea that I have been dreaming about for many years.

@emtiyaz @nolovedeeplearning @mathieualain There is indeed already some promising work in this direction and part of my fellowship at Aalto is centred around this question. Good to see that others are interested in this.
@emtiyaz @nolovedeeplearning @mathieualain Also, Antonio is a fun guy and a great speaker. I can recommend having him present in Japan.

@emtiyaz @mathieualain Thanks for the invitation, super appreciated!

Let's chat via Zoom one of these days? (or will you be at #NeurIPS2022 by any chance?)

@nolovedeeplearning @mathieualain Not there, but yes, let’s try Zoom sometime (I tend to forget though, so pardon me if I don’t get back right away).

@nolovedeeplearning @emtiyaz @mathieualain

Congrats on the NeurIPS tutorial! I would be delighted to join the list of presenters at a future tutorial on PCs.

@trappmartin @nolovedeeplearning @mathieualain Yes, congratulations! I wish I could go to NeurIPS, but it is in the US and I don’t have a visa. Hope we can get some of you to visit Japan sometime soon. Let me know if you or your collaborators are interested in visiting.