Now Chris Williams is presenting a principled probabilistic way to train #capsule networks (originally by @geoffreyhinton) as generative models.
A variational approach to training part-based generative models.
@mathieualain not true, the biggest benefit is having a clear part-based decomposition.
The more structure, the fewer examples you need.
Totally agree!
Looking forward to hearing from you @emtiyaz about this recent paper, where you can have a #differentiable #layer that encodes #logical #constraints and provably #guarantees that the predictions of a #neural net satisfy the constraint!
By making everything modular and compartmentalized (plug&play) you can easily integrate #symbolic and #neural #reasoning.
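For intuition only (a hypothetical toy sketch, not the paper's actual layer): keep probability mass only on label assignments that satisfy the constraint and renormalize, so the output distribution provably satisfies the constraint by construction, and every step stays differentiable.

import numpy as np

def constrained_layer(logits):
    # logits: shape (4,), one score per assignment (y1, y2) in {(0,0), (0,1), (1,0), (1,1)}
    satisfies = np.array([0.0, 1.0, 1.0, 1.0])            # indicator of the constraint (y1 OR y2)
    unnorm = np.exp(logits - logits.max()) * satisfies    # zero out violating assignments
    return unnorm / unnorm.sum()                          # renormalize; all operations are differentiable

probs = constrained_layer(np.array([2.0, 0.5, 1.0, -1.0]))
print(probs)   # probs[0] == 0: p(y1=0, y2=0) is provably zero, so the constraint holds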
@nolovedeeplearning @mathieualain thanks for sharing. It's a beautifully written paper. Congratulations!
I wish I could understand sec 3 better. Specifically, how PCs enable tractable inference compared to, e.g., decomposable PGMs (I used to work on those earlier). I hope someday I get time to do it. Is there a tutorial somewhere for people like me who know PGMs well?
tl;dr: classical #PGMs can be compiled into #circuits #PCs and then #overparameterized to increase #expressiveness while retaining #tractability.
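To make the compilation idea concrete, a minimal toy sketch (hypothetical classes, not from any particular library): a tiny Bayesian network X -> Y over binary variables compiled into a circuit that mirrors the factorization p(x, y) = p(x) p(y | x).

class Leaf:
    # indicator leaf: 1.0 if its variable takes its value in the assignment, else 0.0
    def __init__(self, var, value):
        self.var, self.value = var, value
    def eval(self, assignment):
        return 1.0 if assignment[self.var] == self.value else 0.0

class Product:
    # product node: multiplies children defined over disjoint sets of variables
    def __init__(self, children):
        self.children = children
    def eval(self, assignment):
        out = 1.0
        for child in self.children:
            out *= child.eval(assignment)
        return out

class Sum:
    # sum node: weighted (convex) combination of children over the same variables
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children  # list of (weight, child) pairs
    def eval(self, assignment):
        return sum(w * child.eval(assignment) for w, child in self.weighted_children)

# toy CPTs: p(X) and p(Y | X) over binary X, Y
p_x = {0: 0.3, 1: 0.7}
p_y_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}

# the compiled circuit encodes p(x, y) = sum_x p(x) [X = x] (sum_y p(y | x) [Y = y])
circuit = Sum([
    (p_x[x], Product([Leaf("X", x),
                      Sum([(p_y_given_x[x][y], Leaf("Y", y)) for y in (0, 1)])]))
    for x in (0, 1)
])

print(circuit.eval({"X": 1, "Y": 0}))   # 0.7 * 0.2 = 0.14, the joint probability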
A gentle introduction for #probabilistic and #PGM folks is this paper https://web.cs.ucla.edu/~guyvdb/papers/ProbCirc20.pdf
A companion #video #tutorial is here https://www.youtube.com/watch?v=2RAG5-L9R70
I will do a new version of this tutorial with YooJung Choi and Robert Peharz at #NeurIPS2022 in a month!
Showing latest advancements in #reliable #inference with #PCs!
@mathieualain @emtiyaz computing marginals can be done in a single #feedforward pass of the #circuit!
Another way to see a circuit compiled from a #PGM: a data #structure that compactly encodes the trace of the sum-product algorithm. #dynamic #programming
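Continuing the hypothetical toy sketch above: marginalizing a variable amounts to letting its indicator leaves output 1.0, and the same single bottom-up pass then returns the marginal.

def eval_marginal(node, partial):
    # bottom-up pass; leaves of variables missing from `partial` output 1.0,
    # which is exactly what marginalizing them out requires
    if isinstance(node, Leaf):
        return 1.0 if node.var not in partial else node.eval(partial)
    if isinstance(node, Product):
        out = 1.0
        for child in node.children:
            out *= eval_marginal(child, partial)
        return out
    return sum(w * eval_marginal(child, partial) for w, child in node.weighted_children)

print(eval_marginal(circuit, {"X": 1}))   # p(X=1) = 0.7, one pass, no explicit sum over Y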
@mathieualain @emtiyaz lots of!
For example, integrating over arbitrary polytopes can be hard.
Or even compiling a #PGM might result in a circuit whose size is worst-case exponential in the treewidth of the #PGM.
Note that you can still bound the treewidth and increase expressiveness by a richer parametrization (more #discrete #latent #variables)
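A hedged toy illustration of that overparameterization (hypothetical code): keep the same tree-shaped scope decomposition over X and Y, but give the root sum node K latent components instead of 2. The circuit grows only linearly in K, while the underlying structure, and hence treewidth, stays fixed.

import numpy as np

rng = np.random.default_rng(0)

def overparameterized_circuit(K):
    # root sum node with K latent components; each component is a product of
    # independent per-component distributions over X and Y
    weights = rng.dirichlet(np.ones(K))
    components = [(rng.dirichlet(np.ones(2)), rng.dirichlet(np.ones(2))) for _ in range(K)]
    return weights, components

def joint(weights, components, x, y):
    # evaluate the mixture: sum_k w_k * p_k(x) * p_k(y)
    return sum(w * px[x] * py[y] for w, (px, py) in zip(weights, components))

w, comps = overparameterized_circuit(K=16)   # a much richer latent variable than K=2
print(joint(w, comps, x=1, y=0))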
@nolovedeeplearning @mathieualain thanks. Just read the intro and it sounds magical. I had always thought it should be possible to convert intractable PGMs into much larger tree-like graphs where linear-time inference is possible, and this sounds like exactly that. I want to learn more.
If you have time please come to Japan to visit and teach us more. I think using PCs as posterior candidates is a straightforward idea which I have been dreaming of for many years.
@emtiyaz @mathieualain thanks for the invitation, super appreciated!
Let's chat via Zoom one of these days? (or will you be at #NeurIPS2022 by any chance?)
@nolovedeeplearning @emtiyaz @mathieualain
Congrats on the NeurIPS tutorial! I would be delighted to join the list of presenters at a future tutorial on PCs.