42 Followers
68 Following
17 Posts

Research Engineer @ Huawei Research Vancouver. Thinking about probabilistic programming & structure discovery recently.

Hobby: typography (in particular CJK fonts)

website: https://luxxxlucy.github.io

We're starting a community Slack for anybody interested in Neurosymbolic AI. (Drivers include organizers of the annual workshop on the topic, and the editors-in-chief and editorial board members of the journal Neurosymbolic Artificial Intelligence, which we're currently starting.)
If you'd like to be on the Slack, let me know (or tell anybody else who's already on it), and you'll then get an invite by email.

@tarekbesold @ArturGarcez @nesy_ai

However, the general circuit induction problem is more serious: none of the tricks I experimented with gave any promising results. Essentially this is because we are optimizing a computation graph that keeps changing in a non-smooth way, and the gradient just becomes too noisy to be useful.
A special parity chain task is discussed, where the relaxation approach is bound to fail as the problem size grows (the number of local minima grows exponentially with the problem size).
Some tricks do help here, so we can solve this parity chain task reliably. (BTW, the optimization process is also quite interesting visually.)
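A minimal sketch of what the relaxation looks like (my own illustration, not TerpreT's actual implementation): each unknown bit is relaxed to a probability via a sigmoid, XOR is replaced by its multilinear extension, and plain gradient descent is run on the squared error against the observed pairwise parities. All names and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_xor(a, b):
    # Differentiable relaxation of XOR for values in [0, 1];
    # equals exact XOR at the corners {0, 1}.
    return a + b - 2 * a * b

K = 8                                          # chain length (illustrative)
target = rng.integers(0, 2, size=K).astype(float)
obs = soft_xor(target[:-1], target[1:])        # observed parities along the chain

theta = rng.normal(0.0, 1.0, size=K)           # relaxed bits, as logits
lr = 0.5
for step in range(2000):
    p = 1.0 / (1.0 + np.exp(-theta))           # sigmoid -> soft bits
    pred = soft_xor(p[:-1], p[1:])
    err = pred - obs
    loss = float(np.mean(err ** 2))
    # Backprop by hand: d pred_i / d p_i = 1 - 2 p_{i+1}, and symmetrically.
    d_pred = 2.0 * err / err.size
    dp = np.zeros(K)
    dp[:-1] += d_pred * (1.0 - 2.0 * p[1:])
    dp[1:] += d_pred * (1.0 - 2.0 * p[:-1])
    theta -= lr * dp * p * (1.0 - p)           # chain rule through the sigmoid

print(f"final loss: {loss:.4f}")
```

For small K a run like this often recovers the chain (only up to a global bit-flip, which the pairwise parities cannot distinguish); as K grows, gradient descent increasingly lands in spurious local minima, which is exactly the failure mode described above.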

Updated with some new thoughts regarding the TerpreT problem and my naive solution:

https://luxxxlucy.github.io/projects/2021_terpret/index.html

The original TerpreT paper (https://arxiv.org/abs/1608.04428)
discussed solving program induction by gradient-based optimization (after making the program differentiable via relaxation).

#probprog #programsynthesis #neuralnetwork #deeplearning #NeuroSymbolic

Papers accepted at #NeurIPS are now visible on OpenReview https://openreview.net/group?id=NeurIPS.cc/2022/Conference
NeurIPS 2022 Conference


@emtiyaz @mathieualain

tl;dr: classical #PGMs can be compiled into #circuits (#PCs) and then #overparameterized to increase #expressiveness while retaining #tractability.
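As a toy illustration of the compilation idea (my own minimal sketch, not the paper's pipeline): a two-node Bayesian network A → B compiled into a small sum-product circuit whose leaves are evidence indicators, so marginals fall out of a single feed-forward evaluation. The network and numbers are made up for the example.

```python
import numpy as np

# Tiny Bayesian network A -> B over binary variables (illustrative numbers).
pA = np.array([0.3, 0.7])               # P(A=0), P(A=1)
pB_given_A = np.array([[0.9, 0.1],      # P(B | A=0)
                       [0.2, 0.8]])     # P(B | A=1)

def circuit(ev_a, ev_b):
    """Evaluate the compiled sum-product circuit on evidence indicators.

    ev_a / ev_b are 0/1 indicator vectors over a variable's states;
    setting both entries to 1 marginalizes that variable out.
    """
    # Root: a sum unit over A's states. Each child is a product of an
    # input leaf for A and a sum unit over B weighted by the CPT row.
    total = 0.0
    for a in range(2):
        leaf_a = ev_a[a]
        sum_b = sum(pB_given_A[a, b] * ev_b[b] for b in range(2))
        total += pA[a] * leaf_a * sum_b
    return total

# Marginal P(B=1): marginalize A out, clamp B=1.
print(circuit([1, 1], [0, 1]))          # 0.3*0.1 + 0.7*0.8 = 0.59
```

Overparameterization in the tl;dr sense would then mean replacing each scalar weight with a larger learnable mixture while keeping this same tractable sum-product structure.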

A gentle introduction for #probabilistic and #PGM folks is this paper https://web.cs.ucla.edu/~guyvdb/papers/ProbCirc20.pdf

A companion #video #tutorial is here https://www.youtube.com/watch?v=2RAG5-L9R70

I will do a new version of this tutorial with YooJung Choi and Robert Peharz at #NeurIPS2022 in a month!

Showing latest advancements in #reliable #inference with #PCs!

The slides and recording of our COLING 2022 tutorial on Neuro-Symbolic #NLProc are now available at: https://ns4nlp-coling.github.io

This is joint work with Dan Roth, Yejin Choi, Vivek Srikumar, Dan Goldwasser and Sean Welleck, none of whom I know how to find/tag on Mastodon :)

NS4NLP: Neuro-Symbolic Modeling for NLP

NS4NLP Coling 2022 Tutorial

Does anyone happen to know any mailing list/forum/discussion group for #neurosymbolic AI? I believe it is gradually gaining momentum these days, yet I am having a hard time finding relevant work.

Or perhaps it is because neurosymbolic AI is by nature just too heterogeneous, ill-defined and painfully diverse?

I am a #Lecturer (Assistant Prof) in #ML at the University of #Edinburgh.

I research how #deep #probabilistic #models can be #reliable and can do #complex #reasoning with #guarantees in the real world and with #constraints.

Will toot about #automating #tractable #neuro #symbolic #AI and its #hype, #CS and #probabilistic #programming but also anything from #music to #politics and #fightsforrights.

My apologies in advance!

#introduction

@roydanroy
Mine is Crafting Interpreters by Robert Nystrom