dileeplearning

468 Followers
173 Following
62 Posts
AGI Research @DeepMind. Ex co-founder & CTO of both Vicarious AI (acqd by Alphabet) and Numenta. Triply EE. BTech IIT Mumbai, MS&PhD Stanford.
www.dileeplearning.com

If you are into the hippocampus, PFC, cognitive maps, etc., you've likely encountered Successor Representations (SR). But SR is often misunderstood and ascribed properties it doesn't have. In this blog I describe some of the limitations of SR as a model of cognitive maps.

https://blog.dileeplearning.com/p/a-critique-of-successor-representations

A critique of successor representations as a model of learning in the hippocampus

The successor representation (SR) is a popular, influential, and often-cited model of place cells in the hippocampus.
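For readers new to SR: it summarizes, for each state, the discounted expected future occupancy of every other state under a fixed policy, via the standard closed form M = (I − γT)⁻¹. Below is a minimal sketch on a hypothetical 4-state ring world (the environment, transition probabilities, and γ are illustrative assumptions, not from the post being discussed):

```python
import numpy as np

# Successor representation: M = sum_t (gamma * T)^t = (I - gamma * T)^(-1),
# where T is the policy-conditioned state transition matrix.
n = 4
T = np.zeros((n, n))
for s in range(n):
    T[s, (s + 1) % n] = 0.5  # step clockwise with prob 0.5
    T[s, (s - 1) % n] = 0.5  # step counter-clockwise with prob 0.5

gamma = 0.9
M = np.linalg.inv(np.eye(n) - gamma * T)

# Row M[s] gives the discounted expected future occupancy of each
# state when starting from state s; rows sum to 1 / (1 - gamma).
```

Under the SR-as-place-cells reading, row (or column) s of M is taken as the population response tied to state s; the blog's critique concerns what this construction can and cannot explain about hippocampal responses.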

Artificial General Ideas
If civil engineering were like AI ... #AGIComics

Have a look at our recent paper and the associated blog to change how you think about the hippocampus. https://www.science.org/doi/10.1126/sciadv.adm8470
"Space is a latent sequence: A theory of the hippocampus"

The blog explains, with animated visuals, how interpreting sequential responses of hippocampal neurons in spatial/Euclidean terms is problematic.

https://blog.dileeplearning.com/p/space-is-a-sensory-motor-sequence

Planning to participate in the ARC Prize to win a million bucks? Read this blog first!

https://blog.dileeplearning.com/p/how-to-get-a-million-bucks-quick

How to get a million bucks: quick thoughts on cognitive programs and the ARC challenge.

The Abstraction and Reasoning Corpus (ARC) challenge by François Chollet has gained renewed attention due to the $1M prize announcement. This challenge is interesting to me because the idea of "abstraction" as "synthesizing cognitive programs" is something my team has worked on and…

#AGIComics has finally figured out its investment strategy...
Thrilled to share what I’ve been working on for the last two years - a new way to solve one of the most fundamental problems in quantum physics, computing excited states! https://arxiv.org/abs/2308.16848
Natural Quantum Monte Carlo Computation of Excited States

We present a variational Monte Carlo algorithm for estimating the lowest excited states of a quantum system which is a natural generalization of the estimation of ground states. The method has no free parameters and requires no explicit orthogonalization of the different states, instead transforming the problem of finding excited states of a given system into that of finding the ground state of an expanded system. Expected values of arbitrary observables can be calculated, including off-diagonal expectations between different states such as the transition dipole moment. Although the method is entirely general, it works particularly well in conjunction with recent work on using neural networks as variational Ansätze for many-electron systems, and we show that by combining this method with the FermiNet and Psiformer Ansätze we can accurately recover vertical excitation energies and oscillator strengths on molecules as large as benzene. Beyond the examples on molecules presented here, we expect this technique will be of great interest for applications of variational quantum Monte Carlo to atomic, nuclear and condensed matter physics.

arXiv.org
#AGIComics discovers why philosophers leave a vacuum when it comes to consciousness...
Do LLMs understand? Check out my post where I explain why they don't understand or have common sense like humans, and discuss the essential ingredients of human-like understanding. https://dileeplearning.substack.com/p/ingredients-of-understanding
Ingredients of understanding

Thoughts on how human understanding is different from LLM "understanding"


Cool post by Dileep George (@dileeplearning) pointing out that there are likely fundamental limits on what we can achieve by scaling up LLMs, and introducing a useful metaphor: scaled-up zeppelins vs. underperforming airplanes in the early 20th century.

https://dileeplearning.substack.com/p/welcome-to-the-exciting-dirigibles-500

Welcome to the exciting dirigibles era of AI

Notes for navigating large language models and beyond...

I started a new blog! Check out my first post on keeping a longer-term perspective on AI amidst the excitement about the possibilities of language models. Consider subscribing/sharing if you like the content. https://dileeplearning.substack.com/p/welcome-to-the-exciting-dirigibles-500?utm_campaign=auto_share
Welcome to the exciting dirigibles era of AI

Notes for navigating large language models and beyond...
