Joao Barbosa

@barbosa
905 Followers
366 Following
116 Posts

I am a computational neuroscientist at the École Normale Supérieure, Paris. I am interested in working memory, decision making, and all things low-rank, including recurrent neural networks and interactions between brain regions.

I plan to post mostly about the brain and machine learning but occasionally about world politics.

Homepage: https://jmourabarbosa.github.io

@cstross

I don't know if it's the Guardian specifically, but a weirdly large number of outlets (including the Guardian) are claiming that the far right has taken over the European Parliament.

The actual results show that the number of far-right MEPs has barely changed, but for some reason no one is reporting this.

EDIT: Yes, the far right did gain seats in some countries, but they lost seats in others. For whatever reason, the countries where the far-right vote collapsed aren't being reported on as much as the ones where it gained.

(SOURCE: European Union election results page at https://results.elections.europa.eu/en/european-results/2024-2029/)

2024 European election results | European Parliament

Official results of the 2024 European elections.

Lol Neil Gaiman says

"ChatGPT doesn't give you information. It gives you information-shaped sentences."

This is one of the better ones I have seen.

Good morning! It’s the first Tuesday in February, and so you’re all invited to look through Wikipedia’s List of common misconceptions (https://en.wikipedia.org/wiki/List_of_common_misconceptions) per xkcd custom.

I'm so glad Tawana Petty was here but I'm sure it wasn't fun at all.

"And so, there are many examples of existing harms that it would have been really great to have these voices of mostly white men who are in the tech industry, who did not pay attention to the voices of all those women who were lifting up these issues many years ago. And they’re talking about these futuristic possible risks, when we have so many risks that are happening today."

https://www.democracynow.org/2023/6/1/ai_bengio_petty_tegmark

Artificial Intelligence “Godfathers” Call for Regulation as Rights Groups Warn AI Encodes Oppression

We host a roundtable discussion with three experts in artificial intelligence on growing concerns over the technology’s potential dangers. Yoshua Bengio, known as one of the three “godfathers of AI,” is a professor at the University of Montreal and founder and scientific director at Mila–Quebec AI Institute. Bengio is also a signatory of the Future of Life Institute open letter calling for a pause on large AI experiments. He is joined on Democracy Now! by Tawana Petty, the director of policy and advocacy at the Algorithmic Justice League, an organization dedicated to raising awareness about the harms of AI, particularly its encoding of racism, sexism and other forms of oppression, and by Max Tegmark, a professor at MIT and president of the Future of Life Institute, which aims to address the existential risk of AI upon humanity.

Democracy Now!

Happy to let the world know that we just published a new paper!

Please take a look if you are interested in context-dependent decision making, across-area interactions, and low-rank RNNs 👇

https://www.nature.com/articles/s41467-023-42519-5

Early selection of task-relevant features through population gating - Nature Communications

How the brain selects relevant information in complex and dynamic environments remains poorly understood. Here, the authors reveal that distinct neural populations in rat auditory cortex gate stimuli based on context, which could be facilitated by top-down signals from the prefrontal cortex.

Nature
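For readers unfamiliar with the "low-rank RNN" framing the post mentions, here is a minimal toy sketch (not from the paper; all variable names are my own illustration): when the recurrent connectivity is rank one, say J = m nᵀ / N, the entire recurrent drive collapses onto a single scalar latent κ along the direction m, which is what makes these networks analytically tractable.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500                        # number of units
m = rng.normal(size=N)         # "output" connectivity vector
n = rng.normal(size=N)         # "input-selection" connectivity vector

def step(x, I, dt=0.1):
    """One Euler step of a rank-1 RNN: dx/dt = -x + (m n^T / N) tanh(x) + I.
    The rank-one structure means the recurrent input is just kappa * m,
    where kappa = n . tanh(x) / N is a single scalar latent variable."""
    kappa = n @ np.tanh(x) / N
    return x + dt * (-x + kappa * m + I)

x = rng.normal(scale=0.5, size=N)
for _ in range(100):
    x = step(x, I=0.0)

# The high-dimensional state is summarized by one number:
kappa = n @ np.tanh(x) / N
```

Context-dependent gating of the kind the paper studies can then be thought of as external inputs moving the dynamics of κ around, rather than rewiring the network.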

So this is interesting...

Washington Post reporter Taylor Lorenz @taylorlorenz, who is famous in these circles for being banned from Twitter then reinstated after much outcry, has conducted an experiment to determine if view counts on Twitter are legitimate.

Surprise, she presents proof that they are not, as shown in the Tweet screenshot below.

Many stay on Twitter because they believe their "engagement" is higher there, but now it seems they cannot trust the numbers.

#twittermigration

We are super happy to announce the 2023 edition of our School of Ideas in Neuroscience! One week with fantastic speakers and engaging discussions about theories in neuro, AI, and neurophilosophy!
https://nenckiopenlab.org/school-of-ideas-2023/

Registration open!

Speakers include @PessoaBrain @gregorykohn @MilekPl and not on mastodon (as far as I know): John Krakauer, Adrienne Fairhall, Nedah Nemati, Antonella Tramacere, Pamela Lyon, Aikaterini Fotopoulou, Nicolai Waniek, Carina Curto, Wiktor Młynarski & Kate Nave

Nencki Open Lab

@barbosa @lowrank_adrian @adel @ShahabBakht

I would call it something like "few-shot contextual inference". In the brain, there would be a higher-order region that infers context from the prompt and then modulates the LLM "top-down" to do the right thing. No "learning" is required here: the two networks are just doing their jobs and helping each other. The higher-order region uses working memory, so there is no learning (it's reversible). If the higher-order region appeals to long-term memory, like episodic memory, then learning occurs.

Prioritizing flexible working memory representations through retrospective attentional strengthening

https://www.sciencedirect.com/science/article/pii/S1053811923000502
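The mechanism sketched in the reply can be made concrete with a toy example (entirely hypothetical; the contexts, weights, and gating matrix below are my own illustration, not anything from the cited paper): a "higher-order" module infers a context from the prompt and multiplicatively gates the main network's units top-down, so behavior changes without any weight update.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical contexts held in working-memory-like activity,
# not in learned weights (so switching is fully reversible).
contexts = {"arithmetic": np.array([1.0, 0.0]),
            "rhyming":    np.array([0.0, 1.0])}

def infer_context(prompt):
    # Toy stand-in for the higher-order region's inference step.
    return contexts["arithmetic" if any(c.isdigit() for c in prompt) else "rhyming"]

W = rng.normal(size=(4, 3))           # fixed "LLM" weights: never updated
G = np.array([[1, 1, 0, 0],           # which units each context gates on
              [0, 0, 1, 1]], float)

def forward(x, prompt):
    gate = infer_context(prompt) @ G   # top-down gain per unit
    return gate * np.tanh(W @ x)       # same weights, different behavior

x = rng.normal(size=3)
out_math  = forward(x, "what is 2+2")     # units 0-1 active, 2-3 gated off
out_rhyme = forward(x, "rhyme with cat")  # units 2-3 active, 0-1 gated off
```

The point of the sketch: the same frozen network produces context-appropriate outputs purely through transient top-down gain, which is the "no learning required" claim in the reply.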

Hebbian deep learning!

"an algorithm that trains deep neural networks, without any feedback, target, or error signals. As a result, it achieves efficiency by avoiding weight transport, non-local plasticity, time-locking of layer updates, iterative equilibria, and (self-) supervisory or other feedback signals (...) Its increased efficiency and biological compatibility do not trade off accuracy compared to state-of-the-art bio-plausible learning, but rather improve it."

https://arxiv.org/abs/2209.11883

Hebbian Deep Learning Without Feedback

Recent approximations to backpropagation (BP) have mitigated many of BP's computational inefficiencies and incompatibilities with biology, but important limitations still remain. Moreover, the approximations significantly decrease accuracy in benchmarks, suggesting that an entirely different approach may be more fruitful. Here, grounded on recent theory for Hebbian learning in soft winner-take-all networks, we present multilayer SoftHebb, i.e. an algorithm that trains deep neural networks, without any feedback, target, or error signals. As a result, it achieves efficiency by avoiding weight transport, non-local plasticity, time-locking of layer updates, iterative equilibria, and (self-) supervisory or other feedback signals -- which were necessary in other approaches. Its increased efficiency and biological compatibility do not trade off accuracy compared to state-of-the-art bio-plausible learning, but rather improve it. With up to five hidden layers and an added linear classifier, accuracies on MNIST, CIFAR-10, STL-10, and ImageNet, respectively reach 99.4%, 80.3%, 76.2%, and 27.3%. In conclusion, SoftHebb shows with a radically different approach from BP that Deep Learning over few layers may be plausible in the brain and increases the accuracy of bio-plausible machine learning. Code is available at https://github.com/NeuromorphicComputing/SoftHebb.

arXiv.org
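The core idea in the abstract, Hebbian learning under soft winner-take-all competition with no feedback or error signals, can be sketched in a few lines. This is a simplified rule in the spirit of the paper, not the authors' exact algorithm; the learning rate and update form are my own illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

class SoftWTALayer:
    """One Hebbian layer with soft winner-take-all competition.
    No labels, targets, or error signals: each neuron's weights move
    toward inputs it "wins", scaled by its softmax activation, with a
    local decay term that keeps the weights bounded."""
    def __init__(self, n_in, n_out, lr=0.05):
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))
        self.lr = lr

    def step(self, x):
        u = self.W @ x                 # pre-activations
        y = softmax(u)                 # soft competition among neurons
        # Local Hebbian update: no gradient, no backprop, no feedback.
        self.W += self.lr * y[:, None] * (x[None, :] - u[:, None] * self.W)
        return y

layer = SoftWTALayer(n_in=20, n_out=5)
for _ in range(200):
    y = layer.step(rng.normal(size=20))
```

Stacking such layers (plus a linear readout trained separately) is what the abstract means by training deep networks "without any feedback, target, or error signals": every update above uses only information local to the layer.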