Lol Neil Gaiman says
"ChatGPT doesn't give you information. It gives you information-shaped sentences."
This is one of the better ones I have seen.
I am a computational neuroscientist at Ecole Normale Supérieure, Paris. I am interested in working memory, decision making and all things low-rank, including recurrent neural networks and interactions between brain regions.
I plan to post mostly about the brain and machine learning but occasionally about world politics.
| homepage | https://jmourabarbosa.github.io |
I don't know if it's the Guardian specifically, but a weirdly large number of outlets (including the Guardian) are claiming that the far right has taken over the European Parliament.
The actual results show that the number of far-right MEPs has barely changed, but for some reason no one is reporting this.
EDIT: Yes, the far right did gain seats in some countries, but it lost seats in others. For whatever reason, the countries where the far-right vote collapsed aren't being covered as much as the ones where it gained.
(SOURCE: European Union election results page at https://results.elections.europa.eu/en/european-results/2024-2029/)
I'm so glad Tawana Petty was here but I'm sure it wasn't fun at all.
"And so, there are many examples of existing harms that it would have been really great to have these voices of mostly white men who are in the tech industry, who did not pay attention to the voices of all those women who were lifting up these issues many years ago. And they’re talking about these futuristic possible risks, when we have so many risks that are happening today."
https://www.democracynow.org/2023/6/1/ai_bengio_petty_tegmark
We host a roundtable discussion with three experts in artificial intelligence on growing concerns over the technology’s potential dangers. Yoshua Bengio, known as one of the three “godfathers of AI,” is a professor at the University of Montreal and founder and scientific director at Mila–Quebec AI Institute. Bengio is also a signatory of the Future of Life Institute open letter calling for a pause on large AI experiments. He is joined on Democracy Now! by Tawana Petty, the director of policy and advocacy at the Algorithmic Justice League, an organization dedicated to raising awareness about the harms of AI, particularly its encoding of racism, sexism and other forms of oppression, and by Max Tegmark, a professor at MIT and president of the Future of Life Institute, which aims to address the existential risk of AI upon humanity.
Happy to let the world know that we just published a new paper!
Please take a look if you are interested in context-dependent decision-making, across-area interactions, and low-rank RNNs 👇
How the brain selects relevant information in complex and dynamic environments remains poorly understood. Here, the authors reveal that distinct neural populations in rat auditory cortex gate stimuli based on context, which could be facilitated by top-down signals from the prefrontal cortex.
So this is interesting...
Washington Post reporter Taylor Lorenz @taylorlorenz, who is famous in these circles for being banned from Twitter then reinstated after much outcry, has conducted an experiment to determine if view counts on Twitter are legitimate.
Surprise: she presents evidence that they are not, as shown in the tweet screenshot below.
Many stay on Twitter because they believe their "engagement" is higher there, but now it seems they cannot trust the numbers.
We are super happy to announce the 2023 edition of our School of Ideas in Neuroscience! One week with fantastic speakers and engaging discussions about theories in neuro, AI, and neurophilosophy!
https://nenckiopenlab.org/school-of-ideas-2023/
Registration open!
Speakers include @PessoaBrain @gregorykohn @MilekPl and not on mastodon (as far as I know): John Krakauer, Adrienne Fairhall, Nedah Nemati, Antonella Tramacere, Pamela Lyon, Aikaterini Fotopoulou, Nicolai Waniek, Carina Curto, Wiktor Młynarski & Kate Nave
@barbosa @lowrank_adrian @adel @ShahabBakht
I would call it something like "few-shot contextual inference". In the brain, there would be a higher-order region that infers context from the prompt and then modulates the LLM "top-down" to do the right thing. No "learning" is required here: the two networks are just doing their jobs and helping each other. The HO region uses working memory, so no learning occurs (it's reversible). If the HO region appeals to long-term memory, like episodic memory, then learning occurs.
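A toy sketch of that idea, with every name, weight matrix, and dimension invented purely for illustration: a frozen "base network" is multiplicatively gated by a context vector that a hypothetical higher-order module infers from the prompt, so behaviour changes with no weight updates anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "base network" (stand-in for the LLM): fixed weights, never updated.
W_base = rng.normal(size=(8, 8))

def base_forward(x, gain):
    """Base network whose units are multiplicatively gated by a
    top-down gain vector (the 'modulation' from the HO region)."""
    return np.tanh(gain * (W_base @ x))

# Hypothetical higher-order (HO) module: maps the prompt to a context
# signal held in a working-memory-like state -- reversible, no learning.
W_ho = rng.normal(size=(8, 8))

def infer_context(prompt_embedding):
    """Few-shot contextual inference: prompt -> gain pattern."""
    return 1.0 + 0.5 * np.tanh(W_ho @ prompt_embedding)

# Two different prompts induce two different modulations of the SAME
# frozen base network, so the computation changes without any learning.
x = rng.normal(size=8)
prompt_a = rng.normal(size=8)
prompt_b = rng.normal(size=8)
out_a = base_forward(x, infer_context(prompt_a))
out_b = base_forward(x, infer_context(prompt_b))
```

The point of the sketch is only that the context-dependence lives in activity (the gain vector), not in weights, which is what makes it reversible.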
Prioritizing flexible working memory representations through retrospective attentional strengthening
https://www.sciencedirect.com/science/article/pii/S1053811923000502