Sigh.

So it turns out we've mapped the neural connectome of Drosophila *and simulated it in silico*.

https://flywire.ai/

Pop-sci explainer here:

https://www.rathbiotaclan.com/whole-brain-emulation-achieved-scientists-run-a-fruit-fly-brain-in-simulation/

Key quote: "The step from a complete connectome to a working computational brain model is not trivial." And there's an even more important finding in this screenshot (alt text via OCR):

"The wiring is the computation".

/1

The experimenters then went on to hook up their Drosophila connectome to an anatomically detailed Drosophila body model within an open-source physics engine that "uses generalized coordinates and constraint-based contact dynamics to simulate rigid-body systems with high fidelity", including joint and antenna modeling, accurate modeling of surface adhesion, and compound eye simulation.

Lots of *really* interesting insights here.

/2

They managed to run a feedback loop between the full 127,400-neuron network in the biological connectome and the physical simulation: proprioceptive signals received by the model "fly" in the simulation produced spike trains fed back into the network, and THEY GOT RESULTS (again, see alt text of screencap: it's too verbose for a toot):

/3
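To make the closed loop in /2–/3 concrete: the cycle is network step → motor output → body physics → proprioception → back into the network. Here's a toy Python sketch of that cycle. To be clear, this is NOT the actual FlyWire/simulation code; the network size, weight matrix, and `body_step` physics stand-in are all invented purely for illustration.

```python
# Toy sketch of a connectome-in-the-loop simulation.
# Illustrates the feedback cycle only: neurons -> motor commands ->
# body physics -> proprioception -> neurons. All numbers and names
# here are hypothetical, not taken from the real pipeline.

import numpy as np

rng = np.random.default_rng(0)

N = 1000                            # stand-in for the ~127,400 neurons
W = rng.normal(0, 0.05, (N, N))    # synaptic weight matrix: "the wiring"
state = rng.normal(0, 1, N)        # membrane-potential-like state
motor_idx = slice(0, 10)           # neurons we treat as motor outputs
sensor_idx = slice(10, 20)         # neurons receiving proprioception

def body_step(motor_commands: np.ndarray) -> np.ndarray:
    """Stand-in for the physics engine: maps motor drive to
    proprioceptive feedback signals (10 hypothetical sensors)."""
    return np.tanh(motor_commands.mean()) * np.ones(10)

for t in range(100):
    # "The wiring is the computation": one recurrent network step.
    state = np.tanh(W @ state)
    # Motor neurons drive the simulated body...
    proprioception = body_step(state[motor_idx])
    # ...and the body feeds sensory signals back into the network.
    state[sensor_idx] += 0.1 * proprioception
```

The point of the sketch is just that there is no separate "program": the computation lives entirely in the weight matrix, and behavior emerges from the loop with the body.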

There is stuff missing, of course (alt text for screencap contains about 3 toots' worth of text explaining this): information about how the motor neurons connect to physical features of the body like the muscles, information on morphologically divergent neurons and fine detail on dendritic branching and synaptic inputs across dendritic compartments:

/4

... The next step on from Drosophila, the mouse brain, is 560 times larger—never mind a vastly more complex human brain. And to get the murine connectome we'll have to chop up *a lot* of brains: a human upload won't pass any kind of medical ethics review at this point!

But near-term, it's expected to yield "fundamentally new architectural principles for AI systems that are more sample-efficient, more robust, and more capable of behavioral generalization than current approaches"

/5

But I'm REALLY HAPPY right now because this kinda-sorta validates the key premise of the SF novel I just handed in last month (which involves serial reincarnation via destructive brain-slicing-and-imaging then imprinting onto an immature cortex, and then explores its disastrous societal failure modes).

... And it also hints that artificial consciousness might, eventually, be possible, if only via the hard path of doing it the same way we do it, only in simulation in silico.

/6 (ends)

@cstross very cool, thanks for sharing!
@mwl Also very cool, the Indian sci/tech news website that ran that feature! (From the writing style I initially thought it might be AI slop, but no: Indian English is just a bit different.)
@cstross @mwl this may not be a coincidence: many LLMs were trained by humans in English-speaking countries with lower labor costs, and some common wordings we associate with LLMs actually come from the variants of English spoken in those countries.
@pwassonchat @cstross @mwl

I'm not surprised by this at all

after getting asked to "please do the needful" in a bunch of emails by some Indian clients at an old job, I had to figure out the origin of the phrase

Turns out it is a remnant of old UK English that fell out of use elsewhere but still survives in Indian English, as opposed to any sort of English-as-a-second-language grammatical "error". There were a bunch of other examples as well
@rachel @cstross @mwl @pwassonchat I actually wouldn't think that phrase especially odd (Uk boomer)
@annehargreaves @rachel @cstross @mwl I'm French and to me it sounds like someone tried to translate "[veuillez] faire le nécessaire" too literally/word to word. Maybe that's where the old English got it from ?
@rachel @cstross @mwl @pwassonchat "prepone" as an opposite of "postpone" is one of my favourite quirks of Indian English
@ansuz @cstross @mwl @pwassonchat I also saw people ask for "Updation and Deletion" of records and honestly yeah that works
@rachel @cstross @mwl @pwassonchat yes! I used to work on collaborative editing software in France, and heard "edition" quite often to describe the process of editing.

It still sounds a little weird to me, but it's perfectly logical.
@rachel @cstross @mwl @pwassonchat It's fun how lots of phrases in other English speaking places, and that we Brits sneer at as being "not proper English", turn out to be older than what we now use in the UK. Fall is older than Autumn, that kind of thing.
I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me.

I'm calm. I'm calm. I promise.

this man's mind

@pwassonchat @cstross @mwl

I started learning English at 15. I ended up studying English first in college, later at Uni, where I got an MA in Linguistics and later a post-grad in PR and Effective Communication. I'm also autistic and, especially when copywriting, very detail-oriented.

Up until three years ago, I often received compliments for my writing. My uni essays from twenty years ago were packed with words and phrases that are now often flagged as AI.

In the past few years, I have been accused of using AI a few times. Apparently, writing well and knowing Oxford / AP punctuation rules are now considered a liability, not an asset.

I found myself actively dumbing down my writing a few times recently.

We created a system where sceptics dismiss genuine images, videos and articles as AI, while the gullible believe obvious fakes.

Carl Sagan was spot on with his predictions.

@cstross Oh, so that wasn't just me.

Between that and the crawler at the top I had to give up trying to read it. A shame, it seemed interesting.

@mwl

@solitha @mwl If you want to keep up with the sciences in future you're going to have to get used to Indian English, or even learn Mandarin.

@cstross @solitha

Indian English feels odd at first, but after a little practice it goes down easily. The more variants of a language you're familiar with, the more easily you add new ones.

@cstross @mwl Heh, well, guess I'm doomed to ignorance.

FWIW the writing itself was not an absolute block. The combo of crawler and writing (and maybe just being generally unfocused) all dragged me down.

But, um, Mandarin... I'll have to wait for the paid journos to bring those to light.

It's all just as well, really. Breakthroughs today are not likely to see general application within the years I have left.

@cstross Agreed that artificial consciousness might be possible from the bottom up, starting with agency and a complete model.

I don't believe for a picosecond that current LLMs (or other AI) are conscious.

@future_upbeat

I absolutely agree.

At best, what current LLMs are is evidence that linguistic processing follows statistically modelable rules.

@cstross @future_upbeat

And that a facility with language is sufficient to bamboozle most people into perceiving it as thinking.

In spite of a total lack of *any* world modeling or logical processing.

@cstross Does that make your work Science Fact-ion instead of Science Fiction?

@cstross
Also, since cryogenically freezing a brain destroys the structure of an already dead (basically deteriorated) brain, the folk paying for that are being scammed.

I agree it's nice info for SF world building.

Presumably they'd have to replace the blood of a living mouse with a special fluid to preserve the structure?

@raymaccarthy Yes on the blood-replacement, which implies—awkwardly, for the human uploading fans—that doing this to a human would lay the experimenters open to murder charges.
@cstross Wait- so... I should get my brain frozen until they perfect the slicing and uploading to silicon to live eternally
@Antiqueight Naah, the ice crystals forming in your synapses would mush them into un-digitizable soup.
@cstross @Antiqueight one please ☝️
@shovemedia @cstross @Antiqueight Back around 2008 people thought you could just saturate cells with trehalose to survive desiccation (by preserving the internal structure), but tardigrades use it in conjunction with special proteins: https://research.ucdavis.edu/research-inspired-by-water-bears-leads-to-innovations-in-medicine-food-preservation-and-blood-storage/
Research Inspired by ‘Water Bears’ Leads to Innovations in Medicine, Food Preservation and Blood Storage - Office of Research

One of the Crowes’ methods for freeze-drying liposomes (artificial “sacs” of phospholipid molecules that can deliver microscopic substances to cells) has had an enormous impact. The University of California licensed their patent, “Methods/Compositions for Preserving Liposomes,” to Vestar Research (later acquired by Gilead), and it has been used in AmBisome®, an injectable therapy used for treating a deadly systemic fungal infection that afflicts immune-depressed patients.

@cstross You can tell I've kept up with the technology - they haven't resolved that yet??!?
@cstross Also one step closer to proving that we're likely living in a simulation.

@cstross
Certainly a more promising avenue towards AGI than stochastic parrots.

But then again, what they're doing here is copying a fly brain into a silicon black box and seeing what it does. The research has nothing to do with improving upon fly intelligence and immanentising the Fly Nerd Rapture.

#ai #llm

@mrundkvist @cstross please do not give the flybros any ideas…
@cstross Let us know when/where the book is published. It sounds fascinating.

@cstross It reminds me of something I read about 30 years ago by some Linux journalist about modelling part of the digestive ganglion of a lobster.

I wonder what happened to that guy? Not seen him in the Linux world in years...

@cstross I’d have to read the paper, but fundamentally, that doesn’t sound very different to what you’d find in Rumelhart & McClelland (now celebrating its 40th birthday!)
If they now have a complete model, it can be tested to see where it’s reducible to a simpler but logically identical connectome, and probably more interestingly, where that is not possible; that may point to a minimum level of complexity to encode certain general functions.

@cstross
Welp. More evidence for the "we don't know when to stop" hypothesis. It may take a while but I find it very hard to imagine a good outcome from that research path for society. It even scares me when people say stuff like this is "cool" or "interesting". To me, it's like, yes of course it is theoretically possible therefore we should not be trying to do it!

Profoundly depressing, in all honesty. I cannot get excited about this stuff.

@cstross
In some ways researching this kind of thing represents a really bad inclination we have as a species. We are so clever we forget to be human. We forget to treat each other as living beings, because we get too caught up in the details. We invent super clever ways of surveilling each other and forget to be nice and caring to our neighbours. We research how our brains work so we can build robot humans at some future point, rather than enjoying the magic of being alive.
@cstross
The two ways of thinking are not compatible for me. I know not everyone thinks that way, but I just can't combine the two mindsets and the further we move down these paths the bigger the divide seems.

@cstross
But I suppose I'm talking about myself really. I don't mean that a scientist researching this stuff can't be kind. I mean that to me, going down the rabbit hole of the technical details of how a creature's mind works is not compatible with treating the creature as a being.

I rescue flies if they get stuck in water. I hate this research.

@krnlg I get what you're saying here, treating all creatures as the ends rather than the means.

But consider how happy you'd be in a world full of the suffering that we've learned how to prevent.

I don't like it, but I accept the trade-off within ethical guidelines.

@cstross

@solitha @cstross I don't expect ethical guidelines to do very much, I suppose. Not ultimately, anyway. You can only prevent so much suffering by curing illness - after all, we all die eventually. I reckon we could prevent more suffering by having a humane and warm attitude to each other and to other creatures. I do accept that research in general has given us many good things. But.. well I think there's a limit to the benefits of certain paths of research, simply due to how we operate as humans
@cstross Kick a neuron out of place in the "Brain Scanning Transfer" and your Elon Musk digital clone becomes somebody else. Which, I mean, it could get you a worse person, but not a lot worse.
@Illuminatus @cstross I enjoy the thought of Dilbert Stark submitting to brain uploading only to find that due to lack of chemical modelling, he can no longer get high.
@oddhack @cstross Even better, he finds the Basilisk was waiting to torture him personally all along.

@cstross

Someone commented that now we've uploaded a fly brain it can eat virtual shit long after the rest of us are a distant memory.