I have a new queen. Lasius brevicornis. The queen is about 9mm. Her nanitics are so tiny I'm going to cry. They are 1.5mm and transparent yellow.

They are smaller than the antennae on my Camponotus pennsylvanicus queen. Nanitics are the first workers in a colony and they are often smaller than the workers produced later when the colony is better established. But this is out of control. They are just so small.

They would make a fruit fly look like an elephant.

They still have six legs, tiny mandibles and tiny ant intentions and projects they are working on with their mother.

What do they have in their legs? One muscle fiber per joint?

They are so complex and tiny it's breaking my brain a little.

I don't understand why people aren't freaked out about this more often.

I've written a program to make the legs of 3D models of ants walk in a realistic way. You control the legs in two groups of three, and the lower parts of the limbs can self-correct. Still, I'm certain the program is too long to fit in their little heads. But they are racing around protecting their mother like they are going to do something with mandibles thinner than one of my hairs.
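For anyone curious what the two-group scheme looks like, here is a minimal sketch. The leg names, the tripod split, and the "self-correcting" foot step are all my own illustrative choices, not the actual program described above:

```python
# Toy sketch of an alternating-tripod leg controller: six legs in two
# groups of three, swapping swing/stance roles each step, with a distal
# "self-correction" handled locally, below the gait layer.

LEGS = ["L1", "R1", "L2", "R2", "L3", "R3"]  # left/right, front to rear

# The classic tripod split: each group forms a stable triangle.
TRIPOD_A = {"L1", "R2", "L3"}
TRIPOD_B = {"R1", "L2", "R3"}

def gait_phase(step):
    """Return (swing group, stance group) for this step. The swing group
    lifts and moves forward; the stance group stays planted and pushes."""
    return (TRIPOD_A, TRIPOD_B) if step % 2 == 0 else (TRIPOD_B, TRIPOD_A)

def correct_foot(target_height, terrain_height):
    """Stand-in for the lower limb self-correcting: the distal joint
    adjusts foot placement locally, without the gait layer knowing."""
    return max(target_height, terrain_height)

for step in range(4):
    swing, stance = gait_phase(step)
    print(f"step {step}: swing {sorted(swing)}, stance {sorted(stance)}")
```

Even this bare-bones version needs explicit state and sequencing; an ant gets the equivalent behavior, plus terrain handling, in a brain smaller than the period at the end of this sentence.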

@futurebird Oh, this reminds me of Braitenberg vehicles.

Maybe some analog trickery?

https://en.wikipedia.org/wiki/Braitenberg_vehicle


@futurebird One thing I've learned from playing around with digital electronics (at a very limited scale) is quite how much you can do with a very few transistors. I'm pretty sure that the same thing applies to neuronal(*) networks.

Don't think of it as code, think of it as hardware, and it becomes much easier to see how it could be feasible on the limited hardware available.

(*) i.e. actual biological ones

@futurebird

You can fit a VERY long program in a VERY little head. I think the huge mistake computer scientists make when thinking about modeling brains is to think of the neuron as analogous to a transistor. Every neuron is a whole-ass computer and can do fairly sophisticated computation; the best transistor analog, I believe, is a channel protein in the neuronal membrane. (A couple nanometers wide.)
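One way to make the "every neuron is a whole computer" point concrete: some models in neuroscience approximate a single neuron as a small two-layer network, with each dendritic branch integrating its own synapses through its own nonlinearity before the soma combines the branch outputs. Here is an illustrative sketch (all weights and thresholds are invented) showing such a "neuron" computing XOR, which a single weighted-sum unit famously cannot do:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dendritic_branch(inputs, weights, threshold):
    """Each branch integrates its own synapses and applies its own
    nonlinearity -- a little computation before the soma ever sees it."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) - threshold)

def neuron(branch_inputs, branch_weights, soma_threshold):
    """Soma sums the branch outputs and fires past a threshold."""
    branch_outs = [dendritic_branch(i, w, threshold=0.5)
                   for i, w in zip(branch_inputs, branch_weights)]
    return 1 if sum(branch_outs) > soma_threshold else 0

# Two branches with opposing weights: one "neuron" computes XOR.
for x1, x2 in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    out = neuron([[x1, x2], [x1, x2]], [[2, -2], [-2, 2]],
                 soma_threshold=0.85)
    print(x1, x2, "->", out)
```

If one cell can do that, "number of neurons" badly undercounts the computation available to an ant.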

@stevegis_ssg @futurebird

Yes. Even worse, axons do logic computations locally and independently.

A generic neocortex neuron has hundreds of inputs and thousands of outputs (synapses).

A cubic millimeter of mouse cortex has about 7 friggin' kilometers of axon.

And information may be stored in the connections, not as weights.

Bottom line: the "neural network" idea is dead wrong on a plethora of levels.

But don't blame comp. sci. folks. We're willing to learn. It's the AI bros who ain't.

@djl @stevegis_ssg @futurebird FutureBird and Steve may not be as interested in this (if so, apologies) but, @djl , do you know about Liquid AI?

Came out of Daniela Rus’ robotics lab. They started thinking about the C. elegans nervous system (which has been completely mapped) of 302 neurons. Some differential equations later, they had a new neural architecture for their robots, with a quadcopter that could follow someone around campus using mere tens of neurons, dealing with out-of-training situations.

This looks like the paper, which is probably googleable:

Robust flight navigation out of distribution with liquid neural networks
Makram Chahine, Ramin Hasani, Patrick Kao, Aaron Ray, Ryan Shubert, Mathias Lechner, Alexander Amini, Daniela Rus

(I think Hasani and Lechner are the best names for a google scholar search.)

(I haven’t read the paper, only heard Rus describe her students’ work.)

@lain_7 @stevegis_ssg @futurebird

My interests lie in figuring out how to do symbolic computation right. And having fun throwing stink bombs at folks who obviously have it wrong. (So I read a bit from the neuroanatomy folks.)

That work was also in an April 2023 Science (so I missed it, oops), and it looks like there was/will be a Robotics special issue of Science some time this month. The mail's being slow, so I haven't seen it yet.

@djl @stevegis_ssg @futurebird Not to mention that it's looking more and more like ignoring astrocytes and microglia in favor of neurons misses out on pretty much all the important details of how learning happens *and* what moods are.

I saw some recent research with mice suggesting that astrocytes might release adenosine as a neuromodulator to trigger boredom/futility, i.e. "change your strategy, you're wasting your time". AIUI it was already known that adenosine buildup seems to correlate with sleepiness, but apparently it also has a more short-term significance.

@djl @stevegis_ssg @futurebird I agree that transistors arranged into Boolean logic gates are unlikely to be analogous to most neural structures, but I am skeptical that ion channels in axons are. Most of them are voltage-gated and open in sequence to propagate an action potential down a membrane. Ligand-gated ones near the cell bodies would make more sense to me, but stuff like antidromic conduction existing at all makes me skeptical that axons can compute anything like a CPU.

@Wharrrrrrgarbl @stevegis_ssg @futurebird

"But stuff like antidromic conduction existing at all makes me skeptical that axons can compute anything like a CPU."

Agreed: I don't think neurons do CPU like things. But axons do compute _simple logic functions_ locally along the axons, and these places can generate action potentials.

I'm concerned with older, established stuff that demonstrates problems with "neural network" models. Newer and/or speculative work isn't stuff I can comment on.

@djl @stevegis_ssg @futurebird I guess I object even to the use of the word "logical function" on a subcellular level - in the sense of "one input, one output", biochemical systems are so squishy and wet that the best you can do is "one input, more of one output than the others most of the time". Maybe it's a distinction without a difference depending on what scale you're looking at.

@Wharrrrrrgarbl @djl @futurebird

A LOT more computation happens in the dendrites, although axonic computation does exist. Logic gates mostly aren't "one input, one output" (AND and OR gates, e.g.), and individual ion channels in the dendritic tree integrate inputs and calculate an output that becomes input to subsequent channels, in much the way that logic gates do.
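To spell out the gate analogy in the post above (a deliberately crude sketch, not a biophysical model): a single threshold unit acts as AND or OR depending only on where its threshold sits, and units chain so each stage's output becomes the next stage's input, the way channel and branch outputs cascade through the dendritic tree:

```python
def unit(inputs, threshold):
    """Fires (1) when the summed input crosses the threshold."""
    return 1 if sum(inputs) >= threshold else 0

def AND(a, b):
    return unit([a, b], threshold=2)  # needs both inputs active

def OR(a, b):
    return unit([a, b], threshold=1)  # needs either input active

# Chaining: (a AND b) OR c -- one stage's output feeds the next.
print(OR(AND(1, 1), 0))
```

Whether real channels are as clean as this is exactly the "squishy and wet" objection upthread; the sketch only shows that multi-input thresholding is already gate-like.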