I have a new queen. Lasius brevicornis. The queen is about 9mm. Her nanitics are so tiny I'm going to cry. They are 1.5mm and transparent yellow.

They are smaller than the antennae on my Camponotus pennsylvanicus queen. Nanitics are the first workers in a colony and they are often smaller than the workers produced later when the colony is better established. But this is out of control. They are just so small.

They would make a fruit fly look like an elephant.

They still have six legs, tiny mandibles and tiny ant intentions and projects they are working on with their mother.

What do they have in their legs? One muscle fiber per joint?

They are so complex and tiny it's breaking my brain a little.

I don't understand why people aren't freaked out about this more often.

I've written a program to make the legs of 3D models of ants walk in a realistic way. You control the legs in two groups of three; the lower parts of the limbs can self-correct. Still, I'm certain the program is too long to fit in their little heads. But they are racing around protecting their mother like they are going to do something with mandibles thinner than one of my hairs.
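The control scheme described above (two alternating groups of three legs, with the lower joints doing their own local correction) is the classic insect tripod gait. Here's a minimal sketch of what such a controller might look like; all names, gains, and the sinusoidal hip motion are illustrative assumptions, not the author's actual code.

```python
# Hypothetical tripod-gait sketch: six legs in two groups of three,
# half a cycle out of phase, plus a local "self-correcting" lower joint.
import math

TRIPOD_A = ["L1", "R2", "L3"]  # front-left, mid-right, rear-left
TRIPOD_B = ["R1", "L2", "R3"]  # the opposite tripod

def leg_phase(t, period, in_tripod_a):
    """Phase in [0, 1); tripod B lags tripod A by half a period."""
    offset = 0.0 if in_tripod_a else 0.5
    return (t / period + offset) % 1.0

def hip_angle(phase, swing=0.5):
    """Upper joint: a simple sinusoidal swing over the gait cycle."""
    return swing * math.sin(2 * math.pi * phase)

def self_correct_knee(target_height, foot_height, gain=0.3):
    """Lower joint nudges the foot toward ground contact on its own,
    independent of the global gait clock."""
    return gain * (target_height - foot_height)

def step(t, period=1.0):
    """One controller tick: hip angles for all six legs at time t."""
    pose = {}
    for name in TRIPOD_A + TRIPOD_B:
        ph = leg_phase(t, period, name in TRIPOD_A)
        pose[name] = hip_angle(ph)
    return pose
```

At a quarter cycle, the A-tripod legs are fully forward while the B-tripod legs are fully back, which is the alternation that makes the gait look right.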

@futurebird

You can fit a VERY long program in a VERY little head. I think the huge mistake computer scientists make when thinking about modeling brains is to think of the neuron as analogous to a transistor. Every neuron is a whole-ass computer and can do fairly sophisticated computation; the best transistor analog, I believe, is a channel protein in the neuronal membrane. (A couple nanometers wide.)

@stevegis_ssg @futurebird

Yes. Even worse, axons do logic computations locally and independently.

A generic neocortex neuron has hundreds of inputs and thousands of outputs (synapses).

A cubic millimeter of mouse cortex has about 7 friggin' kilometers of axon.

And information may be stored in the connections, not as weights.

Bottom line: the "neural network" idea is dead wrong on a plethora of levels.

But don't blame comp. sci. folks. We're willing to learn. It's the AI bros who ain't.

@djl @stevegis_ssg @futurebird I agree that transistors arranged into boolean logic gates are unlikely to be analogous to most neural structures, but I am skeptical that ion channels in axons are. Most of them are voltage gated and open in sequence to propagate an action potential down a membrane. Ligand-gated ones near the cell bodies would make more sense to me, but stuff like antidromic conduction existing at all makes me skeptical that axons can compute anything like a CPU.

@Wharrrrrrgarbl @stevegis_ssg @futurebird

"But stuff like antidromic conduction existing at all makes me skeptical that axons can compute anything like a CPU."

Agreed: I don't think neurons do CPU-like things. But axons do compute _simple logic functions_ locally along the axon, and these sites can generate action potentials.

I'm concerned with older, established stuff that demonstrates problems with "neural network" models. Newer and/or speculative work isn't stuff I can comment on.

@djl @stevegis_ssg @futurebird I guess I object even to the use of the word "logical function" on a subcellular level - in the sense of "one input, one output", biochemical systems are so squishy and wet that the best you can do is "one input, more of one output than the others most of the time". Maybe it's a distinction without a difference depending on what scale you're looking at.

@Wharrrrrrgarbl @djl @futurebird

A LOT more computation happens in the dendrites, although axonal computation does exist. Logic gates mostly aren't "one input, one output" (AND and OR gates, e.g., take two inputs), and individual ion channels in the dendritic tree integrate inputs and calculate outputs that become inputs to subsequent channels, much the way logic gates do.
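The gate analogy above can be made concrete with a toy model: treat each channel as a threshold unit whose output feeds the next one downstream. This is an illustrative sketch only, not a biophysical model; the thresholds and the "branch then trunk" cascade are hypothetical.

```python
# Toy model of the channels-as-gates idea: a channel "opens" (1) when
# its summed input crosses a threshold, and its output feeds downstream
# channels, the way gate outputs feed gate inputs in a circuit.
def channel(inputs, threshold):
    """1 if the summed input reaches the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

def and_gate(a, b):
    # High threshold: needs both inputs active.
    return channel([a, b], threshold=2)

def or_gate(a, b):
    # Low threshold: either input suffices.
    return channel([a, b], threshold=1)

def branch_then_trunk(a, b, c, d):
    """Two 'branch' channels each require coincident input;
    the 'trunk' channel fires if either branch does."""
    return or_gate(and_gate(a, b), and_gate(c, d))
```

The point of the sketch is just that the same threshold mechanism gives you AND-like or OR-like behavior depending on where the threshold sits, so a cascade of channels can compose into multi-input logic.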