2026 is officially the year the "Black Box" opened. 🔓 From gigawatt-scale AI data centers to 30% of code being written by agents like Claude Code, the frontier has moved. We aren't just using tech; we're living in its architecture. Which one scares you most? 🤖⚡️ #AI2026 #TechTrends #FutureShock

@screwlisp One really interesting, and counterintuitive, thing about neural networks is that a lot of the decisions that can seem important to an engineer from the Before side don't actually appear to matter much; they can safely be made in a number of somewhat different ways, and the network will still work pretty much the same. (Obviously, it has to be trained for its particular architecture, but it can be trained on the same data, and it will behave largely the same way.)

This weird phenomenon is one of the reasons why many people suspect that the things we particularly associate with human brains, most significantly subjective consciousness, might be able to emerge in networks of rather variable architectures, provided that their elements have certain foundational properties and that the networks are large enough.

We don't quite know what the critical properties are until we get there, though. My hunch is that the artificial neurons we currently have might be sufficient, but we're probably at least six orders of magnitude of computational capacity away from it being feasible to emulate a primate-like CNS. We might need fewer neurons if we made them more complicated, or possibly if we figured out the how and why of neuronal migration in vertebrate brains.
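
That "orders of magnitude" hunch can be put as a back-of-envelope sketch. Every figure below is an order-of-magnitude assumption for the sake of the argument, not a measurement:

```python
# Rough, illustrative estimate of the compute needed to emulate a
# primate-like CNS, versus one current accelerator. All figures are
# order-of-magnitude assumptions.

neurons = 9e10            # ~human CNS neuron count, order of magnitude
synapses_per_neuron = 1e4
updates_per_sec = 1e2     # assumed effective update rate per synapse
ops_per_update = 1e3      # assumed FLOP cost of one detailed synaptic update

brain_flops = neurons * synapses_per_neuron * updates_per_sec * ops_per_update

gpu_flops = 1e15          # one current high-end accelerator, order of magnitude

gap = brain_flops / gpu_flops
print(f"brain estimate:     {brain_flops:.0e} FLOP/s")
print(f"single accelerator: {gpu_flops:.0e} FLOP/s")
# With these assumptions the gap is ~1e5; make the per-synapse model a bit
# richer and you easily land in the 1e6 range the hunch suggests.
print(f"gap: ~{gap:.0e}x")
```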

OTOH, there are some very interesting kinds of non-vertebrate brain architectures in nature, architectures that are much more efficient in their use of neurons. My favourite example is jumping spiders. For some species, it can be experimentally shown that they process input comprising millions of bits, and solve complex problems as ethologists understand the concept, in brains of only a couple of tens of thousands of neurons. A couple of species have brains of fewer than ten thousand neurons, and still exhibit complex behaviours.
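
To put those numbers side by side (the input size and neuron counts below are just the rough figures from this paragraph, with a mouse thrown in for contrast):

```python
# Crude information-per-neuron comparison. All counts are the rough
# figures quoted above, treated as assumptions rather than measurements.

input_bits = 2e6        # "millions of bits" of visual input
spider_neurons = 3e4    # "a couple of tens of thousands"
mouse_neurons = 7e7     # mouse brain, order of magnitude, for contrast

print(f"jumping spider: ~{input_bits / spider_neurons:.0f} input bits per neuron")
print(f"mouse:          ~{input_bits / mouse_neurons:.2f} input bits per neuron")
```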

It is not yet known how they do that, but it seems likely that mammalian brains cannot do what jumping spiders do with the same neuron count. In part, well, because scientists can actually grow slices of rat brain on silicon, and we have some sense of the complexity-of-behaviour density those slices can reach. The critical difference is not necessarily in the architecture of individual neurons, though; it is possible that jumping spider brains have more detailed genetic architectures, whereas mammalian brains have been optimised for generality, with relatively few genetically built-in specific patterns. That high degree of flexibility is likely rather wasteful; we only have it because the dinosaurs without it used to die of #FutureShock when the world started to change relatively rapidly.

We understand some basics of how genes encode and implement the general body plans of creatures. The best-understood part of this is the Hox, or homeobox, gene network; it exists in pretty much all Terran creatures with bilateral body symmetry in at least some part of their life cycle (there are some creatures that are only temporarily bilateral), and the fundamentals are very highly conserved. Somewhat simplified: along the longitudinal body axis, the body plan develops as a sort of chemical interference pattern, with the genes that build individual organs activating, to a first approximation, on the basis of very specific ratios of growth factor protein levels.
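
The classic toy version of that "genes activate at specific growth factor levels" idea is Wolpert's French flag model. A minimal sketch, where the gradient shape, gene names, and thresholds are all arbitrary assumptions, shows how a single chemical gradient along an axis carves it into discrete organ-like zones:

```python
import math

def morphogen(x, decay=3.0):
    """Concentration of a growth factor along the body axis x in [0, 1],
    assumed to decay exponentially from the head end."""
    return math.exp(-decay * x)

def active_genes(conc, thresholds={"gene_A": 0.6, "gene_B": 0.25}):
    """A gene switches on wherever the concentration exceeds its threshold."""
    return [g for g, t in thresholds.items() if conc >= t]

# Sample positions along the axis and see which genes are on where;
# three distinct zones emerge from one smooth gradient.
for i in range(10):
    x = i / 9
    genes = active_genes(morphogen(x)) or ["(default fate)"]
    print(f"x={x:.2f}: {', '.join(genes)}")
```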

It seems likely that some basic brain structures are encoded in somewhat similar ways. We do see the Hox genes' involvement in the development of the neural tube, but scientists' current understanding of how this affects different brain architectures is fairly limited. #MoreResearchIsNeeded.

Some other interesting invertebrates with much-more-efficient-than-mammalian brains are some molluscs, particularly octopuses, as well as praying mantises.

As other vertebrates go, birds have brains very different from mammals': of (very roughly) comparable neuron counts, but synaptically significantly denser, and organised so differently that only a couple of decades ago some neurologists would, with straight faces, argue that birds can't think since they don't have neocortices. Well, it turns out some birds manage to think well enough without a neocortex, and believing that one is required for thinking is effectively an exercise in mammalian chauvinism. But we understand avian intelligence even worse than we understand mammalian intelligence, and mammalian intelligence we understand very poorly to start with.

On the third hand, the human way of growing brains that can do language appears to boil down to a very small number of specific 'root' gene alleles. Of the known ones, FOXP2 is the most likely to be involved; the most likely to distinguish human speech from other apes' linguistic ability. Knocking it out in humans is associated with specific cognitive and linguistic deficits; transgenic mice with the human FOXP2 variant become very 'chatty' (but, well, we can't yet tell if there's meaning in their chatter). We don't know what a transgenic chimpanzee with human FOXP2 might sound like; scientists could arrange one, but ethicists are concerned as to whether it should be done.

A catch is, the FOXP2 protein is not anything directly structural; it's a transcription factor. It up- and down-regulates dozens, perhaps hundreds, of other genes' expression. It's probably involved in representing detailed brain structure through some combination of chemical interference patterns that we can't yet interpret.
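
The "transcription factor, not structural protein" distinction can be sketched as a toy regulatory network. The gene names and multipliers below are made up for illustration; this is emphatically not the real FOXP2 target list:

```python
# Toy gene regulatory network: a single transcription factor (TF) builds
# nothing itself, but shifts the expression of downstream genes.
# All names and numbers are illustrative assumptions.

baseline = {"axon_guidance": 1.0, "synapse_builder": 1.0, "cartilage": 1.0}

# Up/down-regulation multipliers applied to targets when the TF is present;
# genes outside the target list are untouched.
tf_effects = {"axon_guidance": 2.5, "synapse_builder": 0.4}

def expression(tf_present):
    """Expression levels of every gene, with or without the TF."""
    levels = dict(baseline)
    if tf_present:
        for gene, factor in tf_effects.items():
            levels[gene] *= factor
    return levels

print("TF knocked out:", expression(False))
print("TF present:   ", expression(True))
```

Knocking out the one regulator changes the whole downstream expression profile at once, which is why a single allele can have such disproportionate effects.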

But the potential fact that a relatively small change to a high-level control gene might be able to turn complex speech capability on and off tantalisingly suggests that understanding how this works might allow ANNs to do 'true' speech, not the stochastic parroting that LLMs do.

On the fourth hand, maybe we're understanding it wrong, and what human FOXP2 does is structurally what LLMs do, and the problem of LLM parroting is just that LLMs are missing other crucial parts of brains needed for cognition. Maybe LLMs would be smarter if they had neocortices? Pity that nobody knows how to build one.

Hallucinating up things that should come from parts of the brain that are missing, unavailable, or knocked out is a known phenomenon in biological brains, after all. Based on what we know, this is likely one of those emergent phenomena of Sufficiently Complex Neural Networks that LLMs and biological brains exhibit in a relatively similar way. In clinical neurology, it's called 'confabulation'; one of the most striking examples is Anton–Babinski syndrome, in which a person is blind because of brain damage, but the damaged visual cortex interface confabulates up enough fake visual input that the patient adamantly and genuinely believes that they can see, even though they can't. (Confusingly, because doctors don't think like engineers, the syndrome can also cover situations in which a patient does not necessarily feel they can see, but argues it anyway, as long as they seem to believe their confabulated reasoning for why they can see even though they keep failing vision tests.) The full syndrome, in either of its two main 'pure' presentations, is statistically rare, but it is a curiously recurring condition associated with focal damage to specific parts of the visual cortex, and possibly subcortical layers. (Doctors have mapped out the specific regions whose damage can cause it, but because it's a rare condition, we don't know much about the specific kind of variance that differentiates between Anton–Babinski syndrome and the kind of vision loss that a patient can clearly perceive.)
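
This "confabulatory expansion of patterns" has a classic minimal model in attractor networks: given a partial or damaged cue, the network confidently fills in the rest of a stored pattern. A tiny Hopfield-style sketch (the stored pattern and the damage are arbitrary; this is an analogy for pattern completion, not a model of any clinical syndrome):

```python
# One stored pattern of +/-1 "pixels".
stored = [1, 1, -1, -1, 1, -1, 1, -1]
n = len(stored)

# Hebbian weights: w[i][j] = s_i * s_j, with no self-connections.
W = [[stored[i] * stored[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

# A damaged cue: two entries flipped, as if that part of the input were gone.
cue = stored[:]
cue[2] = -cue[2]
cue[5] = -cue[5]

def sign(x):
    return 1 if x >= 0 else -1

# One synchronous update: each unit takes the sign of its weighted input.
recalled = [sign(sum(W[i][j] * cue[j] for j in range(n))) for i in range(n)]

print("cue:     ", cue)
print("recalled:", recalled)
print("matches stored:", recalled == stored)
```

The network has no idea which entries were damaged; it simply settles into the nearest stored pattern and reports the filled-in result with full confidence, which is the unsettling part of the analogy.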

A well-known example of the brain confabulating up visual input is the invisibility of the blind spot (the punctum caecum). In every eyed vertebrate's visual field, not at the very centre but usually close to it, there is a region where the optic nerve attaches, and it shadows a substantial part of the field of vision. Yet virtually all sighted humans' brains are inherently configured not to see that hole in the field of vision, and to just Make Something Up(tm) when trying to peek into that part; the mechanism at work is the very same confabulatory expansion of patterns. We don't quite know for sure, but based on what we do know, this phenomenon is likely universal among vertebrates with eyes.
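
The same fill-in can be sketched as trivial interpolation over a masked region of a one-dimensional "retina". The scene and the gap position are made up, and linear interpolation is a deliberately crude stand-in for what the visual system actually does:

```python
# A 1-D "visual field" with a hole where the optic disc blocks input.
# None marks positions with no receptor data.

field = [0.2, 0.3, 0.4, None, None, None, 0.7, 0.8]

def fill_blind_spot(samples):
    """Fill each interior run of None by interpolating from the gap's edges."""
    filled = samples[:]
    i = 0
    while i < len(filled):
        if filled[i] is None:
            start = i
            while i < len(filled) and filled[i] is None:
                i += 1
            left, right = filled[start - 1], filled[i]
            gap = i - start
            for k in range(gap):
                filled[start + k] = left + (right - left) * (k + 1) / (gap + 1)
        i += 1
    return filled

print(fill_blind_spot(field))
```

The filled values are pure invention, smoothly consistent with the surroundings, which is precisely why the hole is impossible to notice from the inside.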

The SCARIEST chart in AI
Visualizing the exponential.

This data-driven look at compute-to-intelligence ratios suggests we are entering a phase of self-improving code that exceeds all previous human projections for 2030.

#AIData #TechStatistics #FutureShock #AIResearch #TechNews #ExponentialGrowth

https://www.technology-news-channel.com/the-scariest-chart-in-ai/


Typical spot the differences in a future where nobody cares about censoring #nudity/#genitals...

"This man has a #penis in this one, but in the other one he has a #Vulva."

#futurism #FutureShock

@future_shock.ai:
The Pentagon blacklisted Anthropic for having safety guardrails. Then OpenAI got the same deal with the same guardrails. Vonnegut would have drawn a butthole about it. We wrote about Murderbot instead.

https://news.future-shock.ai/murderbots-and-mass-surveillance/

#AI #AISafety #FutureShock

Murderbots and Mass Surveillance

When the Pentagon blacklists an AI company for having safety guardrails, science fiction stops being fiction. Martha Wells, Arthur C. Clarke, and Alastair Reynolds saw this coming.

Future Shock Newsletter
"Rockit" is a composition recorded by American jazz pianist #HerbieHancock #and produced by #BillLaswell and #MichaelBeinhorn. Hancock released it as a #single from his twenty-ninth album, #FutureShock (1983). The selection was composed by Hancock, Laswell, and Beinhorn. The track was driven by its deejay #scratch style, performed primarily by #DXT, and its music video created by #GodleyAndCreme, featuring the robotic art of #JimWhiting.
https://www.youtube.com/watch?v=GHhD4PD75zY

“Every generation before us was also convinced they were living at the end.
“When books arrived.
“When electricity arrived.
“When the internet arrived.
“Each time, something did end.
“But the world didn’t.
“It changed.
“And then, inconveniently, it carried on.”

https://www.pootlepress.com/2026/02/apocaloptimist/ #ApocalypsePlease #AgeOfAnxiety #ApocalypticThinking #FutureShock

APOCALOPTIMIST | Pootlepress

Or: Why Humans Are Absolutely Certain Everything Is About to End (Again) Human beings have many wonderful qualities. We invented sandwiches. We domesticated dogs. We created the ability to watch a man eat 14 cheeseburgers on YouTube while we ourselves eat a salad and feel morally superior. But perhaps our greatest achievement is this: We […]

Part 6 is here, bringing you the first crisis on board the starship Symbiosis. And believe me, it won’t be the last . . . 🥸👽🤖🌚

https://jetse.substack.com/p/the-three-reflectors-of-consensual-506

#sciencefiction #Novel #spaceopera #consciousness #futureshock

10 Shocking AI Predictions for 2026
From the collapse of traditional social media to AI-governed cities—these ten predictions for 2026 are setting the internet on fire.

#AI #tech #2026 #TechTrends #FutureShock #News

https://www.technology-news-channel.com/10-shocking-ai-predictions-for-2026-that-break-the-internet/

🧠🌏🧬 In a world jittering with heat waves and shrinking futures, legacy gets debugged. One father seeks immortality in grandchildren. A Tokyo anti-natalist sees duty in refusal. Between extinction math and family longing, survival becomes a design problem. What persists is not blood, but consequence. #FutureShock https://www.sapiens.org/culture/japan-reproduction-death-anti-natalism-movement/