Donal Fellows

7 Followers
4 Following
35 Posts

Research Software Architect for Research Software Engineering at the University of Manchester.

Tcl programmer, but I work with many other technologies too. Lots of software automation engineering these days.

I want real AGI, not fake stuff powered by overtrained LLMs.

Best Language: Tcl
Best Project: SpiNNaker

Oh my aching heart…

#Tax #UK #TaxAvoidance

@felix They've got quite different value semantics. In Lisp, you can mutate objects, and anything else holding a reference to the object will see the mutations. In Tcl, values are immutable (unlike variables), which has some pretty deep ripple effects.
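
A minimal Tcl sketch of that immutability (a toy example; Tcl's copy-on-write makes this cheap):

    set a {1 2 3}
    set b $a         ;# b now holds the same value as a
    lappend b 4      ;# writing through b builds a new value, not a mutation
    puts $a          ;# => 1 2 3   (a is untouched)
    puts $b          ;# => 1 2 3 4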

Also, Tcl's a lot less worried about linguistic purity. Embedding another language (SQL, C, Fortran, etc.) inside Tcl is considered good practice.
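
For instance, a minimal sketch of embedded SQL, assuming the standard sqlite3 Tcl extension is available:

    package require sqlite3
    sqlite3 db :memory:                 ;# open an in-memory database
    db eval {CREATE TABLE t(x INTEGER)}
    db eval {INSERT INTO t VALUES (1), (2), (3)}
    db eval {SELECT x FROM t} row {
        puts $row(x)                    ;# SQL rows flow straight into Tcl code
    }
    db close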

@Setok It's missing the "FLASHING EXTRA BRIGHT, to let the Aliens from Vega know where to land, my god I can see my skeleton projected on the walls in ash" mode.

Lucky.

@Fishercat

I live in a country (Ro) that went from a horrific Stalinist dictatorship to being a self-titled 'original democracy' to being a full EU member state, all in only one generation.

And now I realize that the more democratic the country became, the safer protests became.

From machine-gun shootings, to isolated incidents, to it being safe to take a kid to a protest.

@Remittancegirl

@rayckeith Don't think of him as Batman. Think of him as more like Lex Luthor. He even looks the part.

@troed But it's all really complicated. Everything about the brain is dynamic and overlapping. We compute with time and timing patterns, not values, so that's another way in which LLMs aren't the real thing.
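
A toy sketch of what "computing with timing" means: a leaky integrate-and-fire neuron fires for two spikes arriving close together, but not for the same two spikes spread out (all the constants here are made up):

    proc fires {spikeTimes} {
        set v 0.0
        set last 0.0
        foreach t $spikeTimes {
            # Leak: the membrane potential decays between input spikes.
            set v [expr {$v * exp(-($t - $last) / 5.0)}]
            set v [expr {$v + 1.0}]       ;# each input spike adds a fixed kick
            set last $t
            if {$v >= 1.8} { return 1 }   ;# threshold crossed: the neuron fires
        }
        return 0
    }

    puts [fires {0 1}]    ;# => 1 (inputs close in time)
    puts [fires {0 20}]   ;# => 0 (same inputs, spread out)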

@troed It's not so simple. Much projection of the future is LLM-style, pattern-matched against experience. But that produces conflicts, between memories and with what is sensed. The critical part is the "decide which to believe" circuits, centred on cortical pyramidal cells (which have the right behaviour in their dendritic trees).

I think that process, running in a recurrent loop (Hofstadter-style), induces the sense of self.

@troed Not claiming it was the point. The real point is that LLMs are a partial model, and one of the things that's missing is the "decide which conflicting thing to choose" part. Currently that's bolted on from outside (sampling the next word from the model's output probability distribution), and that's an inferior technique.
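
Roughly, that bolted-on step is just weighted random choice over the model's output distribution. A toy Tcl sketch (the tokens and probabilities are made up):

    # Given a list of {token probability} pairs, draw one token at random.
    proc sampleToken {dist} {
        set r [expr {rand()}]
        set cum 0.0
        foreach pair $dist {
            lassign $pair token p
            set cum [expr {$cum + $p}]
            if {$r <= $cum} { return $token }
        }
        return [lindex [lindex $dist end] 0]
    }

    puts [sampleToken {{the 0.5} {a 0.3} {cat 0.2}}]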

I think that part's also crucial (in mammals at least) for developing a sense of self. No proof.

@troed @zerkman LLMs approximate (poorly, inefficiently, without live feedback) part of what the brain does, principally the bits that do associative recall. Much of the rest of the brain is doing signal processing and motor control (gotta control that meat body), but some is doing much higher-order processing, and LLMs don't do any of that.

They perform projections in a (very!) high-dimensional smooth differentiable space. For problems that such a space describes well, they work excellently. But when the problem doesn't fit that pattern, they're not so great. (The underlying math of hypersurfaces doesn't require differentiability, but the training algorithm guarantees it.)
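
That differentiability guarantee comes from training by gradient descent, which needs a gradient at every point. A one-variable toy version (the function and learning rate are made up for illustration):

    # Minimise f(x) = (x - 3)^2 by following its derivative, 2(x - 3).
    set x 0.0
    set rate 0.1
    for {set i 0} {$i < 50} {incr i} {
        set grad [expr {2.0 * ($x - 3.0)}]
        set x [expr {$x - $rate * $grad}]
    }
    puts "x converged to $x"   ;# => approximately 3.0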

Public LLMs have an awfully large training dataset. Much bigger than you think.