dorian f moore

@dorianfm
144 Followers
192 Following
1.5K Posts

Web Developer, Designer & Technologist
The Useful Arts https://theusefularts.org
The Wire https://thewire.co.uk

Not sure anyone should listen to or care about what i say or think.

Trying to be content, not create content.

#ListeningNotOpining

⭐️ means i empathise
retoot means i agree /acknowledge/appreciate

Based in #Edinburgh and #WestKilbride

Designer & Developer: https://www.theusefularts.org
Personal: https://dorian.fraser-moore.com
Work & Music: https://www.thewire.co.uk
You don’t need to understand the math behind large language models in order to get that they are not minds.
No longer interested in talking about open source unless the conversation starts with how to build an alternate open source community without "AI code assistant" users contributing to it

My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.

LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation that talks around the issue but never directly addresses the substance of the problem.

In any conversation I have with a person, I’m modeling their understanding of the topic at hand, trying to tailor my communication style to their needs. The same applies to programming languages and frameworks. If you work with a language the way its author intended, things go a lot easier.

But LLMs don’t have an understanding of the conversation. There is no intent. It’s just a most-likely-next-word generator on steroids. You’re trying to give directions to a lossily compressed copy of the entire works of human writing. There is no mind to model, and no predictability to the output.
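The “most-likely-next-word generator” can be sketched with a toy bigram model. Real LLMs use learned neural networks over subword tokens rather than word counts, but the generation loop has the same shape; the corpus and names here are purely illustrative:

```python
import collections

# Toy "most-likely-next-word" generator: count which word follows
# which in a corpus, then repeatedly emit the most frequent successor.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = collections.defaultdict(collections.Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(start, length):
    """Greedily append the most likely next word at each step."""
    out = [start]
    for _ in range(length):
        counts = successors.get(out[-1])
        if not counts:
            break  # dead end: this word never had a successor
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 4))
```

Nothing in the loop knows what any of the words mean; it only knows which word tended to come next.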

If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering. LLMs are the final act of the finance bros and capitalists wrestling modern technology away from the technically literate proletariat who built it.

I’ve read a bunch of posts in the last few weeks that say ‘Moore’s Law is over’, not as their key point but as an axiom from which they make further claims. The problem is: this isn’t really true. A bunch of things have changed since Moore’s paper, but the law still roughly holds.

Moore’s law claims that the number of transistors that you can put on a chip (implicitly, for a fixed cost: you could always put more transistors in a chip by paying more) doubles roughly every 18 months. This isn’t quite true anymore, but it was never precisely true and it remains a good rule of thumb. But a load of related things have changed.
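As a quick sketch of what that doubling rate compounds to (the starting count is a placeholder, not a historical figure):

```python
def transistors(start, months, doubling_period=18):
    """Transistor count after `months`, doubling every `doubling_period` months."""
    return start * 2 ** (months / doubling_period)

# Ten years is 120 / 18 ≈ 6.7 doublings, roughly a 100x increase.
print(f"10-year growth factor: {transistors(1, 120):.0f}x")
```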

First, a load of the free lunches were eaten. Moore’s paper was written in 1965. Even 20 years later, mainstream processors had limited arithmetic. The early RISC chips didn’t do (integer) divide, and sometimes not even multiply, in hardware because you could do these with a short sequence of add and shift operations in a loop (some CISC chips had instructions for these but implemented them in microcode). Once transistor costs dropped below a certain point, of course you would do them in hardware. Until the mid ‘90s, most consumer CPUs didn’t have floating-point hardware; they had to emulate floating-point arithmetic in software. Again, with more transistors, adding these things is a no-brainer: they make things faster by providing hardware for things that people were already doing.
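The add-and-shift trick mentioned above can be sketched in a few lines. This is the classic restoring shift-and-subtract division loop, not any particular chip’s software routine, and it uses only the operations those early chips did have in hardware:

```python
def soft_divide(n, d):
    """Unsigned integer division using only shifts, subtraction,
    and comparison -- no divide instruction needed."""
    if d == 0:
        raise ZeroDivisionError("division by zero")
    quotient = 0
    remainder = 0
    for bit in reversed(range(n.bit_length())):
        # Bring down the next bit of the dividend.
        remainder = (remainder << 1) | ((n >> bit) & 1)
        quotient <<= 1
        if remainder >= d:
            remainder -= d
            quotient |= 1
    return quotient, remainder

print(soft_divide(100, 7))  # (14, 2): 100 = 14 * 7 + 2
```

A hardware loop like this costs one iteration per bit of the dividend, which is exactly why dedicated divide hardware became attractive once transistors were cheap enough.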

This started to end in the late ‘90s. Superscalar out-of-order designs existed because just running a sequence of instructions faster was no longer something you got for free. Doubling the performance of something like an 8086 was easy: it wasn’t even able to execute one instruction per cycle, and a lot of things were multi-instruction sequences that could become single instructions if you had more transistors. Once you get above one instruction per cycle, with hardware integer multiply and divide and hardware floating point, doubling is much harder.

Next, around 2007, Dennard Scaling ended. Prior to this, smaller feature sizes also meant lower operating voltages, so power density stayed roughly constant and you got faster clocks in the same power budget. The 100 MHz Pentium shipped in 1994. The 1 GHz Pentium 3 in 2000. Three years after that, Intel shipped a 3.2 GHz Pentium 4, which was incredibly power hungry in comparison. Since then, we haven’t really seen an increase in clock speed.

Finally, and most important from a market perspective, demand slowed. The first computers I used were fun but you ran into hardware limitations all of the time. There was a period in the late ‘90s and early 2000s when every new generation of CPU meant you could do new things. These were things you already had requirements for, but the previous generation just wasn’t fast enough to manage. But the things people use computers for today are not that different from the things they did in 2010. Moore’s Law outpaced the growth in requirements. And the doubling in transistor count is predicated on having money from selling enough things in the previous generation. The profits from the 7 nm process funded 4 nm, which funds 2 nm, and so on.

The cost of developing new processes has also gone up, which requires more sales (or higher margins) to fund. And we’ve had that, but mostly driven by bubbles causing people to buy very expensive GPUs and similar. The rise of smartphones was a boon because it drove a load of demand: billions of smartphones now exist, and they have a shorter lifespan than desktops and laptops.

Somewhere, I have an issue of BYTE magazine about the new one micron process. It confidently predicted we’d hit physical limits within a decade. That was over 30 years ago. We will eventually hit physical limits, but I suspect that we’ll hit limits of demand being sufficient to pay for new scaling first.

The slowing demand is, I believe, a big part of the reason hyperscalers push AI: they are desperate for a workload that requires the cloud. Businesses’ compute requirements are growing by maybe 20% year on year (for successful, growing companies). Moore’s Law is increasing the supply per dollar by 100% every 18 months. A few iterations of that and outsourcing compute stops making sense, unless you can convince businesses that they have some new requirements that massively increase their demand.
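The arithmetic in that last paragraph, as a quick sketch (the 20% and 18-month figures are the post’s illustrative rates, not measured data):

```python
def relative_cost(years, demand_growth=0.20, doubling_months=18):
    """Cost of serving the workload relative to year 0, assuming
    demand compounds yearly while compute-per-dollar doubles
    every `doubling_months` months."""
    demand = (1 + demand_growth) ** years
    supply_per_dollar = 2 ** (years * 12 / doubling_months)
    return demand / supply_per_dollar

for y in (0, 3, 6):
    print(f"year {y}: relative cost {relative_cost(y):.2f}")
```

At those rates the same (growing) workload costs well under half as much to serve within three years, which is the squeeze on cloud revenue the post describes.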

As Rachel Reeves (finally) swings behind an explicit statement that Brexit has cost the UK around 8% of GDP, we might ask what that looks like:

It equals around £224bn a year (in 2025), which given the UK's tax burden of around 35% is £78bn in lost tax revenue a year.

So if you’re wondering why the public sector is under-funded while a continuing austerity logic is always in play, this lost tax income for the state is also contributing to budget shortfalls.

#Brexit #politics
h/t Observer

MASSIVE turnout. Well done #Edinburgh. We are the people.
The Metaverse.

Audre Lorde's "The master's tools will never dismantle the master's house." is not just a statement about a tool being tainted by its origin. It's about what kind of tool a "master" would create: Whips. Chains. Violent suppression.

That's the meaning: you cannot just take tools whose purpose and politics are dominance and violence and "make them liberatory". This goes deeper than "just" embedded politics or lofty talk about ethics; it comes down to what kinds of relations you believe do, should, and must not structure the world.

The Ends of AI · Sycophancy and psychosis. "AI under fascist capitalism remakes the entire world into Epstein’s island." @mel_hogan

👉🏻 https://disjunctionsmag.com/articles/ends-of-ai/

One of the ways LLMs are damaging society is by causing us to endlessly talk about LLMs when we could be doing something creative.