Bill Caputo

158 Followers
117 Following
260 Posts
http://agnomia.com http://williamcaputo.com
Extreme Programming OG, and otherwise human programmer. This account is interested in programming, philosophy, systems, orgs, puns and cute animals. Will freely mute/block/unfollow as needed to focus my timeline on these topics.

what we thought we knew about autism and... whatnot

'For decades, researchers had been measuring the wrong thing. Conflating communication style differences with empathy deficits produced dramatically inflated effect sizes and an illusion of empathy impairment'

#autism #actuallyAutistic #science #psychology

https://www.psychologytoday.com/us/blog/positively-different/202601/what-the-world-got-wrong-about-autistic-people

What the World Got Wrong About Autistic People

For decades, autism research compared autistic people to animals, denied them moral sensitivity, and assumed autistic traits made them miserable. All wrong.

Psychology Today
Gas not found

I’ve read a bunch of posts in the last few weeks that say ‘Moore’s Law is over’, not as their key point but as an axiom from which they make further claims. The problem is: this isn’t really true. A bunch of things have changed since Moore’s paper, but the law still roughly holds.

Moore’s law claims that the number of transistors that you can put on a chip (implicitly, for a fixed cost: you could always put more transistors in a chip by paying more) doubles roughly every 18 months. This isn’t quite true anymore, but it was never precisely true and it remains a good rule of thumb. But a load of related things have changed.
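As a back-of-the-envelope illustration of what that doubling rate compounds to (the starting count and time span below are made up for illustration, not figures from Moore's paper):

```python
# Rough compounding of "transistor count doubles roughly every 18 months".
# The starting count and time span are illustrative assumptions only.
def transistors(start_count, years, doubling_period_years=1.5):
    return start_count * 2 ** (years / doubling_period_years)

# Starting from a (made-up) 1 million transistors, 15 years is ten doublings:
print(f"{transistors(1_000_000, 15):,.0f}")  # 1,024,000,000
```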

First, a load of the free lunches were eaten. Moore’s paper was written in 1965. Even 20 years later, mainstream processors had limited arithmetic hardware. The early RISC chips didn’t do (integer) divide (sometimes not even multiply) in hardware because you could do these with a short sequence of add and shift operations in a loop (some CISC chips had instructions for these but implemented them in microcode). Once transistor costs dropped below a certain point, of course you would do them in hardware. Until the mid ‘90s, most consumer CPUs didn’t have floating-point hardware. They had to emulate floating-point arithmetic in software. Again, with more transistors, adding these things is a no-brainer: they make things faster because they are providing hardware for things that people were already doing.
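For a sense of what that kind of divide loop looks like, here is a minimal sketch of a restoring shift-and-subtract division, written in Python for readability. Real software divide routines were hand-written assembly that also handled signs, overflow, and division by zero; this is just the shape of the idea:

```python
def udiv(dividend, divisor, width=32):
    """Unsigned division using only shifts, subtraction and comparison,
    roughly what a software divide routine does when there is no
    hardware divide instruction. Assumes divisor != 0."""
    quotient = 0
    remainder = 0
    for bit in range(width - 1, -1, -1):
        # Shift the next dividend bit into the running remainder.
        remainder = (remainder << 1) | ((dividend >> bit) & 1)
        if remainder >= divisor:
            remainder -= divisor
            quotient |= 1 << bit
    return quotient, remainder

print(udiv(1000, 7))  # (142, 6)
```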

This started to end in the late ‘90s. Superscalar out-of-order designs existed because just running a sequence of instructions faster was no longer something you got for free. Doubling the performance of something like an 8086 was easy: it wasn’t even able to execute one instruction per cycle, and a lot of things were multi-instruction sequences that could become single instructions if you had more transistors. Once you get above one instruction per cycle, with hardware integer multiply and divide and hardware floating point, doubling is much harder.

Next, around 2007, Dennard Scaling ended. Prior to this, smaller feature sizes meant lower voltage and lower power per transistor, which meant that you got faster clocks in the same power budget. The 100 MHz Pentium shipped in 1994. The 1 GHz Pentium 3 in 2000. A few years after that, Intel shipped a 3.2 GHz Pentium 4, which was incredibly power hungry in comparison. Since then, we haven’t really seen an increase in clock speed.
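As a rough illustration of why voltage scaling mattered: switching power goes roughly as capacitance times voltage squared times frequency, so as long as shrinking transistors let the voltage drop, the clock could rise inside the same power budget. The numbers below are made up purely to show the shape of the relationship, not measurements of any real chip:

```python
def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    # Classic approximation for switching power: P ≈ C * V^2 * f.
    return capacitance_f * voltage_v ** 2 * frequency_hz

# Made-up numbers: halving the supply voltage lets you quadruple the clock
# inside the same power budget. Once voltage stops scaling, frequency can
# only grow as fast as the power budget does.
print(dynamic_power(1e-9, 3.3, 100e6))   # ~1.09 W at 100 MHz
print(dynamic_power(1e-9, 1.65, 400e6))  # ~1.09 W at 400 MHz
```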

Finally, and most importantly from a market perspective, demand slowed. The first computers I used were fun, but you ran into hardware limitations all of the time. There was a period in the late ‘90s and early 2000s when every new generation of CPU meant you could do new things. These were things you already had requirements for, but the previous generation just wasn’t fast enough to manage. But the things people use computers for today are not that different from the things they did in 2010. Moore’s Law outpaced the growth in requirements. And the doubling in transistor count is predicated on having money from selling enough things in the previous generation. The profits from the 7 nm process funded 4 nm, which is funding 2 nm, and so on.

The costs of developing new processes have also gone up, which requires more sales (or higher margins) to fund them. And we’ve had that, but mostly driven by bubbles causing people to buy very expensive GPUs and similar. The rise of smartphones was a boon because it drove a load of demand: billions of smartphones now exist, and they have a shorter lifespan than desktops and laptops.

Somewhere, I have an issue of BYTE magazine about the new one micron process. It confidently predicted we’d hit physical limits within a decade. That was over 30 years ago. We will eventually hit physical limits, but I suspect that we’ll hit limits of demand being sufficient to pay for new scaling first.

The slowing demand is, I believe, a big part of the reason hyperscalers push AI: they are desperate for a workload that requires the cloud. Businesses’ compute requirements are growing maybe 20% year on year (for successful, growing companies). Moore’s law is increasing the supply per dollar by 100% every 18 months. A few iterations of that and outsourcing compute stops making sense, unless you can convince those businesses that they have some new requirements that massively increase their demand.
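To make the arithmetic concrete, here is the compounding that argument rests on. The 20% demand growth and the 18-month doubling are the figures from the paragraph above; everything else is illustrative:

```python
# Demand growing 20% per year versus supply per dollar doubling every
# 18 months (about 59% per year). Both start from the same baseline.
demand = supply_per_dollar = 1.0
for year in range(1, 7):
    demand *= 1.20
    supply_per_dollar *= 2 ** (1 / 1.5)
    print(f"year {year}: demand x{demand:.2f}, supply/dollar x{supply_per_dollar:.2f}")
# After a few years the same budget buys far more compute than the
# workload has grown, which undercuts the case for renting it.
```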

@regehr @SnoopJ @dev Clockwork Angels is a masterpiece.
How is AI affecting enrollments in college computer science programs? Lots more enrollments from people wanting to work with/on/for AI technology? Lots fewer because people think CS skills are now irrelevant or will be by the time they graduate? Something else?
I want community culture now. People sharing what they love with one another without the government or some algorithm tracing it all. I want my friends to recommend new artists whose albums I should buy and share. Recommend old TV shows to download and watch. Stuff like that.
I'm looking for someone to create some svg pictures for me. Where on the internet could I go to look at portfolios of vector art illustrators? (Actual illustrators, not people who prompt computers to generate stuff.)

My latest Thoughts On: the state of the tech industry today, what I'm betting it will look like after the AI bubble implodes, and how a 20-year-old blog article shows it's all my fault: https://agnomia.com/blog/software-as-prosthesis-revisited.html

#blog #ai #softwareengineering #philosophy

Thoughts On: Software as Prosthesis, Revisited - Agnomia

TIL: @kagihq speaks "LinkedIn"
@dev to a Branch if not equal?