ThatIsNotHowThatWorks

5 Followers
37 Following
164 Posts
I'm a scientist living in Germany doing X-ray spectroscopy. (He/him, Dr. rer. nat.)
metaverse wasn't inevitable
ar glasses weren't inevitable
blockchain wasn't inevitable
nfts weren't inevitable
ai isn't inevitable
nothing is inevitable!!!!
the ultimate hidden truth of the world is that it is something that we make
and could just as easily make differently
make choices
make choices
make choices
The Metaverse.

This isn't just about AI, and I hope my friends who run their own experiments with local LLMs and who use chatbots as a sounding board rather than as an obsequious servant or unpaid robotic coding intern understand I'm not talking about them.

We (the US) have a president who wants all the respect of the position without doing the work of being a national executive or showing competence and vision in leadership, a Secretary of Defense trolled into a half-assed, doomed failure of an intervention, an HHS chief and all his giblet-brained underlings who fancy themselves health professionals armed with homeopathic levels of ability and overinflated delusions of adequacy. Don't forget Brilliant Auto Business Genius, whose flagship project is low-poly rejected concept art for 1997's "Carmageddon" with sales worse than the Edsel. Every C-suite malingerer whose primary competencies are being tall, white, male, and credulously overconfident, who wants all the monies but doesn't want to have employees or accountability or a product or service anyone wants or needs. "Gig" employers that cosplay as banks, hotels, taxi services, delivery services. Web search engines that spew randomized text rather than links to authoritative and correct information sources.

Atlas shrugged, then laid off everyone who knew what they were doing because his best friend ChatGPT told him to. The same societal endgame, but there is no Galt's Gulch full of Libertarian Übermenschen, just hundreds of thousands of idled professionals helplessly watching toddlers "driving" fire trucks, "flying" planes, "writing" software, "creating" art, etc. A societal disaster, a complete civilizational self-own, promulgated by modern-day tulip speculators and assorted fascist-adjacent financiers.

I don't see any of this getting better until the adults among us pick up the toddlers, take away their toys, and put them all down for a nice long nap.

yt comment:

> Remember: The dumbest person you know is being told 'you are absolutely right' by a LLM right now.

mit: the design must be correct in all observable aspects. incorrectness is simply not allowed.
new jersey: the design should be correct in all observable aspects. it is slightly better to be simple than correct.
slop: the whole system needs to be done and shipped in under a week at all costs. everyone is fired. i've never heard the word "correct" said out loud.

And they are right. LLMs make it easier for devs to do work that doesn't matter in an industry that doesn't care, where the only thing that's measured is some bullshit metric disconnected from actual outcomes.

Many of those most vocal about the dysfunctions of LLM-coding were ALREADY WARNING ABOUT THE DYSFUNCTIONS OF THE SOFTWARE INDUSTRY BEFORE "AI". The dysfunctions predate this particular bubble and many in software have been concerned about them for years.

There is little to no downside to poor software quality, and the upside of doing the job well is limited compared to tactics like lock-in, dishonest subscription models, and monopolies.

Some corners of the software industry are less affected. Others, such as web dev, are more affected.

For example, CrowdStrike's stock price, even in a market rattled by the Iran war, is up 12% today over its peak from before the outage.

Massive worldwide economic harm, no real consequences

Developers: "The problem is that the faster we go, the more we get things wrong and have to fix them or do them over again, which actually makes us slower overall"

Big Tech: "I see, yes. What if... hear us out... what if you could get things wrong much, much faster?"

As a software developer who took an elective in neural networks: when people call LLMs stochastic parrots, that's not a criticism of their results.

It's literally a description of how they work.

The so-called training data is used to build a huge database of words and the probability of them fitting together.

Stochastic because the whole thing is statistics.
Parrot because the answer is just repeating the most probable word combinations from its training dataset.
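The "database of word probabilities" idea above can be sketched as a toy bigram model. To be clear, this is an illustrative simplification of my own, not how a real LLM is built (those use neural networks over long token contexts, not a literal lookup table), but the stochastic sampling principle is the same:

```python
# Toy "stochastic parrot": learn how often each word follows each
# other word in a corpus, then sample continuations weighted by
# those observed frequencies.
import random
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count how often each word follows each other word."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows: dict, start: str, length: int, seed: int = 0) -> str:
    """Sample a word sequence, weighting each step by frequency."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word never had a successor
        choices, counts = zip(*options.items())
        out.append(rng.choices(choices, weights=counts)[0])
    return " ".join(out)

model = train("the cat sat on the mat the cat ate the fish")
print(generate(model, "the", 5))
```

"Stochastic" lives in the weighted `rng.choices` call; "parrot" lives in the fact that every emitted word pair was literally seen in the training text.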

Calling an LLM a stochastic parrot is like calling a car a motorised vehicle with wheels. It doesn't say anything about cars being good or bad. It does, however, take away the magic. So if you feel a need to defend AI when you hear the term stochastic parrot, consider that you may have elevated them to a god-like status, and that's why you go on the defensive when the magic is dispelled.

Most of my work is considered Export Controlled Information. I'm not legally allowed to divulge it to unauthorized parties under threat of federal jail time. Turning off background third-party AI isn't optional for me; it's a legal requirement. Peddle those "purity culture" claims elsewhere; I'm not going to risk federal jail time to prop up your market bubble or techbro neo-ELIZA religion.