Somehow, it always seems to be over the *next* hill.
I'm beginning to wonder if what they're actually selling is hills.
@jasongorman yes. Web3, Metaverse, Crypto.
A fool who not sees the pattern.
Alchemy became biotech, industrial chemistry and big pharma. Each one a trillion dollars industry...
...be patient and drink your mercury potion!
This post is gold.
Soon my new novel "Sandbags & Barbwire" goes on sale
A more modern spin on a World War I tale!
• Apple knows there's no gold, but Google seems to think there's gold, so Apple shareholders demand they start mining.
• Google knows there's no gold, but Facebook seems to think there's gold, so Google shareholders demand they start mining.
• Facebook knows there's no gold, but Apple seems to think there's gold, so Facebook shareholders demand they start mining …
@negative12dollarbill It's a Ponzi scheme. Just like crypto, Web3, Big Data, etc etc.
It's not about gold. It's about the price of land.
@jasongorman @negative12dollarbill speaking of, look at the WebCoin and “The Orb” stuff Altman is up to with a second company.
Tl;dr “Damn we made something that has poisoned the information well of society by slopping up every damn thing? Well now I guess we gotta come up with a way to verify that people are real people and not just bots!
Enter: the blockchain.”
That alone should be damning and torpedo the entire industry, but the nominally literate appear to be the ones in charge, so we have to join them with their gold rush 🙃
@jasongorman well you could have made the same argument for search engines 20 years ago. massively energy-intense, unprofitable, used by (comparatively) few people. and take Bing as an example: nobody I know uses that.
I don't want to say everything AI is cool and unproblematic, but the nihilistic, Manichean rethoric is kinda tired by now. there's lots of good (many still potential---the tech is like 3 years old) uses of the new AI tools and the environmental concerns are weirdly specific (I've heard maybe 2 people in my entire life ever complaining about the environmental impact of social media or search engines... what's the sudden worry?)
@mc There's one difference, though. Search engines work 🙂
Well, they used to, at least...
The notion that we can solve ALL THE PROBLEMS or replace ALL THE EXPERTS with what is, at the end of the day, autocomplete is just the latest tech growth story after the last one ran out of steam.
It's about share prices.
LLMs can be a breakthrough technology even without all the absurd claims by marketing departments.
About the energy, what if ad hoc hardware, like analog or photonic processors, will reduce the consumpion by 100-1000 times?
@jasongorman that's a strawman. you don't need to believe AI is all-powerful (or even expert-level) to see it's already useful to many people, for many little tasks, and that it has potential to be integrated *well* in existing tech and massively improve it (eg imagine the implications for accessibility if you could directly ask your vocal synthesiser about what's on the page).
like, this sentiment wasn't around *at all* before the current AI hype wave, so ita 100% an overcorrection to the nauseating AI force-feeding of the latest years, so I get it.
@mc There's a growing body of evidence that they have zero net impact on productivity, and at a massive cost.
https://www.theregister.com/2025/04/29/generative_ai_no_effect_jobs_wages/
We see this in the 2023/24 DORA data. While individual developers anecdotally report productivity gains, they evaporate at the level of what teams are actually delivering.
And we have a pretty good idea how this paradox is caused. It *feels* faster in the short term, in much the same way that skipping unit tests *feels* faster.
@mc @jasongorman
> you could have made the same argument for search engines 20 years ago. massively energy-intense, unprofitable, used by (comparatively) few people.
what are you talking about.
We were all using google in 2005, because it was damn good, practically all the time.
There was no debate about whether google was good for anything, because people generally found what they were looking for.
@mc @jasongorman
> I don't want to say everything AI is cool and unproblematic, but the nihilistic
*nihilistic*.
What the fuck are you talking about.
if i criticize "a.i.", that doesn't imply that i think there's *nothing* worth believing in.
@mc
People can't remember/list all the problems associated with it at all time, but it seems like "AI" has all the problems of all the other things.
☑️ Profiting from people's work without compensating them
☑️ Exploiting and psychologically maiming underprotected workers for ridiculous hourly or task-based pays
☑️ Amplifying existing bias and discrimination
☑️ Using resources (energy, water…) needed by people
☑️ Polluting (greenhouse gas, rivers and ocean warming, poison dug out of the ground to extract metals…)
☑️ Being mostly controlled by the same people/companies trying to build yet another mono/oligo-poly
…
…
☑️ Working hand in hand with surveillance capitalism to bypass people privacy to get more data
☑️ Facilitating the spread of disinformation and the mass manipulation of opinion
☑️ Arbitrary censorship
☑️ replacing creative jobs with soul-crushing ones
☑️ Lack of accountability (bots will not magically become sentient, but I can see them being recognized as legal people in a few years, like companies already are, so they can take the fall for humans)
…
☑️ The mostly-empty promise to solve all the problems it creates with more of itself
But also new problems, like
☑️ Being insecure by design because "user generated input" is part of the code
@jasongorman @mc I agree that it's not perfect on big code bases (yet), but it certainly adds value with certain tasks, like writing documentation, test cases or refactoring.
I'm not a frontend developer. My hobby website wouldn't be live without AI.
Whenever I have input into a hiring decision, I look for any mention of AI or AI powered tools on the resume. That's an instant do not hire.
Be careful writing bugs faster just increases the supply of problems elsewhere in the system. Just because many people do something doesn’t make it a good idea.
Writing bugs faster isn’t super power
Caveat emptor, I use local LLMs as a critic for my writing and sparring partner for ideas. I also use it for needle in the haystack pattern matching problems.
So a somewhat useful technology in some cases.
@mlevison @jasongorman @mc I completely agree, but you can say the same about a good IDE: it lets you write code faster, but if you write bad code, that's a bad thing.
That's why I always review the LLM's code manually, and I generate tests for all pieces of code. Of course you shouldn't just accept everything the LLM suggests, but if you use it properly, it is a very useful tool.
Just because many people complain doesn't make it a bad idea.
Strangely I'm just about to unconference this very topic: https://agilealliance.social/@mlevison/114459746587505566
Critical thinking and LLM generated code is harder than we expect.
Consider a few things that might help:
- Start with BBD/TDD style Test First
- Work in small increments that you can understand realistically I can comprehend 20-30 lines of code in a few minutes
- Assume there will an increase in duplication and complexity so refactoring is way more important
....
Attached: 2 images AI Magic Beans Friend or Foe Global Scrum Gathering Munich. Unconference offering today and today only in the lobby. Offered at 10:30am and again at 1:00pm. Discuss Three Principles and Four Guidelines for using GenAI and GenAI infused tools. Background. People are writing Code and User Stories with GenAI (strictly speaking, LLM tools). Are they helping or harming their teams? #gsgmun25 #agile #munich #networking
FWIW on the last point there is strong evidence (See GitClear study), that GenAI is increasing duplication and reducing refactoring.
Also from my experience, when asked to generates examples, test cases it always misses key scenarios.
For the foreseeable future just assume these tools get arithmetic and mathematics wrong.
They might offer the right steps in a calculation (useful). And still get the wrong answer.
Key thing to remember - an LLM is a text prediction tool, not a calculator.
We agree on much, but I don''t think that statement is correct.
These tools just predict which token is the next best response to your input.
@mlevison @erwinrossen If you also consider the 2023/24 DORA data, it seems that there's an overall net negative benefit in terms of the complete delivery process. LLMs tend to create larger change sets, which exacerbate downstream bottlenecks like longer code reviews, bugs slipping through and merge conflicts.
This is the standard Big Tech model of "disruption" at work. Take something and make it worse, displace the original workforce so there's no easy way back, and lose $billions doing it.
Curiously I said something similar here: https://agilepainrelief.com/blog/is-ai-making-your-organization-fragile-or-more-resilient/
@captain_acab These successive bubbles, starting with dotcom 1.0, have been very effective engines of greater and greater wealth concentration. To the point now that some investors can effectively buy governments.
I don't see it ending well.
Fool's gold.
this sounds like some kind of silly trend 🤑Look we’ll get the sulfer to mercury ratio right any day now.