The only two viewpoints on generative AI that get any play among tech punditry are:

1. AI is a lever that helps people do better
2. AI is effective automation that will replace people, or be a threat to them.

The third viewpoint, that AI tools are kind of shit and, if used in their current form at scale by corporations and governments, will “enshittify” large portions of our society, doesn’t seem to register with them at all.

Flaws in systems don't get fixed by people who think they're great and should just be bigger.

They get fixed by people who listen to those who've noticed the flaws.

@baldur this is more broadly applicable to technology, i think foucault wrote about this. technology is a time-saving automation abstraction which necessarily cuts corners and dehumanizes. it needs to be used carefully.

that said, every version of OpenAI's GPT released is substantially better than the last.

@baldur I think that's because we are at the beginning of a new hype cycle. And compared to the previous one (crypto currencies and "Web3") I can see some lasting and useful applications in this one.

That said, the future is STARTING now; we definitely are not there yet. :-)

@martinc The problem is that if AI vendors don't see the flaws in existing systems, they aren't likely to fix them. Instead they'll just focus on making them cheaper, faster, and bigger.
@baldur agreed, but isn't that part of every hype cycle? And if making them just bigger does not really improve things, aren't the market forces going to "correct" this, since training the large models is not exactly cheap?

@martinc Microsoft and Google together basically control over 99% of both the office productivity and search markets.

Since they're both all in on generative AI and have effectively the same strategy, there is next to nothing in the market that can shift them either way.

@baldur @martinc The biggest problem is that the people making those calls don't understand the underlying technology. It's a symptom of the true root cause: the MBAification of everything. They focus solely on "investor value" with short-term gains and have absolutely zero interest in any long-term strategic plan other than platitudes. What will push neural networks, etc. to being better will be industries that use the technology as a tool and not an end in itself, like biotech, drug discovery, material science, engineering design. Things that can leverage the technology not just for hype/bullshit generation, but for actual physical products that have to actually work.
@GradientU0 @baldur @martinc You’re so right about this it hurt to read. It’s something that is so true of where I work and of tech as a whole…but then, it’s like Hollywood. It’s the money men who run the show, not the talent. And we don’t even have unions.

@GradientU0 @baldur @martinc oh no, they know the dangers... They literally published research papers on them.

They just don't care.

They won't be shut down when people die.
No one will go to jail when people die.
No investor will pull their money when people die. Not like they need investors anyways, they both print money.
They won't have to pay the people whose data they use.

And that's the real problem. AI is consequence-free for them.

@baldur @martinc Microsoft and *especially* Google are known for dropping something like a hot potato if it doesn’t take. So I don’t worry too much about them being quick to jump on the bandwagon. The question is whether people are still interested after the hype has died down, and I don’t see FAANG having much control over that.

@martinc @baldur My guess is no. One example:
LLMs will be used to, among other things, fill the www with ”SEO” crap, which I predict will render the net largely useless. Like ordinary SEO on speed.

Quality won’t be, and has never been, a driver in that market.

@baldur @martinc I honestly think it's even worse than that: they *know* about the flaws and the very concrete harm that comes from releasing those systems in such a state. They actively choose not to care, because no authority has yet stepped in to force them into caring, too few people recognize those flaws and, more importantly, not caring makes them money. AI companies laying off their ethics teams sort of supports this, in my opinion.
@zanna_92 @baldur @martinc exactly this — they don't care about harms, only extraction, and preventing regulation is key to keeping the extraction going

@susankayequinn @zanna_92 @baldur @martinc

It might even be worse. Some people might see a benefit in enshittifying everything. Then their customers would have to pay, a lot, for clean information.

But perhaps we don't need the hypothesis of actual malice, just greed.

@zanna_92 @baldur @martinc Big players with trusted brands care about these harms. They have to have hired an ethics team to have fired them. But smaller players like Microsoft are happy to just add it to Bing and ship it, they don't have much to lose in the browser search/knowledge engine space.

There are lots of things that can be done to make generative pretrained transformers safer, though, and it would be good to build up consensus and market demand for them: watermarks, citing sources.

@martinc @baldur "this is just the beginning, it has so much potential" is the exact same line that they used for crypto/web3, and my god, did that do some incredible damage before it fizzled out and became shitty capitalist background radiation that just low-key destroys people's lives without winding up in headlines
@AmyZenunim @martinc @baldur the thing with generative neural networks (gonna call them GNNs for the rest of this comment) is that they can actually be used in many existing practical applications like web search, code completion or word processors, in contrast to blockchain which only really has been used for tokens of abstract value.

Not that GNNs will necessarily be good at those purposes. I think in their current state they're dogshit for web search, because current GNNs don't know the accuracy of their output and don't seem able to understand the limitations of their knowledge (in anthropomorphic terms). Basically, they don't realize whether they actually know something or are just talking out of their metaphorical ass. And that'll just be a mass bullshit generation engine.

I think the use of the term "AI" for GNNs is definitely part of the hype machine. In my opinion, something that can't use logical reasoning and spits out inaccurate results isn't worthy of being called artificial intelligence (it arguably might be something more like artificial gut feeling).
@baldur And the fourth, that it is yet more to concentrate power in fewer hands, also gets no time.
@aredridel Oh, yeah. That one _never_ gets any time with any of this crowd.
@baldur The worst part is people and institutions using AI to make decisions or as part of a decision process and how it’ll just enshrine all the mistakes and biases of the past because they’re present in the training data itself or because of biased curation of training data.

@avery_atleast Ugh, yeah. Such a bad idea.

"Let's hand all of our forward-looking decision-making to a hindsight machine stuck in the past and unable to learn."

@avery_atleast @baldur
Yup. I would go as far as to describe this as the primary current risk of AI, not in some sort of looming singularity, but in the idiotic level of trust given it by nontechnical people in positions of power.
@baldur It is, I think, an interesting way of judging the technical authenticity and journalistic integrity of these pundits. Hype should be beyond a real journalist and shouldn't fool anyone with real technical knowledge.
@andy_twosticks @baldur that's the kind of journalism you get when all mass media is owned by a handful of billionaires.

@baldur Genuinely, everyone has given up on the internet or technology ever being used for good again. Everyone who talks optimistically about the net or AI in its current state seems to be the sort of hustle culture weirdos who fit right in on HN.

We're all just floating along like in the film Waterworld, fighting for the scraps that are left.

@divclassbutton Yeah, very much this. The discourse in tech circles and in media feels incredibly disconnected from the conversations I'm seeing in the people around me.

@baldur i think some things that are being dominated by ai aren't as interested in precision as one might hope.

an example is adrian black's recent run-in with google detecting his second channel as impersonating his first... it's clear that google uses ai as the front-line decision maker. their job is to keep youtube just good enough that people will risk being run over by it and keep making it money.

analogously, shotspotter has been caught taking requests from police to relocate detections and change the recorded reasons. their real job is to manage a symbiotic relationship with police departments that enables them to capture public funds. the real job of the platform is to be plausibly helpful while avoiding causing anyone bad enough reputational harm.

i think more things than we believe are like this due to misaligned metrics, and ai will shine in these areas not because it's better but because it enables connected people to capture money flows that currently go to labor (people who aren't connected).

@baldur

How is money to be made with that third one? Nobody would buy that, though it does fit well with "move fast and break things"...

@baldur Once an AI is able to scan and successfully edit a code base that's a decade old, with dozens of contributors who all had their own code styles and idiosyncrasies, then it might be time to worry.

Seeing these tools generate greenfield projects only scratches the surface of what day-to-day programming actually entails (in my opinion).

@baldur I, for one, look forward to having every customer service inquiry go through an endless cycle of circular chatbot reasoning.
@baldur we need some kind of mark to identify AI-free work (like "Atomkraft? Nein danke") that tells you you're looking at the work of a human. Of course some will hate those who use it; I'm not too bothered.

@baldur This reminds me of the age of "Expert Systems" back in the '90s. It's almost literally the same excitement that people had when they discovered computers could be connected and share information... except without the dire warnings.

I imagine a lot of people have never heard of expert systems. I do not wonder why.

@baldur @drahardja That’s more-or-less my view, with the caveat that some forms — with realistic expectations and understanding — are genuinely useful right now.

Long term, though, I’m pretty bullish on these technologies (your “lever” position) — but we have to keep in mind that it’s still very early days and they’re as of yet nowhere near capable of performing a great many tasks without human judgment being involved.

@baldur +1 enshittify
Pluralistic: Tiktok’s enshittification (21 Jan 2023) – Pluralistic: Daily links from Cory Doctorow

@skry @baldur hope we can soon see some software without those intermediaries.
@baldur
the whole business landscape is set up to encourage everyone to take the first barely-viable shitty version of a thing and make it absolutely ubiquitous to the point of choking out all competition forever rather than to wait another five years making a good version first
@baldur also, 2. and 3. are not mutually exclusive
@baldur Not much of a band name, but someday when I'm unhappy with the outcome I'll title some album "Enshittified."
@baldur Eh, corporations are beginning to figure out that they're tools and that, if you remove humans from the loop entirely, bad things happen. But there's a learning curve on what you can practically use them for and a LOT of dunderheaded experiments that are clearly not going to end well (which anyone sensible could tell from the start).
One fun thing to do is replace “AI” with “golf carts” in any article you read. The overblown drama of how AI is written about is hilarious.

@baldur I agree with you. Just yesterday I noticed how Midjourney is useless for real visual content: it was not able to generate a functioning maze. But the maze is one of the classic tests of intelligence.

It's still an algorithm and nothing more.

@baldur

Very true.

While job losses in journalism seem to have already started (I read a convo among journalists here who were complaining about related lay-offs), the world's knowledge on the internet must be protected.

This is why I call for a traceable digital signature for all #AI generated output and unlimited liability (see earlier post):

https://mastodon.social/@HistoPol/110153944497185106

ChatGPT and the Enshittening of Knowledge

Daragh O Brien poses some thoughts on ChatGPT, AI Text Generation, and the Enshittening of Knowledge and what we can learn from Plato.

@baldur I think Tesla autopilot is already a good example of this
@baldur 1+2:
AI is a great tool for people to shoot themselves and their orgs in the foot with.
@baldur One person I watch on twitter fancies themselves a tech prognosticator so they really have to be all in on AI or what else are they going to prognosticate about right now?

@baldur because that's the goal!

People are more sensitive to price than to anything else, on average, for most cases.

Thus, as long as you can provide a just barely good enough product, the cheaper the better!

@baldur I've read part of this and need to finish it, but The Fallacy of AI Functionality really caught my eye

https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533158


@baldur This company claims to use "AI" and "machine learning" to make websites accessible. In reality, they make things worse. But for $49/month, you can pretend your website is compliant.
https://www.nbcnews.com/tech/innovation/blind-people-advocates-slam-company-claiming-make-websites-ada-compliant-n1266720
Blind people, advocates slam company claiming to make websites ADA compliant

Blind people and disability advocates have been speaking out and suing companies that use AccessiBe, an automated accessibility tool for websites.


@NIH_LLAMAS

😬 There's going to be so much of this kind of crap.

@baldur I think it is because none of them want to provide a "640k ought to be enough for anyone" or "the internet will have no greater impact on the economy than the fax machine" style quote that will haunt them for decades.