When I use generative "AI" it invariably contains mistakes. That's just reality at the moment.

So when I see company after company adding AI to every product in sight, I now assume that all of those products are "creating" flawed results, by design.

This is crazy. (Google search is, maybe not coincidentally, crappier than ever.)

We need a whole genre of products that don't contain AI. Maybe they, too, are irredeemably flawed, but at least we'll know the mistakes aren't a design decision.

I don't share the pure contempt for generative AI that some folks express so vehemently. I use it for brainstorming -- e.g. making lists of things I might want to consider when looking at a specific topic -- and it's fine for that.

I do think a lot of what's happening will be seen, in retrospect, as just the latest scam to liberate rubes from their money.

@dangillmor

Feel like AI/LLMs are just a new computer programming technique/capability. The use to which it is put is the issue. The use of these techniques to help diagnose cancer or interpret astronomy survey data is wonderful. The use to spread misinformation and perpetrate scams is awful.

Also, it seems for AI/LLMs to be useful, great care must be taken as to what you shovel into the dataset. As the old saying goes "garbage in, garbage out."

@mastodonmigration @dangillmor I don’t know if I would trust AI to correctly diagnose me without making something up. I could be wrong, of course.

@mastodonmigration @dangillmor I have it on good authority that the last four words there, about what comes out being what went in, are the watered-down version.

💩 📥 💩 📤

@dangillmor

On the plus side, a lot of these things are "AI washing" using mechanical Turks. So, some of these things may use underpaid slave labor instead of mechanical BS generators.

@rrb @dangillmor Amusing that the term "mechanical turk" has now come full circle.
@dangillmor I use AI image generators because I find those mistakes, glitches, artifacts, and hallucinations funny and weird, and I like funny, weird, useless, meaningless, yet strangely pretty images. I collect everything I have the machines generate for me in a folder labelled "elektrodada".
@dangillmor I suspect that brainstorming is the *only* valid use, because it's the only time we don't mind if ideas are wrong; that's legitimately part of the concept of brainstorming.
@dangillmor We may need a labeling law requiring products using AI to clearly indicate this (not just buried in disclaimers) along with details of the implementation that are relevant.

@lauren @dangillmor

Digital products such as audio, video and photos should be labeled if they were created by AI.

@MichaelBishop @dangillmor And the first step needs to be: Define AI.

@MichaelBishop Pretty much all mobile phone camera software now uses some kind of AI for picture-taking. (Google and Apple are leaders in this.) Should they be required to have a watermark disclosing this?

@lauren

@dangillmor @MichaelBishop This is why the definitions matter. There is a grey area between "conventional" processing and AI that is not well defined currently.
@lauren Yes, indeed. And people need to understand that this stuff has been in use for some years now, with sometimes grotesque consequences as programmers' work has harmed or ruined people's lives (financial engineering and prison sentencing, for example) @MichaelBishop
@lauren @dangillmor @MichaelBishop Unfortunately “AI” has become attached to a highly specific technology (essentially the “neural network” approaches used in LLMs and generative image models). It’s a neat tech, but historically *lots* of computation could be argued to be “artificial intelligence”. (What is a chess-playing computer if not artificial intelligence in a particular domain?)
@michaelgemar @dangillmor @MichaelBishop Poe correctly theorized that there was a person hidden inside.
@lauren @dangillmor @MichaelBishop Well, I doubt there’s a little person inside Deep Blue.

@dangillmor

Seen this too. It's like someone decided it was a good idea to connect the sewer pipe to the water supply line. Now, in plumbing, if this happens, it takes a massive effort to re-purify the system.

@dangillmor that's because GIGO, and these models are trained indiscriminately on information from the Internet. The GIGO issue can only get worse as more AI-generated data, full of errors from its training, gets published.
@dangillmor this should be part of the product advertising! Like when Queen used to put "No Synthesizers!" on their albums.
@dangillmor Well, I don't believe the DuckDuckGo search does, but it uses Bing, so take that with a grain of salt and use Searx.
@dangillmor As far as I know, the Fediverse doesn't contain AI yet lol.
@dangillmor In the dystopian future, we will all pay more for things to suck less. Like how people pay more for organic stuff that comes without all those pesky carcinogens. Who knows, maybe "AI-free" will become a selling point.

@briankrebs @dangillmor

I think AI may rapidly become a selling point.

I’m curious as to how creators will tag our work to make it clear it is AI-free.

@mlevison @briankrebs @dangillmor I think that those who choose to use AI should disclose it. Those of us who don’t should not have to.

@nomadnewyork @briankrebs @dangillmor

Sadly, in the medium term, some of what users see will be generated. We need to make an effort to stand out.

@dangillmor If capitalism worked the way its supporters and theorists say it does, then right now there would be a lot of businesspeople trying to invest in guaranteed AI-free tech, to gain market share among people who prefer that.
@dangillmor Now you want products that don’t contain AI. Later you’ll want light bulbs and refrigerators that aren’t even connected to the internet. Where will it end?

@dangillmor You know how they make machines and things that don't last as long as they could, so they wear out?

So like that but with AI... 🤔

(Interestingly, so far no one in the comments has thought of this... they have other cool ideas, but yeah; just a thought)

@dangillmor I have to wonder, when you say they invariably (which means always) contains mistakes, what gen AIs are you using and what are you using them for? And what is your standard of correctness? Given that gen AIs can outperform most humans in some tasks, even tasks made to test humans (like exams), how do you reconcile your statement with these findings?
@dangillmor Just waiting for the "Classic" wave of products to return a la Coke Classic.
@dangillmor It is weird sometimes. "Google is your friend" has turned into "I don't trust you and your crazy results anymore." Frustrating. "OK, let's try this again for the 5th time." Grrr. 🙁😒😒😒😡
@dangillmor People make mistakes, software has bugs, all our work is flawed "by design" if you will.
We agree that AI is wildly over-hyped, but what's categorically different about AI that you are worried about?
My sense is if anything it's that AI scales well, but we've already seen that with modern compute infrastructure.
@dangillmor agreed - I don’t see the present value in content creation because of the errors in what is produced. The value I see at this point is in applying AI to the content I have read and annotated. Putting words together and organizing ideas is helpful.
@dangillmor yes, I have come to the same conclusion after using generative AI.