I sure wish you wouldn't post LLM output.

Not even to make fun of it.

The fact that you're sharing it means that you're using it. The fact that you're using it means that you're counted as a user. The fact that you're a user means that you're helping extractive-capitalist con men steal money while simultaneously worsening the climate crisis.

@GeePawHill Castigation is your right, but understanding requires nuance. Why do you think some people use it, beyond the glaringly obvious 'well, it's easy and I don't have to try hard'?
Alt-text: A new hotel room with a touch-screen remote on the wall.
In that same hotel room: 'These bottles all feel the same. Which one is shower gel, which is shampoo, and which is conditioner?'

Blind people can ask these questions of AI at 3 in the afternoon or 3 in the morning without having to bother others.
It uses an LLM behind the scenes to describe that picture, and you seriously want to tell me that's so terrible?
https://universeodon.com/@FreakyFwoof/114893117007293373

Andre Louis (@[email protected])

People: 'I hate AI!' Also people: 'Image with no description.' Know what that means? One of two things, in my case: 1. I ignore your post. Or 2... I use #AI to check out your post. How 'bout that? Can we get a t-shirt that says 'Always add #AltText'?


@FreakyFwoof @GeePawHill

I thought image recognition models were a separate class of machine learning models from LLMs.

I have seen at least one article about speech recognition getting worse as the specialized models used in that field are replaced with LLMs.

@gbargoud @FreakyFwoof @GeePawHill I'd be interested in reading that article about speech recognition. As far as I understand, unlike image recognition, speech recognition still uses a separate model.

@danovz @FreakyFwoof @GeePawHill

I'm having trouble finding it because search has become extremely hard, but it was basically showing that LLM-enhanced models were predicting things that were never said rather than transcribing the speech. I don't know whether the tech has improved since then, though.

@gbargoud @danovz @GeePawHill For my podcast I use something called MacWhisper, which uses a local model to transcribe my speech into text for subtitling. It very rarely gets things wrong.
If you feed it random noise it will happily make up stupid crap, but for actual real-world use it's very good.

@FreakyFwoof @danovz @GeePawHill

Based on some quick research: MacWhisper uses Whisper under the hood, which is a speech-recognition-specific ML model.