Dan Conway (@magisterconway.bsky.social)

So, um... this is bad. Really bad. I looked at the letters that were translated by the AI, and the very first one I found was almost entirely hallucination. Thread:


"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"

then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.

some things are worse than nothing. "signal-shaped noise" is worse than nothing.

@elilla "signal-shaped noise" is a very apt description. Thank you.
@laprice @elilla
This is spot on! I could point to a lot of equations that prove exactly this, but there is no need for it - we intuitively know it is true!

@elilla I saw a project doing this for NYC council meetings, and brought up the inevitability of mistranscriptions (and bias in whose testimony would be more accurately transcribed). They pushed back saying it was fine and users could submit corrections. I decided to check whether my hypothesis was correct and mistranscriptions were common. I followed an upthread link to a transcription and started reading through to see how long it would take me to spot an error. Literally the next speaker had serious errors in the transcription.

The kicker: NYC already provides human-taken transcriptions. But these are released "too slowly" and it's "too hard" to even implement a feature to automatically check the machine transcriptions against the human ones when they're available.
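For what it's worth, the "too hard to implement" check is not exotic. A minimal sketch, purely illustrative (the function names and the 10% threshold are my assumptions, not anything from the actual project): compute word error rate against the official human transcript when one exists, and flag transcripts that diverge too far.

```python
# Hypothetical sketch: flag a machine transcript that diverges too much from
# the official human-taken one, using word error rate (WER).
# The threshold and names are illustrative assumptions.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference length."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def needs_review(official: str, machine: str, threshold: float = 0.10) -> bool:
    """True when the machine transcript strays past the (assumed) threshold."""
    return wer(official, machine) > threshold
```

This only works when the human transcript eventually arrives, but that was exactly the situation described: the ground truth exists, just late.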

The second kicker: in the video with the error, the human scribe interrupted to ask the speaker to speak louder/more clearly so that they could get an accurate transcript.

Anyways I got blocked because the project (which aside from using AI for transcription was pretty cool) was super useful for civic engagement and my objections were intolerable.

@elilla I am continuously arguing the accuracy-debt angle. I have made significant strides in some areas of building AI 'bulkheads'.

Low-background steel is a good analogy I use. There is before, and there is after.

@elilla I think it's just nature healing from Gutenberg's press drowning the scribal tradition in a deluge of soulless mechanical tat.
@elilla There's a great book by Stefania Tutino called A Fake Saint and the True Church about the forgery of a saint out of letters between Naples and Rome in the 17th C. No AIs were necessary, just lots and lots of letters. As my favourite linguist points out, there's no way to guarantee the veracity of discourse at the level of discourse itself. Never has been. AI didn't change that.

@elilla transcription / translation is one of the areas where I see a good use for LLMs at the moment. But, only as a first-pass.

I use Speech Note to do a first pass at transcribing audio from talks and such that I will write about. But I also go back and watch the talk and clean up the transcript -- I'm not blindly trusting the output, I'm just trying to speed up the act of typing it out and saving some wear and tear on my hands.

An LLM-generated translation or transcription that is not verified is, IMO, generally a dangerous thing. It might be fine for local use to try to get the gist of something, but no organization should be publishing those types of things without verification.

@elilla I experimented with using ChatGPT to do OCR on old scanned assembly code listings.

Columnar text has always been a huge challenge for OCR, and I had already tried Tesseract and given up on it.

At first I thought the results from ChatGPT were a revolutionary leap in the state of the art.

Then I looked closer - it had reworded the comments and headers. It even changed the code in places, swapping out entire mnemonics and parameters.

Like any good sloperator I tried to prompt my way around this, which was met by effusive apologies and assurances that it would, going forward, be sure to never do that again.

Which of course, it immediately did.

I suspect there's only the most tenuous thread of context between a "multi-modal" LLM's text and image capabilities - they're basically just two models duct-taped together.

I find this particularly disturbing because someone doing a simple editorial pass, looking for spelling or grammar errors, may not notice that content which appears fundamentally correct was actually altered.

I would rather wade through a sea of Tesseract's obvious typos than have to take on the much higher cognitive burden of making sure grammatically correct sentences weren't invented wholesale.
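That asymmetry is easy to demonstrate. A toy version of the editorial pass described above (dictionary and words are invented for illustration): Tesseract-style garbage fails a token check and gets flagged, while a fluent rewording passes every check the editor can run, even though the meaning changed.

```python
# Illustrative only: a toy "typo scan" like an editorial pass.
# OCR noise is visibly broken; a fluent hallucination is invisible to it.
import re

DICTIONARY = {"load", "the", "counter", "loop", "limit", "into", "ax"}

def suspicious_tokens(comment: str) -> list[str]:
    """Words failing a simple dictionary check -- the only errors this sees."""
    words = re.findall(r"[A-Za-z]+", comment.lower())
    return [w for w in words if w not in DICTIONARY]

# Tesseract-style noise is flagged:
print(suspicious_tokens("load countcr into ax"))   # ['countcr']
# A fluent rewording sails through, though the meaning changed:
print(suspicious_tokens("load the loop limit into ax"))  # []
```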

@elilla Signal-shaped noise is a great term, thank you for that one.
@elilla "Signal-shaped noise" is an utterly brilliant characterization of what "gen AI" produces.

@elilla I’m a data engineer. I’ve been saying for years if not decades: “Bad data is worse than no data”. And, generally, when people hear that, they agree with me.

When I point out that genAI produces bad data, the turnaround to “oh, but, so useful”, “early days”, etc, is quick and disheartening.

@elilla @jacel As someone who likes using (but not remotely relying on) automated transcription and notetaking that way... as far as I'm concerned, if anyone's *training* on that stuff, then they deserve exactly what they'll get. And if whatever big corporation is *putting that stuff in training sets*, then they need to quit shitting where they eat.

@lorxus @elilla yeah, but.

You know whose training set all these things that are being indiscriminately vomited into the informational substrate of humanity /do/ end up in?

The people's c.c

@jacel @elilla As in, people will read those and uncritically accept it? Or something else?

@lorxus that is certainly where the marketing is pushing folks, and where many people are happy to be led, based on my day-to-day interactions.

But ultimately generating misleading slop is just so much easier than making actual information now. It's going to crowd out all the worthwhile stuff with plausible not-quite-equivalents.

@elilla SIGNAL-SHAPED NOISE
@elilla Earlier today I reflected on how AI-generated closed captions on local news here in Sweden are too exact. When a human does them in Sweden, they remove filler words and repeated words. When those are suddenly there, it takes more cognitive effort to read what people are saying.
@elilla Wrong information is so not better than nothing. 😅

@elilla

Thing is, our myths and literature have been telling us this for millennia!

*All* the oracle stories involve an oracle saying something ambiguous, which the protagonist dangerously misinterprets. It will always be mushy, you'll always choose the wrong interpretation, and it will always be your fault. In that sense, saying "you have to check the AI result" is a threat, meaning the AI is free to make mistakes, but you will be held liable.

This is not positive information; it is almost *negative* information in that we still don't know the truth, but are tempted into dangerous fantasies of misinterpretation.

We've even turned the whole mess into a cautionary tale with the "ibis redibis" story of the oracle at Dodona, a caution heeded nowadays by almost nobody:

https://en.wikipedia.org/wiki/Ibis_redibis_nunquam_per_bella_peribis


@elilla I have direct experience of this. There's a handwritten letter from my grandfather dated around 1914 that turned up in a box of stuff. It's in cursive, and younger people are less familiar with cursive, so a family member put it through ChatGPT. The result was, as you'd expect, vaguely similar to what was written, with some alarming inaccuracies. And it missed the actual point he was writing about.
I'm old enough to read cursive and I've had some recent experience making out other old writing in much worse hand, so I could read it quite well. A couple of words were hard to decipher but not impossible.
So my conclusion was that the AI transcription was worse than useless.