@elilla I experimented with using ChatGPT to do OCR on old scanned assembly code listings.
Columnar text has always been a huge challenge for OCR, and I had already tried Tesseract and given up on it.
At first I thought the results from ChatGPT were a revolutionary leap in the state of the art.
Then I looked closer - it had reworded the comments and headers. It even changed the code in places, swapping out entire mnemonics and parameters.
Like any good sloperator, I tried to prompt my way around this, which was met with effusive apologies and assurances that it would, going forward, be sure to never do that again.
Which, of course, it immediately did.
I suspect there's only the most tenuous thread of context between a "multi-modal" LLM's text and image capabilities - they're basically just two models duct-taped together.
I find this particularly disturbing because someone doing an editorial pass - looking for spelling or grammar errors - would see content that appears fundamentally correct and might never notice it had been altered.
I would rather wade through a sea of Tesseract's obvious typos than have to take on the much higher cognitive burden of making sure grammatically correct sentences weren't invented wholesale.
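One way to surface these silent substitutions, if you have a second OCR reading to compare against, is a word-level diff between the two outputs. This is a minimal sketch, not anything I actually used here - the two assembly lines are hypothetical examples of a Tesseract-style misread versus an LLM-style "correction":

```python
import difflib

# Hypothetical readings of the same scanned line of 6502 assembly:
# Tesseract's obvious glyph-level typo ('1' misread as 'l') versus an
# LLM's plausible-looking but silent rewrite of the mnemonic and comment.
tesseract_line = "LDA #$0l   ; load accumulator"
llm_line       = "LDX #$01   ; load X register"

def flag_substitutions(a: str, b: str):
    """Return word-level (old, new) replacements between two OCR readings."""
    a_words, b_words = a.split(), b.split()
    matcher = difflib.SequenceMatcher(None, a_words, b_words)
    return [
        (a_words[i1:i2], b_words[j1:j2])
        for op, i1, i2, j1, j2 in matcher.get_opcodes()
        if op == "replace"
    ]

for old, new in flag_substitutions(tesseract_line, llm_line):
    print(f"{' '.join(old)!r} -> {' '.join(new)!r}")
```

This only flags *where* the readings disagree, of course - you still have to decide which one is lying to you.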