I have deep concerns about the rise of large language models, but I am also interested in exploring concrete use cases on the assumption that they are here to stay.

One thing ChatGPT can do quite well is clean up messy table data. This is useful for historians working with complex tables in scanned historical documents. I offer an example in this post:

https://froginawell.net/frog/2023/03/cleaning-up-tables-from-primary-sources-in-chatgpt/

#chatgpt #LLMs #histodons #asianists

Cleaning Up Tables from Primary Sources in ChatGPT

@konrad Nice! But be super careful, it does make mistakes. In your example I can see it turned 7,0000 into 7,000. I foresee a lot of transcription issues being introduced this way...
@lobidu very nice catch! The original transcription problem was me turning 7,000 into 7,0000, and ChatGPT “fixed” it rather than leaving the mistake in place, as it should have! Indeed, careful checking will be needed!
@lobidu thanks again, I updated the post to note the way that it dropped the digit. Strangely, it dropped it in the first reply but not when I prompted for an HTML version of the table! Fascinating.
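One way to catch silent digit changes like the 7,0000 → 7,000 case above is to mechanically compare the numbers in the original transcription against the numbers in the model's output, rather than proofreading by eye. This is a minimal sketch (not from the post itself; the function names and the comma-stripping assumption are mine) that flags any numeric value present in one version but not the other:

```python
import re

def extract_numbers(text):
    """Pull all digit groups from a text blob, stripping thousands separators."""
    return [n.replace(",", "") for n in re.findall(r"\d[\d,]*", text)]

def flag_changed_numbers(original, cleaned):
    """Return numbers that appear in one version but not the other."""
    orig, new = extract_numbers(original), extract_numbers(cleaned)
    # Symmetric difference: values added or lost during "cleaning"
    return sorted(set(orig) ^ set(new))

# The dropped-digit case from this thread: 7,0000 silently became 7,000
print(flag_changed_numbers("7,0000 koku", "7,000 koku"))  # → ['7000', '70000']
```

A check like this won't catch every kind of LLM alteration (reordered rows, changed labels), but it makes the most dangerous class of error here, quietly rewritten figures, visible at a glance.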
@konrad That is useful. Ok, I am changing my mind about this.
@starluna there is still plenty of time and opportunities for this LLM stuff to turn into a massive disaster for society! In the meantime… 😜🤔