If you've been using an LLM to summarise documents for you, I highly recommend trying it on a document that you wrote.
@jasongorman ...and then you find out the flaws 🙂 I do it all the time. I think NotebookLM does a good job, by the way, at least in providing an overview and a quick summary.
@jasongorman
That's actually a good idea regardless of one's bent on the subject. Testing for fitness for purpose - *who* does that?
@gueuledatmosphere @jasongorman uh I do … that’s why I discovered its true purpose: Alleviate writer’s block
@jasongorman
It's going to be like Google translate and then translate back again isn't it?
@simon_brooke
@jasongorman
An LLM is a #LetterByLetterMachine!
It doesn't "know" that it should answer questions, make suggestions, or do any of the things naive users believe it can do.
And, to the great surprise of a lot of users, it can't calculate 🤣
It just sets one letter after the previous one, writing text that sounds right in the language being used, without any "knowledge" of or interest in the content of the text.
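The "one letter after the previous one" idea can be illustrated with a toy sketch. This is only a character-bigram lookup table with greedy selection, written for this thread as an illustration (the function names are made up here); real LLMs predict tokens with a neural network rather than counting characters, but the autoregressive loop has the same shape:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # Count which character follows which in the training text.
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length):
    # Greedily append the most frequent successor, one letter at a time.
    # The model has no notion of meaning, only "what usually comes next".
    out = start
    for _ in range(length):
        successors = counts.get(out[-1])
        if not successors:
            break
        out += successors.most_common(1)[0][0]
    return out

model = train_bigram("the theory then there the then")
print(generate(model, "t", 5))  # → "the th"
```

The output looks locally plausible for the training text while carrying no intent, which is the point the post above is making.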
@jasongorman how about, just hear me out, how about we have summary fields for documents, and people spend 20 minutes summarising as best they can, so strangers are well informed about their work.
And if you can't make that summary, it's hard to imagine anyone caring to read the document.
@peteriskrisjanis For every conceivable audience? That's much more writing than the original text.
@jasongorman LLMs are entirely the wrong tech for summarization. LLMs produce "average text". A useful summarization tells you what deviates from average.
@jasongorman sadly, the same normally applies if you read news reports (by real journalists) on a topic you have expertise in.
If humans can't get it right, why assume LLMs will?
@tgent_fens I've had some dealings with the press/media, and I can confirm that they'd already written the story before they interviewed me. They were just looking for quotes or soundbites to support their narrative.
@jasongorman sounds like something an LLM might do. It's like they're almost human. 🤔
@tgent_fens It's apparently true that the "chain of thought" models backfill their reasoning