theFutureOfCommunication
I remember when lossy compression was popularized, with MP3 and JPEG: people would run experiments re-encoding lossy to lossy to lossy, over and over, and then share the final image, which was this overcooked nightmare
I wonder if a similar dynamic applies to the scenario presented in the comic with AI summarization and expansion of topics. Start with a few bullet points, have it expand them to a paragraph or so, have it summarize that back down to bullet points, repeat 4-5 times, then see how far you've drifted from the original point.
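The "generation loss" experiment above can be sketched as a loop. The `expand` and `summarize` functions here are toy stand-ins for real LLM calls (which this thread doesn't specify), just enough to make the loop runnable and to show how drift accumulates with each round trip:

```python
def expand(bullets):
    # Stand-in for an LLM call: "expand these bullets into a paragraph".
    # Real models would paraphrase; this toy just pads each point.
    return " ".join(f"In other words, {b.lower()}." for b in bullets)

def summarize(paragraph, n):
    # Stand-in for an LLM call: "summarize into n bullet points".
    # This toy keeps the first n sentences as the "bullets".
    clauses = [c.strip() for c in paragraph.split(".") if c.strip()]
    return clauses[:n]

def round_trip(bullets, iterations=5):
    # Expand then re-summarize, repeatedly, like re-encoding a JPEG.
    for _ in range(iterations):
        bullets = summarize(expand(bullets), len(bullets))
    return bullets

bullets = round_trip(["Ship the fix", "Update the docs"], iterations=5)
```

Even with these trivial stand-ins, each round trip prepends filler the next round can't remove, so the bullets steadily drift from the originals; with real lossy paraphrase on both legs, the drift compounds the same way.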
In my experience, LLMs aren’t really that good at summarizing
It’s more like they can “rewrite more concisely” which is a bit different
If it isn’t accurate to the source material, it isn’t concise.
LLMs are good at reducing word count.

translation party!
Throw Japanese into English into Japanese into English ad nauseam, until an 'equilibrium' statement is reached.
… Which was quite often nowhere near the original statement, in either language… but at least the translation algorithm agreed with itself.
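The stopping rule described here is just fixed-point iteration: keep applying the round-trip translation until the output stops changing (or starts cycling). A minimal sketch, where `toy_round_trip` is a stand-in for the real EN→JA→EN translation step:

```python
def find_equilibrium(text, round_trip, max_iters=50):
    # Apply the round-trip transform until we see a repeated output:
    # either a fixed point (the algorithm "agrees with itself") or a cycle.
    seen = {text}
    for _ in range(max_iters):
        text = round_trip(text)
        if text in seen:
            return text
        seen.add(text)
    return text  # gave up; no equilibrium within max_iters

def toy_round_trip(s):
    # Stand-in for translation: lowercase and collapse whitespace.
    # Like real round-trip translation, it's lossy and idempotent-ish.
    return " ".join(s.lower().split())
```

As the comment notes, converging says nothing about fidelity: the equilibrium can be arbitrarily far from the original statement, it's just a point the transform maps to itself.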
Gradually watermelon… I like shapes.
Twisted translations
Summarizing requires understanding what’s important, and LLMs don’t “understand” anything.
They can reduce word count, and they have some statistical models that can tell them which words are fillers. But the hilarious state of Apple Intelligence shows how frequently that breaks.