you can tell a paper summary is LLM-generated because it ends with superfluous junk like "[Result] is introduced, highlighting its promising potential for future research and the dataset's impact on revolutionizing [field]."

It's surprisingly hard to get the model to curb its enthusiasm

for example I told it not to say results were "impressive" and it suggested "transformative" instead 🤦‍♂️
@danyoel Funny, I had a similar thought today when the Google search “AI Overview” called a certain indentation practice “crucial” — when really it’s just good style. I wonder if there’s a good metric for assessing how confident/emphatic/dramatic a chunk of text is. Applying that to LLM outputs vs. natural text samples seems like a paper begging to be written.
@antimattr Crucial Indentation Style is my next prog rock band name
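A back-of-the-envelope sketch of the metric @antimattr is wondering about: score a chunk of text by the fraction of its words drawn from a hand-picked list of intensifiers/superlatives. Everything here (the word list, the function name, the threshold-free comparison) is made up for illustration, not a validated measure.

```python
import re

# Toy list of "enthusiasm" words of the kind the thread complains about.
# A real metric would need a much larger, empirically derived lexicon.
INTENSIFIERS = {
    "crucial", "transformative", "impressive", "revolutionizing",
    "groundbreaking", "remarkable", "promising", "vital", "essential",
}

def emphasis_score(text: str) -> float:
    """Fraction of words that are intensifiers (0.0 for empty text)."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    return sum(w in INTENSIFIERS for w in words) / len(words)

llm_like = "This crucial dataset shows transformative, impressive results."
plain = "This dataset shows results consistent with prior work."
print(emphasis_score(llm_like) > emphasis_score(plain))  # True
```

Comparing distributions of such a score over LLM outputs vs. human-written samples would be the "paper begging to be written" part; a bag-of-words count like this is just the crudest possible baseline.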