Had a lot of fun with my stats students today. I gave them two data sets: one came from a random number generator, the other I made up myself, not random but designed to look random. They figured out which one was fake.
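One classic tell in hand-faked dice data is a shortage of adjacent repeats: a fair die repeats the previous face about 1/6 of the time, but people faking "randomness" tend to avoid doubles. A minimal sketch of that check (the seed and the sample are my own illustration, not the actual class data):

```python
import random

def repeat_rate(rolls):
    """Fraction of adjacent pairs where the same face comes up twice in a row."""
    pairs = list(zip(rolls, rolls[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

random.seed(0)  # arbitrary seed, just for reproducibility
real = [random.randint(1, 6) for _ in range(100)]

# For a fair die, adjacent repeats occur with probability 1/6 (about 0.17);
# hand-faked lists tend to come in noticeably lower.
print(f"repeat rate: {repeat_rate(real):.2f}")
```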

Then we had ChatGPT generate the same kind of data set (100 random numbers from 1 to 6), and it had the same problems as my fake set, but in a different way.

We also talked about the study on AI-generated passwords.

There is something very creepy about the way LLMs will cheerfully give you lists of "random" numbers. But they aren't random in frequency, and as my students pointed out, "it's probably from some webpage about how to generate random numbers."

But even then, why is the frequency so unnaturally regular? Is that an artifact from mixing lists of real random numbers together?

The LLM is like a little box of computer horrors that we peer into from time to time.

I'm sorry, but the whole interface is just so silly.

You ask for random numbers with sentences and it pretends to give them to you? What are we doooooing?

@futurebird I agree. I think the network models underlying LLMs are very useful: things like AlphaFold, some of the methods for generating candidate compounds, or searching for patterns in huge datasets… but the jump to using LLMs as general intelligence seems silly.