Had a lot of fun with my stats students today. I gave them two data sets: one from a random number generator, and one I made up that wasn't random but was designed to look random. They were able to figure out which one was fake.

Then we had ChatGPT generate the same kind of data set (a list of 100 "random" numbers from 1 to 6), and it had the same problems as my fake set, just in a different way.

We also talked about the study on AI-generated passwords.

There is something very creepy about the way LLMs so cheerfully give you lists of "random" numbers. But the frequencies aren't random, and as my students pointed out, "it's probably from some webpage about how to generate random numbers."

But even then, why is the frequency so unnaturally regular? Is that an artifact of mixing lists of real random numbers together?
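One way to put a number on that regularity is a chi-square goodness-of-fit test: a genuinely random d6 sequence's face counts fluctuate, while a too-even fake scores suspiciously close to zero. A minimal sketch in Python (the "fake" sequence below is my own illustrative construction, not the actual data set from class):

```python
import random
from collections import Counter

def chi_square_stat(rolls, sides=6):
    """Chi-square goodness-of-fit statistic against a fair die."""
    expected = len(rolls) / sides
    counts = Counter(rolls)
    return sum((counts.get(face, 0) - expected) ** 2 / expected
               for face in range(1, sides + 1))

# Genuinely random rolls: face counts fluctuate, so the statistic
# bounces around its expected value (about 5, for 5 degrees of freedom).
random.seed(42)
real = [random.randint(1, 6) for _ in range(100)]

# A "too perfect" fake: nearly equal counts for every face, like the
# suspiciously regular frequencies an overly tidy generator produces.
fake = [face for face in range(1, 7) for _ in range(17)][:100]

print(chi_square_stat(real))  # fluctuates; occasionally large
print(chi_square_stat(fake))  # 0.2, far below typical -- too uniform
```

The tell here runs in the unintuitive direction: a very *low* chi-square statistic, not a high one, is what exposes data that was made to look random, because real randomness is lumpier than people (or, apparently, LLMs) expect.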

The LLM is like a little box of computer horrors that we peer into from time to time.

I'm sorry, but the whole interface is just so silly.

You ask for random numbers with sentences and it pretends to give them to you? What are we doooooing?

@futurebird Glorified statistical models? It's just #marketing.

Most of the "godfathers of #AI" are cashing in, but also think that the idea of LLMs leading to #AGI is laughable.

But let us, Scooby & The Gang, rip off the monster's mask... (1/2)

It was Old Man Surveillance Capitalism all along!? Who knew...?

We all did. Come on, let's not kid ourselves.

The Vectors of Intent which Drive the Pursuit of Large Language Models

I am interested in modeling the context within which large language models, or so-called artificial intelligence, are being developed.

John’s Substack