Listening to very smart people talk about #GPT4 I'm reminded of the joke about a checkers-playing dog.

A guy has a dog that plays checkers. "My goodness," everyone says, "that's amazing. What a brilliant dog!"

"Not really," he replies, "I beat him four games out of five."

That's GPT4. Its capacities are amazing and completely unexpected.

But it's also so limited. You shouldn't back the dog in a checkers tournament, and you shouldn't use an LLM as a medical assistant or in many other ways.

I feel like this joke captures a great deal of the dialectic that I see here and in the broader discussion around LLMs and AI right now.

I spend most of my time writing about how baffled I am to watch Microsoft and Google betting their futures—and to a degree, ours—on this dog's performance in the World Checkers Championship.

Other colleagues are legitimately amazed that the dog can play checkers at all, and want to understand how well it plays, and how it manages to do it in the first place.

(Here the metaphor really strains, but my other big concern is what happens to the game of checkers, to which everyone on the planet has been addicted by design, when all of a sudden everyone has a dozen of these dogs of their own, and checkers-playing has already been monetized by 20 years of surveillance capitalism, and then you throw in a handful of bad actors who want to see everything burn.)
@ct_bergstrom What if we took the deep data algorithms that already drive most large-scale software operations and made them chat bots?