I've been listening to and reading about AI, and it strikes me how most of the real questions around artificial intelligence are far from new, and not technological at all.

So I laughed really hard at this story of a dog who, in 1908, was rewarded with steak for saving a child who'd fallen into the Seine. Naturally, he started pushing kids in so he could rescue them and earn more steak. Much like a bot optimizing for whatever metric it's given (the fancy term is the "principal-agent problem"). https://www.nytimes.com/1908/02/02/archives/dog-a-fake-hero-pushes-children-into-the-seine-to-rescue-them-and.html

@steveolson Which is not only an issue with AI or dogs. It is a general issue with any metric that gets used as an incentive.

Whether that's the mapping from rescued kids to steaks, or from solved issues to salary...
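The gap between the metric and what we actually care about can be shown in a few lines. A toy sketch with made-up numbers (the policies, payoffs, and the factor of 2 are all invented for illustration): an agent paid per *rescue*, rather than per *child kept safe*, rationally picks the pushing strategy.

```python
def proxy_reward(rescues):
    """Steaks earned: one per rescue, however the rescue came about."""
    return rescues

def true_value(kids_endangered, rescues):
    """What we actually wanted: kids safe, ideally never in the river."""
    return rescues - 2 * kids_endangered

# Two policies the dog can choose between (numbers are illustrative)
policies = {
    "wait": {"kids_endangered": 1, "rescues": 1},  # rescue accidents only
    "push": {"kids_endangered": 5, "rescues": 5},  # manufacture rescues
}

best_for_steaks = max(policies, key=lambda p: proxy_reward(policies[p]["rescues"]))
best_for_kids = max(policies, key=lambda p: true_value(**policies[p]))

print(best_for_steaks)  # "push" wins on the metric
print(best_for_kids)    # "wait" wins on what we meant
```

Same structure whether the optimizer is a dog, a support engineer paid per ticket, or a model maximizing a reward function: if the proxy diverges from the goal, the proxy wins.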

Terry Pratchett described that nicely in one of his books: the city paid a dollar per rat tail to get a grip on the rat plague. Some time later they were taxing rat farms...

@heiglandreas heh. yeah, like I said, that's what kind of dawned on me... that the new-fangled AI questions really are just age-old problems we have to face in a new context, albeit one that could potentially kill us all in an attempt to meet its parameters.