680: A Lot of Holes in That Cheese
https://atp.fm/680

Our home screens, desktop audio setups, document-saving models, what we hope to “experience” next week, and where not to aim snow.


@atpfm Re: the assertion that AIs are experts.

Fundamentally, the issue is that AI needs data, and good-quality data at that. The current crop of AI has been trained on essentially the entire internet, along with some additional bespoke sources. These models are text predictors, not fact predictors.

Pre-AI, a large share of the medical and financial advice online was simply wrong. Medical sources said everything was cancer; financial sources gave incredibly bad advice on dealing with debt.

1/

@atpfm ChatGPT doesn't have a way to determine what is true; it has a way to determine what is most likely to come next in the sequence. And given the corpus of non-programming data, a lot of what was on the internet was bad information (that's why experts are useful).
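The "most likely to come next" point can be made concrete with a toy bigram model. This is a deliberately tiny sketch (the corpus sentences are hypothetical, and real LLMs are vastly more sophisticated), but the failure mode is the same: the model reproduces whatever claim appears most often, true or not.

```python
from collections import Counter, defaultdict

# Toy corpus: the model learns what people *said*, not what is true.
# (These sentences are hypothetical, for illustration only.)
corpus = (
    "lemon juice cures colds . "
    "lemon juice cures everything . "
    "lemon juice tastes sour . "
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev):
    """Return the most frequent continuation seen in the corpus."""
    return following[prev].most_common(1)[0][0]

print(predict("juice"))  # prints "cures" -- the popular claim wins, true or not
```

Nothing in the model distinguishes the repeated-but-wrong claim from a correct one; frequency in the training data is all it has.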

For programming, we had large corpora of good-quality data (GitHub et al.), plus the advantage of being able to run experiments in silico (i.e., does it compile and work?).
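The "does it compile and work" feedback loop can be sketched in a few lines: candidate code can be executed against tests and graded instantly. (The candidates and tests below are hypothetical; this is just the shape of the signal, not any lab's actual training pipeline.)

```python
# Minimal sketch of "in silico" feedback: a candidate program is
# executed against known input/output pairs and graded immediately.

candidates = {
    "good": "def add(a, b):\n    return a + b",
    "bad":  "def add(a, b):\n    return a - b",
}

def passes_tests(source):
    """Run the candidate and check it against known input/output pairs."""
    namespace = {}
    try:
        exec(source, namespace)                    # "does it compile?"
        add = namespace["add"]
        return add(2, 3) == 5 and add(-1, 1) == 0  # "does it work?"
    except Exception:
        return False

results = {name: passes_tests(src) for name, src in candidates.items()}
print(results)  # instant, unambiguous pass/fail for every candidate
```

That instant, unambiguous pass/fail signal is exactly what medicine and finance lack: the equivalent "test" there takes years to run and hurts real people when it fails.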

We can't do experiments like that in finance or medicine 2/

@atpfm ... because the time horizons for finding out it doesn't work are much longer & the negative outcomes when you are wrong are much more severe than "my program didn't compile".

So we don't have a good way to train AI to get better, faster, in fields with the same dynamics as medicine and finance without causing a great deal of harm along the way.

Medicine & finance are so heavily regulated precisely because bad advice can sound good to the untrained (see Trump suggesting that injecting bleach might fight COVID) 3/

@atpfm Take a simpler example. Computers have been superhuman at chess for many, many years now.

If you attempt to use the current generation of LLMs to play a game of chess against each other (or indeed against a player), they will spit out illegal moves, create pieces out of thin air, & generally just vomit hallucinations in relatively short order.

Again, this is a solved problem for computers, but the LLMs perform terribly. Now imagine this but in a field you know nothing about.

4/4
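The chess contrast is easy to demonstrate: legality is mechanically checkable by an ordinary program. A minimal sketch (using simple UCI-style move strings, and only White's opening position for brevity) enumerates the 20 legal first moves and rejects a hallucinated one outright:

```python
# Sketch of why chess is a solved problem for ordinary programs:
# move legality is mechanically checkable. We enumerate White's 20
# legal first moves (16 pawn pushes + 4 knight jumps) and test against them.

files = "abcdefgh"

legal_first_moves = set()
for f in files:                        # pawns: one- or two-square pushes
    legal_first_moves.add(f + "2" + f + "3")
    legal_first_moves.add(f + "2" + f + "4")
for start, targets in (("b1", ("a3", "c3")), ("g1", ("f3", "h3"))):
    for t in targets:                  # knights jump over the pawns
        legal_first_moves.add(start + t)

def is_legal_first_move(move):
    return move in legal_first_moves

print(len(legal_first_moves))          # 20 legal options
print(is_legal_first_move("e2e4"))     # True: a real move
print(is_legal_first_move("e2e5"))     # False: a hallucinated 3-square push
```

A chess engine does this check in microseconds for every position; an LLM generating moves as text has no such guardrail unless one is bolted on, which is why it confidently emits moves no board allows.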