I just stumbled upon a good way to answer the question "are you afraid of AI?" from folks who are less technically inclined.

Ask them if they consider the daily weather report “artificial intelligence” and if they fear it.

Same thing: a bunch of real-world inputs that yield a prediction that may be right or wrong. There may be a tornado - or it may just be cloudy.

Of course, if we place too much reliance on those predictions, things can go haywire.

@SingingNala @chockenberry And some environments are easier to predict than others. Where I live in coastal Texas, for example, I assume no prediction beyond about 2-3 days is worth anything, as the weather around here just changes too fast. To truly predict such a thing, I think you'd need a model on the scale of the entire globe, or at least this hemisphere; otherwise you can't track fronts and other developments outside the area that sweep in and change things anyway. To say nothing of changes caused by us.
@SingingNala @chockenberry Those predictions told us it was a clear and sunny day with barely any clouds *while* we were waiting in line at the county fair and it was raining heavily.
@SingingNala @chockenberry Also a good representation of systemic bias. Would you expect a model that was carefully cultivated and trained on decades of data from the U.S. Midwest to give accurate, or even close, figures in the tropics? In Scandinavia? Of course not! Why? Because the training data is completely different from the reality it's being used on! Or even if a model were accurate for the current region, what about climate change? We don't have enough data on the actual effects of climate change on weather systems, because the climate keeps slowly changing and we only really recognized it in the past decade or so. A model that assumes the climate is stable and unchanging will obviously diverge farther and farther from reality. And the only way to remedy that is to figure out a predictor that takes both the causes and effects of climate change into account and is far more precise than the data we currently have to feed it.
@SingingNala @chockenberry A more contemporary AI example of bias, unintentional or not, creeping into the system. I heard once of a camera that used AI in its autofocus and image enhancement features in the mid-2010s. It was developed in Silicon Valley, by Silicon Valley engineers, many of whom were white men. As it approached public release, a Black person ended up in front of it, and it couldn't figure them out. It had assumed that the general skin tone of *humans* is all the same, so it had difficulty handling this case. The shapes were correct, it looked like a face, but without the skin-tone correlation in its dataset it had a low confidence rating and would likely either jump erratically or just give up.