New article by Stephanie Pappas, featuring quotes from me and others about human values and biases in GPTs and "A.I." tools, as well as technology in general, and with reference to a recounting from Shannon Vallor.

One thing I'll clarify is that I didn't so much mean to comment on whether GPTs "understand," but more that their architectures and operations don't CARE whether the things they generate are true or not. They're not truth machines, they're confirmation bias engines.

And I think that latter part came through clearly, but I just wanted to be clear that I'm not making any hard claims about the possibilities of knowledge and understanding there.

Anyway, I think this was a pretty great piece (in which I really didn't expect to see myself so prominently featured), with lots of nuance. It's very timely and will hopefully help a lot of people understand what's at stake in these systems.
https://www.livescience.com/technology/artificial-intelligence/ais-unsettling-rollout-is-exposing-its-flaws-how-concerned-should-we-be

AI's 'unsettling' rollout is exposing its flaws. How concerned should we be?

AI isn't close to becoming sentient, but it could be disruptive anyway.

Live Science
@Wolven Nice: confirmation bias machines.