it’s unfortunate that so many of the voices concerned about the adoption of AI have been so singularly focussed on “hype” as the main problem and pushing back against “hype” as a core strategy, because that hasn’t helped us deal with the very real, and in many cases very destructive, transformative effects the technology has actually been having
the negative transformative effects that AI is currently having stem as much from the things these systems can do well as they do from the things that they can’t, and focussing exclusively on the “can’t” has, I think, been counter-productive to anticipating threats, countering them or adapting to them
unfortunately, one of the things current AI systems can do extremely well is respond at scale. That already inherently limits the efficacy of a strategy focussed primarily on dissuading people from using these systems, because comparatively few actors can have a very large impact
@UlrikeHahn thank you for this. It puts into words really well something I’ve been trying to find the words for

@UlrikeHahn Hype is not only exaggerating what AI can do, but also creating FOMO and thus encouraging premature adoption. And that in turn blocks our collective capacity to develop judgement about what AI is good and bad for.

So we should really not dissuade people from using AI, but require them to go slowly and collect feedback before intensifying AI usage. And then consider everyone who goes fast as a rogue actor rather than a pioneer.

@khinsen I think, in general, the lack of any meaningful distance between testing and actual use in the wild is a good part of why we are where we are right now.

The nature (scale) of the systems in question systematically moved AI research from academia to industry, which effectively eliminated a step between development and real-world deployment that many other technological/scientific ‘products’ have to pass through. And the fact that we grant commercial interests in the technology space the ability to just roll out stuff at scale and see how it goes (move fast and break things), in ways that we would never tolerate in more traditional sectors, just compounds that structural feature

@UlrikeHahn It's not even just "roll out at scale and see how it goes". Even when it goes manifestly wrong (e.g. Grok generating nude pictures, ChatGPT recommending suicide), nothing serious is done to stop it. I suspect that this fuels the inevitability line of thinking as well.
@khinsen that’s an important point!

@khinsen I think also that your point is really useful with respect to highlighting the difference between someone testing something and reporting on that test versus someone encouraging actual widespread adoption.

My impression is that there has been a tendency to equate the former with the latter, particularly if the report isn't overwhelmingly negative, and that's just not helpful for understanding how these systems might or might not end up being used.

@UlrikeHahn Right, we have reached a point of polarization of the discourse where effective communication is no longer possible.