it’s unfortunate that so many of the voices concerned about the adoption of AI have been so singularly focussed on “hype” as the main problem, and on pushing back against “hype” as a core strategy, because that hasn’t helped us deal with the very real, and in many cases very destructive, transformative effects the technology has actually been having
the negative transformative effects that AI is currently having stem as much from the things these systems can do well as from the things they can’t, and focussing exclusively on the “can’t” has, I think, been counter-productive to anticipating threats, countering them, or adapting to them
unfortunately, one of the things current AI systems can do extremely well is increase the scale at which a single actor can respond. That already inherently limits the efficacy of a strategy focussed primarily on dissuading people from using these systems, because comparatively few actors can have a very large impact

@UlrikeHahn Hype is not only exaggerating what AI can do, but also creating FOMO and thus encouraging premature adoption. And that in turn blocks our collective capacity to develop judgement about what AI is good and bad for.

So we should really not dissuade people from using AI, but require them to go slowly and collect feedback before intensifying their AI usage. And then consider everyone who goes fast a rogue actor rather than a pioneer.

@khinsen I also think your point is really useful for highlighting the difference between someone testing something and reporting on that test, versus someone encouraging actual widespread adoption.

My impression is that there has been a tendency to equate the former with the latter, particularly if the report isn't overwhelmingly negative, and that's just not helpful for understanding how these systems might or might not end up being used.

@UlrikeHahn Right, we have reached a point of polarization in the discourse where effective communication is no longer possible.