Anyone noticed that when folks get even a little specific about the supposed benefits of "AI", the call-outs are frequently "curing disease" (sometimes specifically cancer and/or Alzheimer's) and "addressing climate change"? I'm hearing it enough to wonder where these talking points come from.

>>

It's part and parcel of that annoying trope of having to pay lip-service to the supposed up-sides of "AI" even in reporting about the actual (documented, really happening) harms & risks of further harm.

>>

I think the case for "AI" helping with medical research might come from the work on protein folding (crucially, a supervised machine-learning project with extremely high-quality data behind it), but I've never even seen the case re climate change expanded.

>>

Maybe it's just that these are common (reasonable) fears: that we, our loved ones, and our environment face disease & death?

@emilymbender I'm a fan of your work, and I agree with most of the ethical criticisms that the social sciences are offering, but I think you might be underestimating the uses that we're already beginning to see in public administration. For organizing vast amounts of text and documents (e.g. entity recognition and so on), LLM-powered applications are showing superb potential. This might, however, be a very Scandinavian view of mine.
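(To make the public-administration use case above concrete: a toy sketch of entity extraction over administrative text. This is a hypothetical illustration, not from the thread; it uses plain regular expressions rather than an LLM or trained NER model, purely to show the shape of the task. All names and patterns here are made up.)

```python
import re

# Toy entity extractor for administrative documents (illustration only;
# a real system would use a trained NER model, not hand-written regexes).
PATTERNS = {
    "case_number": re.compile(r"\b\d{4}/\d{3,6}\b"),   # e.g. 2023/00417
    "date": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),      # ISO dates
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def extract_entities(text: str) -> dict[str, list[str]]:
    """Return every pattern match found in `text`, keyed by entity type."""
    return {label: pat.findall(text) for label, pat in PATTERNS.items()}

doc = "Case 2023/00417 was filed 2023-05-11; contact clerk@example.gov."
print(extract_entities(doc))
```

The point of the sketch is only the task shape: turning unstructured case files into structured, searchable fields, which is where both the claimed potential and the Robodebt-style risks live.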
@adoranten @emilymbender
Using AI in public administration is also one of the more obvious opportunities for harm...
@sabik @adoranten @emilymbender 👍 #AIQ=0 AI is just a tool. When it comes to value judgements about the outputs from AI (i.e. a computer program), a shovel would be just as smart as the AI program. The only difference between the two is that the shovel can't do addition and subtraction. But then the AI can't dig a hole. #Robodebt