This profile of me in *The New Yorker* came out really well, if I do say so myself:

https://www.newyorker.com/culture/the-new-yorker-interview/cory-doctorow-wants-you-to-know-what-computers-can-and-cant-do

@pluralistic AI algorithms do work, though... This sounds more like a bad case where an AI is being trained on the wrong thing, and that's not the AI's fault if it got bad training data
@lovpilowu @pluralistic It's not only about training data, it's also about the way the system is designed. We are not yet able to fully understand how modern "AI" makes decisions. If you want proof that such algorithms do *not* always work, look up "adversarial attacks" (e.g. https://arxiv.org/pdf/1909.08072.pdf, page 6). This is not a problem for a "photo album sorter", but for a high-risk decision-making task it can be very dangerous to leave such an algorithm in charge.
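To make the adversarial-attack point concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the standard attacks covered in the survey linked above. The classifier, its weights, and the input values are all made up for illustration; the point is only that a small, targeted perturbation can flip a confident prediction.

```python
import numpy as np

# Toy FGSM (Fast Gradient Sign Method) sketch against a hypothetical
# binary logistic-regression classifier. Weights and inputs are
# invented for illustration, not taken from any real model.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights of a binary classifier.
w = np.array([0.5, -0.3, 0.8, 0.1])
b = 0.0

def predict(x):
    # Probability that x belongs to class 1.
    return sigmoid(w @ x + b)

x = np.array([0.2, 0.1, 0.3, 0.4])  # an input the model classifies correctly
y = 1.0                             # its true label

# Gradient of the cross-entropy loss with respect to the *input*:
# for logistic regression, dL/dx = (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# FGSM step: nudge every feature by epsilon in the direction that
# increases the loss the most.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # above 0.5: classified as class 1
print(predict(x_adv))  # below 0.5: the perturbed input flips the decision
```

With real image classifiers the perturbation can be small enough to be invisible to a human, which is exactly why this matters for high-risk decision-making.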

@gsoc @pluralistic Yeah, but I don't think that's a problem with AI itself; I think it's a problem with people using it in areas where it just shouldn't be used yet

But I don't want to bash the technology, or its development, just because people are doing dumb things with it

@lovpilowu @pluralistic Absolutely! It's not the technology that is a threat, it's how we decide to use it and, most importantly, to oversee it, as with many other scientific discoveries. I put "AI" in quotes because it's a misleading name (AI algorithms are not intelligent), but that's it. We need to keep in mind that AI makes mistakes humans would not make, so we need to draw a clear line between applications where it's OK to use it, where it isn't, and where it *must* require human oversight.