@sergedroz I can offer my personal experience as an example.
My M.Sc. thesis was on AI applied to intrusion detection, and I wrote several papers on the topic too. Back in the day (we’re talking 2009-2010), AI was still trying to emerge from its “expert systems” winter through academia. There was a ferment of ideas and a lot of genuinely good intentions to build models that helped folks solve real problems. Nobody who, like me, worked on AI applied to computer security (or speech recognition, sentiment analysis, climate forecasting models, etc.) gave even a remote thought to the negative ethical implications our work might have in some distant future.
Fast-forward 7-8 years, and I started working on building the first large-scale models to detect things like partner fraud or data anomalies. I managed to deploy those models in products used by our agents, after sitting next to them for many days in their day-to-day work to understand their pain points. I made it very clear that those tools were supposed to augment their work rather than replace it, and that they were expected to spend more time visiting partners and talking to them than checking duplicate pictures on search engines or the correctness of addresses. It was a success in terms of productivity, the agents loved it, and it didn’t result in any layoffs. At that point, the ethics of AI felt to me like a mostly solved problem. I believed that if AI was built in good faith, to augment rather than replace human skills, and by listening closely to the needs of its users instead of coming in with solutionism from above, then it was possible to build fair models.
Fast-forward another couple of years, and I wrote a book about computer vision models and how to train them on cheap devices, including Raspberry Pis, using off-the-shelf cameras. I showed how to train models for motion detection, tracking and face recognition even on a low budget. The book sold quite well, and a few weeks later I got an interview request from a researcher in the field of ethical AI who was interviewing technologists to gauge their awareness of the ethical impact of their work. I talked excitedly to him about the use-cases of my AI platform, how it could run even on an RPi, how I had trimmed every CPU cycle out of the convolutional layers, and how I had built some general-purpose APIs around TensorFlow, but he wasn’t much interested in that. Instead, he asked me how I would react if my software were used for racial profiling, mass surveillance or the processing of unauthorized police footage, and how I would prevent that from happening. To me, those questions came completely out of the blue at the time (it was shortly before Timnit Gebru was fired from Google, and before the whole topic of conflicts between product and ethics teams in AI surfaced). I felt like a manufacturer of calculators suddenly being told that his devices could also be used to calculate the trajectories of ballistic missiles. I was like “but I only worked on this as a hobby project, it runs on my RPis to turn appliances on and off depending on who walks into the room and give customized greetings… how could it ever hurt anyone?”
Heck, were we naive.
Of course I would give that interviewer very different answers if he were to interview me now.
All of this to make a simple point: I consider myself quite politically and socially active compared to the average engineer, I could imagine plenty of ways AI could go wrong and deliberately tried to avoid those pitfalls when training and deploying models, and still there were so many things that could go wrong that completely slipped my mind. And btw, I didn’t even contribute that much to the field - sure, I did some cool projects and deployed them in the real world, but it’s not like I had a crucial impact on transformer architectures or convolutional neural networks.
Now imagine engineers who are less politically and socially active than me, and who are probably smarter than me and made big contributions to the models deployed by the likes of OpenAI and Google. Many of them are still in the same state of naivety I was in a decade or so ago. Many are still laser-focused on the exciting geek side of their job, on building things at the edge of human capability, and fail to even see how the things they build can be misused - or maybe they do see it, but feel those are acceptable prices to pay for the progress of humanity, or maybe they’re more cynical and think they can make enough money to jump ship and retire before AI comes for their own jobs. And I can tell you there are also many of us who feel genuinely betrayed and cheated, some of us who really wanted to build robots that helped humanity and instead ended up with chatbots seized by MBAs who just want to get rid of all white-collar jobs and package AI into their existing streams of recurring revenue.
But bridges have to be built if you want to have impact. At the very least, any company working on AI should be compelled to have an AI ethics department. It should be their job to create those bridges between technologists and specialists in the sectors where AI is going to be deployed, so that all the possible pitfalls of those models come to light. It should be their job to ensure that any technologist who works on AI gets regular training on ethics, just like anyone committing code to production gets regular training on security. And of course regulation must exist to enforce that great power comes with great responsibility, and that businesses which develop large-scale models used for anything that affects large groups of people must be open to external scrutiny - opening up everything (model weights, training data, training code and unlimited API access) to external specialists from various fields to ensure that what those models return is fair and accurate.
@soulsource