Man I really feel bad for all of the people who’ve been negatively affected by AI rollouts. What it’s done just for deaf people trying to watch shows is horrible. I’ve been watching a lot of stuff on Netflix, Hulu, Max, etc., with the subtitles on and NONE of it matches the actual conversations occurring on screen. So deaf people have gone from having accurate transcriptions to AI-hallucinated transcriptions where as much as 60% of the words are incorrect (yes, I did the math on dubbed versions of Netflix Gundam SEED - not the whole show, just random samplings that I averaged - I know, not perfect, but I don’t have that sort of time). The point being that we went from a working system to a completely broken one. Testing this sort of thing in a way where the negative results of the testing disproportionately harm disabled people is just unbelievable. That’s not even mentioning the more directly horrific impacts of AI rollouts, like predictive policing that’s entirely based on biased crime data, fraud monitoring in banks that disproportionately affects people who spend less money, and now it looks like increased AI use in benefit disbursement.
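For anyone curious what that back-of-the-envelope math looks like: comparing subtitle text against what’s actually said in a handful of sampled clips and averaging is basically computing a word error rate (WER). Here’s a minimal sketch - the sample lines below are made up for illustration, not actual dialogue from the show:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance (substitutions + insertions + deletions)
    divided by the number of words in the reference transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming (Levenshtein) table, but over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical samples: (what was actually said, what the subtitles showed)
samples = [
    ("we have to fall back to the ship", "we have to call back to the shop"),
    ("the mobile suit is ready", "the noble suit is already"),
]
average_wer = sum(word_error_rate(r, h) for r, h in samples) / len(samples)
print(f"average WER across samples: {average_wer:.1%}")
```

A higher sample count obviously gives a better estimate, but even a few clips are enough to tell "occasional typo" apart from "half the words are wrong."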
About a week ago I dealt with some of this myself. I had to call three times and spend 3 hours on the phone in total - all because my bank’s automated AI fraud system flagged my purchase of a laptop as fraud, and the fraud verification link they sent me had a broken HSTS implementation so I couldn’t even open it lol.
I personally have a small AI company that I’ve run for almost two years. We make software that monitors warehouses for workers who are using poor posture to lift heavy items. We made it in an effort to decrease workplace injuries among warehouse workers. From the beginning we also understood this sort of software could be used to terrorize employees, that companies could impose disincentives on employees who triggered alerts, and that it could just generally be shitty for those working. Because of all that, we did everything we could to ensure it couldn’t and wouldn’t be used in that fashion. We even offered a discount to employers that provide a positive incentive program instead of a disincentive program. We used what control we did have to ensure the moral use of our product to whatever extent we could. Too many AI companies are in a race to the bottom, and the potential effects on society are incalculable. I’m all for AI, and it can do some amazing things, but it’s ultimately our responsibility to roll it out in a way that does the least harm. Guardrails on LLMs while everything else is an absolute free-for-all is not enough.