Algorithms are not magic. They are neither good nor evil, but they also aren't neutral. They are just math recipes written by regular ol' dumb human beings, and so they don't absolve us of our responsibilities as human beings. If anything, they amplify them. If you make a bad batch of enchiladas, that's really just a problem for you. If you program a computer to make unlimited batches of bad enchiladas, well now you are assaulting the entire world—the whole enchilada, as it were.
@theropologist More like allgowronghms, amirite‽
@Neat_hot @theropologist I think I have met my stretch goals for pun consumption upon reading this.

@theropologist
I work in an algorithm-heavy industry. The majority of executives do not understand what an algorithm is, let alone how one works. They just like that they get the answers they want without having to do the hard work of figuring out the why and how.

They are ginned-down mathematical answers to human questions. If you ask the program a flawed question, you’ll still get an answer, just not the correct one.
Garbage in, garbage out as they say.
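A toy sketch of "garbage in, garbage out" (entirely made-up numbers): the math happily runs on flawed input and hands back a confident, wrong answer.

```python
# Hypothetical data: response times where -999 is a "missing value" sentinel.
from statistics import mean

response_times = [120, 95, -999, 110, -999, 130]

# Flawed question: "what's the average of this column?" The math doesn't care.
naive_answer = mean(response_times)

# Better question: "what's the average of the valid measurements?"
clean_answer = mean(t for t in response_times if t >= 0)

print(round(naive_answer, 2))  # -257.17, confidently nonsensical
print(clean_answer)            # 113.75, same math, better question
```

Both calls run without complaint; only the question changed.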

@Magooish @theropologist Seconded. Had a boss once who had to ask if accuracy was a good thing.

Most of the time, throughout my career, they really just wanted the ugly truths to be swept under the rug as quickly and efficiently as possible.

@hosford42 @Magooish @theropologist True.

They also don’t want anyone to know how utterly unqualified and unprepared they are to be making the day-to-day decisions of the corporation. It’s unicorn-adjacent to actually find an executive who knows how to lead while interpreting data to help chart good, ethical paths forward, and who asks hard questions about how the data was collected.

Professional ethics are not a thing for most people in the nosebleed seats.

@Magooish @theropologist I'm of the opinion that a truly good leader doesn't have to be able to interpret data. They just need to be good at identifying people who can be trusted with that, and then not only giving those people the responsibility to handle it, but ensuring their voices are heard. (Same goes for any other technical skill important to the organization, but especially data.)
@hosford42 @Magooish @theropologist I won’t disagree, but often no questions get asked about how that data was collected and whether it has gone through enough due diligence. That’s what I mean when I say understanding data. They don’t ever wonder if any of it could be wrong.
Math is great, but it is a simple tool. Leadership and decision-making are hard because they mean you have to be curious about how things happen, about what’s not apparent in the math, and then make hard choices.
@theropologist The thing about all this AI stuff is that the “AI” is just the “algorithms” of a few years ago, souped up, and people/institutions are doing the same bad things with them, like feeding them biased data.
@MisuseCase And most AI algorithms aren't even really algorithmic in the traditional sense. They come at the problem sideways. Instead of following a recipe step by step to make enchiladas, they rifle through a huge pile of garbage, trying to cobble together something that they think is close enough to what they think you think an enchilada is. There are so many more opportunities for introducing bias without even realizing you're doing it.
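A toy contrast (made-up example, not any real system) between a recipe-style algorithm and the "closest thing to what I've seen before" approach:

```python
# Classic algorithm: one explicit, inspectable rule.
def is_even_by_rule(n: int) -> bool:
    return n % 2 == 0

# "Learned" stand-in: nearest neighbor over a handful of labeled examples.
# No rule anywhere, just "what's the closest thing I've seen before?"
examples = [(2, True), (7, False), (10, True), (13, False)]

def is_even_by_examples(n: int) -> bool:
    nearest_value, nearest_label = min(examples, key=lambda ex: abs(ex[0] - n))
    return nearest_label

print(is_even_by_rule(9))       # False, and you can point at the exact step that says why
print(is_even_by_examples(9))   # True: the closest example it has seen (10) was even
```

The point isn't that nearest neighbor is how modern models work internally; it's that the answer comes from the shape of the example pile, so every quirk of that pile, including ones nobody noticed, leaks into the output.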

@theropologist

Maybe, as part of STEM education, people learning how to make algorithms should take specialized classes on ethics, human interaction, and the complexities of the human condition.

Let's call it Human Science, and we can add it to the STEM curriculum. Maybe the classes could focus on works of literature or philosophy, and challenge students to take the perspectives of other people and understand them in a way they might not otherwise ever do.

@theropologist

We just need a prominent techbro billionaire to champion this and people will love it.

I actually think Bill Gates would be perfect for this, he already "well akshually"-ied the entire medical science industry so why not the humanities in education?

He already tried to ruin education once, with that weird teacher-rating system that nobody except him actually wanted, but that everybody pretended to want because he funded everything.

@theropologist Then there's machine learning, in which algorithms write themselves and become too complex to be fully understood by human beings. In cases where an ML model just learns by trial and error, trying to score as many points as possible, that's not a big problem: either the artificial neural network manages to keep the robot riding the bicycle, or it crashes. Either the model can play a game or it cannot.
The problem arises when the model is trained to mimic humans and learn from us.
@theropologist It isn't enough to remove all the obvious biases, all the overt racism, sexism, homophobia, fascism, etc., from the training data. Any sufficiently complex ML system will pick up the hidden biases as well: it will read between the lines and reproduce things that aren't explicit in the data but implied.
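A toy illustration of "implied, not explicit" (hypothetical four-sentence corpus; a raw co-occurrence count stands in for what embedding-style models absorb at scale): no sentence states a stereotype outright, but the pronoun pattern implies one, and the counts pick it up anyway.

```python
from collections import Counter

# Hypothetical training text: nothing here is overtly sexist...
corpus = [
    "the nurse said she would check the chart",
    "the nurse finished her shift early",
    "the engineer said he fixed the build",
    "the engineer packed up his laptop",
]

PRONOUNS = {"she", "her", "he", "his"}

def pronoun_associations(word: str) -> Counter:
    """Count which gendered pronouns co-occur with `word` in the same sentence."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t in PRONOUNS)
    return counts

# ...but the implied association is right there in the statistics.
print(pronoun_associations("nurse"))     # only 'she'/'her'
print(pronoun_associations("engineer"))  # only 'he'/'his'
```

Scrubbing explicit slurs from a corpus like this would change nothing: the bias lives in the correlations, not in any single sentence you could delete.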