#ModelExplainability, #DataLineage, and editing the #TrainingData set are topics that will be in the news next year…assuming we make it.
https://social.lol/@rom/112543674749743641
Oh 2 ten (@[email protected])

Here we go - copyright in the US, privacy in the EU. What is next? https://noyb.eu/en/chatgpt-provides-false-information-about-people-and-openai-cant-correct-it


The reason the billionaires, governments, and corporations are trying to make Generative “#AI” fetch happen is because #ModelExplainability doesn’t exist.

They don’t want to foster transparency, accountability, or individual agency.

They want the opposite.

That is why #SparkleSickness ✨is everywhere.

What Is Black Box AI? | Built In

Black box AI is a term used to describe artificial intelligence systems whose internal workings and decision-making processes are not transparent.
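That definition can be made concrete with a toy example. In even a tiny neural network, the entire “decision process” is a pile of learned numbers with no direct mapping to human-readable reasons — which is the whole black-box problem. A minimal sketch in plain NumPy (every name here is illustrative, not from any real system):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network: its entire "decision process" is these matrices.
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights

def predict(x):
    # Forward pass: the only "explanation" the model offers is this arithmetic.
    hidden = np.maximum(0.0, x @ W1)                 # ReLU activation
    return float(1 / (1 + np.exp(-(hidden @ W2))))   # sigmoid score in (0, 1)

score = predict(np.array([1.0, 0.5, -0.3, 2.0]))
print(f"score = {score:.3f}")
# The score is perfectly reproducible, but nothing in W1 or W2 says *why*
# this input got this score -- that gap is what "black box" means.
```

Scale the same structure up by many orders of magnitude and you have a modern GenAI model: fully deterministic arithmetic, still opaque to humans.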

@hosford42 @marjolica @SmallOther @russellmcormond @argv_minus_one @Uair
From what I gather, #ModelExplainability is both extremely valuable and nowhere close to existing yet.

@mjausson @ai6yr @alienghic

The PhDs in machine learning are still decades away from #ModelExplainability when it comes to GenAI.

You’d think a lack of provenance for decision-making in regulated ecosystems would be a disincentive, but, no, they will FAFO, and FAFO hard.
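For contrast, here is roughly what minimal decision provenance could look like in a regulated setting — a hedged sketch, with every field name invented for illustration. Note what it buys and what it doesn’t: it records *which* model saw *what* input, but still can’t say *why* the score came out the way it did.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_decision(model_version, features, score):
    """Log enough metadata to reconstruct which model scored what input.

    This is provenance, not explainability: it makes the decision
    auditable, but cannot explain the model's reasoning.
    """
    return {
        "model_version": model_version,
        # Canonicalize the input so identical features always hash the same.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "score": score,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

entry = record_decision("credit-model-v3", {"income": 52000, "age": 41}, 0.72)
print(entry["input_hash"][:12], entry["score"])
```

Regulators can ask for a log like this today; the “why” column is the part nobody can fill in yet.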

@jgordon

“we still don’t understand why deep learning works, in any way that would let us predict which capabilities will emerge at which scale”

This part about the lack of #ModelExplainability gets glossed over on the #AIhype train.

@neurobashing @tsturm
Is the math well understood, though?

From what I understand, the researchers are not 100% certain how these models work or why they behave the way they do.

We’re not necessarily within spitting distance of #ModelExplainability, are we?

I could be wrong? 🤷🏻‍♂️

@AlexJimenez I suspect that “#modelexplainability” relative to #consumerharm will be a thing.