Researching Trustworthy AI at TU Delft & ING. Counterfactual Explanations and Probabilistic ML. Tools: Julia, Quarto

📜 Recent: https://arxiv.org/abs/2312.10648
🌐 Blog: https://www.paltmeyer.com/blog/
📦 Julia: https://www.paltmeyer.com/content/software.html
🏠 Home: https://www.patalt.org/
💻 GitHub: https://github.com/pat-alt

To address this need, CounterfactualExplanations.jl now has support for Trees for Counterfactual Rule Explanations (T-CREx), a novel and performant approach proposed by Tom Bewley and colleagues in their recent #ICML2024 paper: https://proceedings.mlr.press/v235/bewley24a.html

Check out our latest blog post to find out how you can use T-CREx to explain opaque machine learning models in #Julia: https://www.taija.org/blog/posts/counterfactual-rule-explanations/

Counterfactual Metarules for Local and Global Recourse

We introduce **T-CREx**, a novel model-agnostic method for local and global counterfactual explanation (CE), which summarises recourse options for both individuals and groups in the form of gene...

Counterfactual Explanations are typically local in nature: they explain how the features of a single sample or individual need to change to produce a different model prediction. This type of explanation is useful, especially when opaque models are deployed to make decisions that affect individuals, who have a right to an explanation (in the EU).

When we are primarily interested in explaining the general behavior of opaque models, however, local explanations may not be ideal. Instead, we may be more interested in group-level or global explanations.

Something's been cooking this week at [CounterfactualExplanations.jl](https://github.com/JuliaTrustworthyAI/CounterfactualExplanations.jl) ...

One of my favorite papers @ICMLConf this year proposes a new model-agnostic approach for generating global and local counterfactual explanations through surrogate decision trees: https://arxiv.org/abs/2405.18875

Will be shipped with the next release.


Me: Well ... yes, of course, my PhD has practical value.

The practical value:

```julia
"""
    issubrule(rule, otherrule)

Checks if the `rule` hyperrectangle is a subset of the `otherrule` hyperrectangle. $DOC_TCREx
"""
function issubrule(rule, otherrule)
    # `rule` is a subrule if, in every dimension, its interval (lo, hi) lies inside
    # `otherrule`'s interval: the other lower bound is below ours (y[1] <= x[1])
    # and the other upper bound is above ours (x[2] <= y[2]).
    return all(y[1] <= x[1] && x[2] <= y[2] for (x, y) in zip(rule, otherrule))
end
```
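As a quick sanity check, here is a minimal sketch of that subset test in action. It assumes rules are represented as vectors of `(lower, upper)` bound tuples, one per feature; the standalone helper `issub` is hypothetical and mirrors the containment logic above:

```julia
# Sketch: rules as vectors of (lower, upper) bounds, one tuple per feature.
# A rule is a subrule of another if its hyperrectangle is contained in the other's.
issub(rule, otherrule) = all(y[1] <= x[1] && x[2] <= y[2] for (x, y) in zip(rule, otherrule))

inner = [(0.2, 0.4), (0.1, 0.3)]  # small box
outer = [(0.0, 1.0), (0.0, 0.5)]  # enclosing box

issub(inner, outer)  # true: `inner` lies inside `outer` in every dimension
issub(outer, inner)  # false
```

Rules that overlap without containment fail the test in both directions, which is what lets T-CREx prune redundant rules from the surrogate tree.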

Taija, the organization for Trustworthy AI in Julia, has its own website and blog now: https://www.taija.org/

Lots of interesting stuff upcoming, including blog posts from our two Google Summer of Code/Julia Season of Contributions students.

We'll use the blog to share any relevant updates, such as the recent release of a small new package for sampling from model distributions: https://www.taija.org/blog/posts/new-package-energysamplers/

Stay tuned!


The next time someone tells you some tech is "inevitable", please laugh directly in their face. Then tell them that claim has been used as an excuse for exploitation forever, that it's a red flag, and that if they were smart, they'd avoid it, well, like the plague. But we know how well that's going.

https://arxiv.org/abs/2408.08778

@davidthewid @histoftech

Watching the Generative AI Hype Bubble Deflate

Only a few short months ago, Generative AI was sold to us as inevitable by the leadership of AI companies, those who partnered with them, and venture capitalists. As certain elements of the media promoted and amplified these claims, public discourse online buzzed with what each new beta release could be made to do with a few simple prompts. As AI became a viral sensation, every business tried to become an AI business. Some businesses added "AI" to their names to juice their stock prices, and companies talking about "AI" on their earnings calls saw similar increases. While the Generative AI hype bubble is now slowly deflating, its harmful effects will last.


AI models collapse when trained on recursively generated data.

#machinelearning

https://www.nature.com/articles/s41586-024-07566-y

AI models collapse when trained on recursively generated data - Nature

Analysis shows that indiscriminately training generative artificial intelligence on real and generated content, usually done by scraping data from the Internet, can lead to a collapse in the ability of the models to generate diverse high-quality output.

Shocking! Against all odds, it turns out that scale is not all you need. If you spot a confused tech bro, give them a hug!

Good to know I landed at the right airport @ICMLConf 😂