📜 Recent: https://arxiv.org/abs/2312.10648
🌐 Blog: https://www.paltmeyer.com/blog/
📦 Julia: https://www.paltmeyer.com/content/software.html
| Home | https://www.patalt.org/ |
| github | https://github.com/pat-alt |

To address this need, CounterfactualExplanations.jl now supports Trees for Counterfactual Rule Explanations (T-CREx), a novel and highly performant approach proposed by Tom Bewley and colleagues in their recent #ICML2024 paper: https://proceedings.mlr.press/v235/bewley24a.html
Check out our latest blog post to find out how you can use T-CREx to explain opaque machine learning models in #Julia: https://www.taija.org/blog/posts/counterfactual-rule-explanations/
Something's been cooking this week at [CounterfactualExplanations.jl](https://github.com/JuliaTrustworthyAI/CounterfactualExplanations.jl) ...
One of my favorite papers @ICMLConf this year proposes a new model-agnostic approach for generating global and local counterfactual explanations through surrogate decision trees: https://arxiv.org/abs/2405.18875
It will ship with the next release.
Me: Well ... yes, of course, my PhD has practical value.
The practical value:
```julia
"""
    issubrule(rule, otherrule)

Checks if the `rule` hyperrectangle is a subset of the `otherrule` hyperrectangle. $DOC_TCREx
"""
function issubrule(rule, otherrule)
    # A hyperrectangle is a subset of another if, along every dimension,
    # its interval is contained in the other's: the lower bound is no
    # smaller and the upper bound is no larger.
    return all(y[1] <= x[1] && x[2] <= y[2] for (x, y) in zip(rule, otherrule))
end
```
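As a quick sanity check, here is how the predicate behaves on two 2-D hyperrectangles (a standalone sketch: the `$DOC_TCREx` docstring interpolation is dropped so the snippet runs on its own):

```julia
# Standalone version of the predicate above, without the docstring interpolation.
issubrule(rule, otherrule) =
    all(y[1] <= x[1] && x[2] <= y[2] for (x, y) in zip(rule, otherrule))

# Each rule is a vector of (lower, upper) bounds, one per feature dimension.
inner = [(0.2, 0.4), (0.1, 0.3)]   # small box
outer = [(0.0, 1.0), (0.0, 0.5)]   # larger box that contains `inner`

println(issubrule(inner, outer))  # true: `inner` lies inside `outer`
println(issubrule(outer, inner))  # false: `outer` is not contained in `inner`
```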
Taija, the organization for Trustworthy AI in Julia, now has its own website and blog: https://www.taija.org/
Lots of interesting stuff upcoming, including blog posts from our two Google Summer of Code/Julia Season of Contributions students.
We'll use the blog to share any relevant updates, such as the recent release of a small new package for sampling from model distributions: https://www.taija.org/blog/posts/new-package-energysamplers/
Stay tuned!
The next time someone tells you some tech is "inevitable", please laugh directly in their face. Then tell them that claim has been used as an excuse for exploitation forever, that it's a red flag, and that if they were smart, they'd avoid it, well, like the plague. But we know how well that's going.
Only a few short months ago, Generative AI was sold to us as inevitable by the leadership of AI companies, those who partnered with them, and venture capitalists. As certain elements of the media promoted and amplified these claims, public discourse online buzzed with what each new beta release could be made to do with a few simple prompts. As AI became a viral sensation, every business tried to become an AI business. Some businesses added "AI" to their names to juice their stock prices, and companies talking about "AI" on their earnings calls saw similar increases. While the Generative AI hype bubble is now slowly deflating, its harmful effects will last.
AI models collapse when trained on recursively generated data.

Analysis shows that indiscriminately training generative artificial intelligence on real and generated content, usually done by scraping data from the Internet, can lead to a collapse in the ability of the models to generate diverse high-quality output.