Here is a nice paper on how to #identify #winning strategies using #explainable #MachineLearning in #TeamSports, with a special emphasis on #rugby.

A machine learning and explainability-driven methodology for identifying winning strategies in Rugby Union

https://www.sciencedirect.com/science/article/pii/S2772662225000244

Explainable AI: Thinking Like a Machine - Towards AI

Everyone knows AI is experiencing an explosion of media coverage, research, and public focus. It is also garnering massive popularity in organizations and enterprises, with every corner of every…

Towards AI

When using #machinelearning for tasks in #geosciences, you should aim for #interpretability! Why this is the case and how to go about it is the topic of a brand-new article in the open access journal "Earth's Future" by Shijie Jiang and an interdisciplinary group of colleagues. Check it out!

https://doi.org/10.1029/2024EF004540

#Explainable #AI #XAI

Happy to announce our latest #preprint short paper 📝:
"Towards #eXplainable #AI for #MobilityDataScience"
https://arxiv.org/abs/2307.08461

I'd love to hear your thoughts and experiences with #XAI in #SpatialDataScience

#GISchat

Towards eXplainable AI for Mobility Data Science

This paper presents our ongoing work towards XAI for Mobility Data Science applications, focusing on explainable models that can learn from dense trajectory data, such as GPS tracks of vehicles and vessels using temporal graph neural networks (GNNs) and counterfactuals. We review the existing GeoXAI studies, argue the need for comprehensible explanations with human-centered approaches, and outline a research path toward XAI for Mobility Data Science.

arXiv.org
@AmiW Although the message is clear, I don't think it needed such a "#provocation"; something different but "compatible" could have worked just as well. Photography, even as art, should remain (technically) "#explainable", especially at a time when we are already overwhelmed by clumsy #AI fabrications. I feel my perceptions have been abused, or at least "used", when I see something other than what I expect.

Who is controlling whom? asks an article on AI in the security sector, published yesterday in the EaIT journal: https://link.springer.com/article/10.1007/s10676-023-09686-x.

This is a second example of the freshness and timeliness I praised in the OIR links of the SEP entry. Reflections on "meaningful human control" will likely become more widespread, even though the example given was only the AI-assisted dropping of a bomb by a drone.

Things are researched precisely because we do not assume they are already known.

#tekoäly #AI #security #research #explainable

Who is controlling whom? Reframing “meaningful human control” of AI systems in security - Ethics and Information Technology

Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of “meaningful human control” of intelligent systems. In this opinion paper, we outline generic configurations of control of AI and we present an alternative to human control of AI, namely the inverse idea of having AI control humans, and we discuss the normative consequences of this alternative.

SpringerLink
“RRG has a large number of diverse human languages used to develop and improve its theory, going back to the 1980s — and a huge number of research papers supporting its model across languages and over time. ” — @[email protected] https://link.medium.com/DPctRvRiQvb #linguistics #explainable #nobias

#AI doesn't have to be a black box. There is a real need to make it #explainable, and I can only see that need growing over time. #INNOQ. Article in German.

https://www.innoq.com/de/articles/2022/12/ki-systeme-mlops-model-governance-explainable-ai/

KI-Systeme: MLOps, Model Governance und Explainable AI sichern robusten Einsatz

Compliance and trust: with the right tools and processes, AI systems can be effectively controlled and operated in line with legal requirements.

Fair and Explainable Machine Learning

Applying Machine Learning in domains such as medicine, finance, and education remains complicated today due to the ethical concerns surrounding the use of algorithms as automatic decision-making tools. Two of the main causes at the root of this mistrust are bias and low explainability. In this article,...

Open Data Science - Your News Source for AI, Machine Learning & more