🚀 We've released a new version of DIANNA, our open-source #ExplainableAI (#XAI) tool designed to help researchers get insights into predictions of #DeepNeuralNetworks.

What's new:
👉improved dashboard
👉extensive documentation
👉new tutorials

MORE: https://www.esciencecenter.nl/news/new-release-of-escience-centers-explainable-ai-tool-dianna/
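
For readers new to XAI: DIANNA supports perturbation-based explainers in the RISE family. As a rough illustration of the masking idea behind that family (a toy sketch under invented assumptions, not DIANNA's actual API; the "model" and all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy "network": the prediction score depends only on pixels 2-4.
    return x[2] + x[3] + x[4]

def rise_saliency(x, n_masks=2000, p_keep=0.5):
    """RISE-style saliency: average random binary masks, each weighted
    by the model's score on the masked input."""
    masks = (rng.random((n_masks, x.size)) < p_keep).astype(float)
    scores = np.array([model(x * m) for m in masks])
    # Mean score observed when each pixel was kept visible.
    return (scores[:, None] * masks).sum(axis=0) / masks.sum(axis=0)

x = np.ones(8)
sal = rise_saliency(x)
# The pixels the toy model actually uses should rank highest.
print(np.argsort(sal)[-3:])
```

The appeal of this family of methods is that they need only black-box access to the model: no gradients, no architecture details.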

Does anyone know the URL for the "observatory" website (I think that's what they called it) where one of the AI/DNN labs analysed various machine vision models and built a map of all of the nodes?

You could click on each node and see the images (and sometimes text) that triggered it, as well as images generated by exciting that node while clamping the others (like Deep Dream).

I can't remember who it was and can't find it.

#AI #DeepNeuralNetworks #NeuralNets #YOLO #deepdream
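
The "excite a node" visualizations described above are typically produced by activation maximization: gradient ascent on the input to drive one unit's activation up. A minimal sketch with a toy random linear layer (all names and numbers are illustrative, not from any particular lab's tool):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 16))   # toy layer: 4 units over a 16-pixel "image"

def excite(unit, steps=200, lr=0.1):
    """Gradient ascent on the input to maximize one unit's activation,
    renormalizing so the 'image' stays on the unit sphere."""
    x = rng.standard_normal(16)
    x /= np.linalg.norm(x)
    w = W[unit]
    for _ in range(steps):
        x += lr * w             # gradient of the activation w·x is w
        x /= np.linalg.norm(x)  # project back to the unit sphere
    return x

x = excite(2)
cos = x @ W[2] / np.linalg.norm(W[2])
print(round(cos, 3))  # close to 1: the input now "looks like" what unit 2 detects
```

In a real deep network the gradient is taken through many nonlinear layers, which is what produces the hallucinatory Deep-Dream-style imagery.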

Last in the session was Park et al.'s "Adversarial Perturbation-Based Fingerprinting to Identify Proprietary Dataset Use in #DeepNeuralNetworks", identifying stolen datasets even with different model architectures. (https://www.acsac.org/2023/program/final/s321.html) 4/4
#DNN #AI

With the success of #DeepNeuralNetworks in building #AI systems, one might wonder if #Bayesian models are no longer significant. New paper by Thomas Griffiths and colleagues argues the opposite: these approaches complement each other, creating new opportunities to use #Bayes to understand intelligent machines 🤖

📔 "Bayes in the age of intelligent machines", Griffiths et al. (2023)
🌍 https://arxiv.org/abs/2311.10206

#DNN #NeuralNetworks

The success of methods based on artificial neural networks in creating intelligent machines seems like it might pose a challenge to explanations of human cognition in terms of Bayesian inference. We argue that this is not the case, and that in fact these systems offer new opportunities for Bayesian modeling. Specifically, we argue that Bayesian models of cognition and artificial neural networks lie at different levels of analysis and are complementary modeling approaches, together offering a way to understand human cognition that spans these levels. We also argue that the same perspective can be applied to intelligent machines, where a Bayesian approach may be uniquely valuable in understanding the behavior of large, opaque artificial neural networks that are trained on proprietary data.
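
The abstract's closing point, that Bayesian inference can help characterize opaque networks from their observed behavior, can be illustrated with a minimal beta-binomial sketch (the counts below are invented and this is not an example from the paper):

```python
def posterior_mean_accuracy(correct, total, a=1, b=1):
    """Beta(a, b) prior over a black-box model's accuracy, updated with
    binomial observations of correct/incorrect predictions."""
    return (a + correct) / (a + b + total)

# Treat an opaque network as a black box: watch 100 predictions, 90 correct.
# Posterior is Beta(91, 11); its mean is our belief about the true accuracy.
print(posterior_mean_accuracy(90, 100))
```

The same machinery scales up: put priors over hypotheses about what an opaque model has learned, and update them from its behavior alone.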

📣 Check out our new paper "DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications", oral at @aclmeeting by lead author Adam Ivankay this Wednesday!

We demonstrate adversarial attacks on #explainability methods for #DeepNeuralNetworks in technical text domains, propose a way to quantify the problem, and present initial solutions.

📊 Presentation: https://virtual2023.aclweb.org/paper_P1265.html
📄 Paper: https://arxiv.org/abs/2307.02094
💻 Code: https://github.com/ibm/domain-adaptive-attribution-robustness

#ACL2023 #ACL2023NLP #NLP #MachineLearning #AI
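
One simple way to quantify attribution instability of the kind studied here is top-k overlap between attributions before and after a small input edit. This is an illustrative sketch with a toy linear scorer, not DARE's actual metric; all numbers are invented:

```python
import numpy as np

def input_x_gradient(w, x):
    """Attribution for a linear scorer f(x) = w·x: gradient times input."""
    return w * x

def topk_overlap(a, b, k=3):
    """Fraction of shared features among the top-k attributions,
    one way to quantify attribution (in)stability."""
    ta = set(np.argsort(np.abs(a))[-k:])
    tb = set(np.argsort(np.abs(b))[-k:])
    return len(ta & tb) / k

w = np.array([3.0, -2.0, 0.5, 1.0, 0.1])
x = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
x_adv = np.array([0.2, 1.0, 5.6, 1.0, 1.0])   # small edit, similar score

print(w @ x, w @ x_adv)  # predictions barely change...
print(topk_overlap(input_x_gradient(w, x), input_x_gradient(w, x_adv)))
```

The point of the toy: the prediction moves by 0.1 while the explanation's top-3 features change, which is exactly the failure mode an attribution-robustness attack exploits.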

A philosopher and a neuroscientist made a bet about the nature of consciousness

Twenty-five years ago, Christof Koch and David Chalmers made a bet on whether consciousness can be explained scientifically. Now the winner has been decided.

DER STANDARD

I co-developed several new artificial neural network architectures with ChatGPT's help today. Muahahahaha! Yes, novel concepts turned into actual and actionable programming code. I realized that I'm going to have the first Self-Aware Neural Network up and running before the end of 2023. #neuralnetworks #ai #chatgpt #gpt4 #openAI #deepneuralnetworks #selfawarenetworks #selfawareness

An architecture that combines deep neural networks and vector-symbolic models

Researchers at IBM Research Zürich and ETH Zürich have recently created a new architecture that combines two of the most renowned artificial intelligence approaches, namely deep neural networks and vector-symbolic models. Their architecture, presented in Nature Machine Intelligence, could overcome the limitations of both these approaches, solving progressive matrices and other reasoning tasks more effectively.

Tech Xplore
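
For readers unfamiliar with the vector-symbolic side: in a VSA, symbols are random hypervectors combined by binding and bundling. A minimal sketch of the MAP-style flavor with element-wise binding (this is generic VSA, not the specific architecture from the Nature Machine Intelligence paper):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024

def rand_vec():
    # Random bipolar hypervector: the atomic "symbol" of a MAP-style VSA.
    return rng.choice([-1.0, 1.0], size=D)

def bind(a, b):
    # Element-wise multiply binds role and filler; it is its own inverse.
    return a * b

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

color, shape = rand_vec(), rand_vec()
red, circle = rand_vec(), rand_vec()

# Bundle two role-filler bindings into a single composite vector.
scene = bind(color, red) + bind(shape, circle)

# Unbinding with a role vector recovers a noisy copy of its filler.
print(cos(bind(scene, color), red))      # clearly above chance (~0.7)
print(cos(bind(scene, color), circle))   # near zero
```

Because structure lives in fixed-width vectors, such representations can sit directly on top of a neural network's embeddings, which is what makes hybrids of the two approaches attractive for reasoning tasks.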

Why #DeepNeuralNetworks need #Logic:

Nick Shea (#UCL/#Oxford) suggests:

(1) Generating novel stuff (e.g., #Dalle's art, #GPT's writing) is cool, but slow and inconsistent.

(2) Just a handful of logical inferences can be used *across* loads of situations (e.g., #modusPonens works the same way every time).

So (3) by #learning Logic, #DNNs would be able to recycle a few logical moves on a MASSIVE number of problems (rather than generate a novel solution from scratch for each one).

#CompSci #AI
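
Point (2) can be made concrete: a single inference rule, implemented once, applies across unrelated domains. A toy forward-chaining sketch (all facts and rules below are invented for illustration):

```python
def modus_ponens(facts, rules):
    """Apply one inference rule (if P then Q; P; therefore Q)
    repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in facts and q not in facts:
                facts.add(q)
                changed = True
    return facts

# The same rule of inference, recycled across unrelated domains:
rules = [("rain", "wet_ground"), ("wet_ground", "slippery"),
         ("prime(7)", "odd(7)")]
print(modus_ponens({"rain", "prime(7)"}, rules))
```

The one `if p in facts` check does all the work, regardless of what the propositions are about, which is exactly the reuse Shea's argument is pointing at.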

Wow. In 24 hours, we have gone from zero to 4.4K followers; that's crazy. Thank you for the warm welcome and excellent tips. I gave up on replying to all of you after someone pointed out that I was spamming thousands of people – sorry! Also, please do not read too much into it if we do not respond or take a long time responding; we are a busy bunch and may simply sometimes miss your post or messages. Mastodon allows long posts, so I am taking advantage of that. Here are a few things that you may – or may not – want to know.

—Who are we?—

Research in the Icelandic Vision Lab (https://visionlab.is) focuses on all things visual, with a major emphasis on higher-level or “cognitive” aspects of visual perception. It is co-run by five Principal Investigators: Árni Gunnar Ásgeirsson, Sabrina Hansmann-Roth, Árni Kristjánsson, Inga María Ólafsdóttir, and Heida Maria Sigurdardottir. Here on Mastodon, you will most likely be interacting with me – Heida – but other PIs and potentially other lab members (https://visionlab.is/people) may occasionally also post here as this is a joint account. If our posts are stupid and/or annoying, I will however almost surely be responsible!

—What do we do?—

Current and/or past research at IVL has looked at several visual processes, including #VisualAttention, #EyeMovements, #ObjectPerception, #FacePerception, #VisualMemory, #VisualStatistics, and the role of #Experience/#Learning effects in #VisualPerception. Some of our work concerns the basic properties of the workings of the typical adult #VisualSystem. We have also studied the perceptual capabilities of several unique populations, including children, synesthetes, professional athletes, people with anxiety disorders, blind people, and dyslexic readers. We focus on #BehavioralMethods but also make use of other techniques, including #Electrophysiology, #EyeTracking, and #DeepNeuralNetworks.

—Why are we here?—

We are mostly here to interact with other researchers in our field, including graduate students, postdoctoral researchers, and principal investigators. This means that our activity on Mastodon may sometimes be quite niche: boosting posts on research papers, conferences, or work opportunities in specialized fields, or partaking in discussions about debates in our field, data analysis, or the scientific review process. Science communication and outreach are hugely important, but this account is not about that as such. So we take no offence if that means you will unfollow us; that is perfectly alright :)

—But will there still sometimes be stupid memes as promised?—

Yes. They may or may not be funny, but they will be stupid.

#VisionScience #CognitivePsychology #CognitiveScience #CognitiveNeuroscience #StupidMemes
