Who Invented Deep Residual Learning?

Do you have a recommendation for a #CloudComputing provider (with #GPU, suitable for training #deepNeuralNetworks)? We are looking for options that maximize:
- #GreenIT, low CO2 footprint, #sustainability
- #DataPrivacy

#followerpower

🚀 We've released a new version of DIANNA, our open-source #ExplainableAI (#XAI) tool designed to help researchers get insights into predictions of #DeepNeuralNetworks.

What's new:
👉improved dashboard
👉extensive documentation
👉added tutorials

MORE: https://www.esciencecenter.nl/news/new-release-of-escience-centers-explainable-ai-tool-dianna/


Does anyone know the URL for the "observatory" website (I think that's what they called it) where one of the AI/DNN labs analysed various machine-vision models and built a map of all of the nodes?

You could click on each node and see the images (and sometimes text) that triggered it, as well as images generated by exciting that node while clamping the others (as in Deep Dream).

I can't remember who it was and can't find it.

#AI #DeepNeuralNetworks #NeuralNets #YOLO #deepdream

Last in the session was Park et al.'s "Adversarial Perturbation-Based Fingerprinting to Identify Proprietary Dataset Use in #DeepNeuralNetworks", identifying stolen datasets even with different model architectures. (https://www.acsac.org/2023/program/final/s321.html) 4/4
#DNN #AI
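The core intuition behind perturbation-based dataset fingerprinting can be shown with a toy sketch. This is my own illustration of the general transfer idea, not Park et al.'s actual method; the models, dataset, and epsilon value are all invented for the example. Perturbations crafted against one model trained on a dataset tend to also fool a second model trained on the same data, but not a model trained on unrelated data:

```python
# Toy sketch of adversarial-perturbation fingerprinting (illustrative only,
# NOT the method from Park et al.): craft perturbations with a model trained
# on the "proprietary" dataset, then test whether they transfer to a suspect
# model. Strong transfer hints at shared training data.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train_logreg(X, y, lr=0.5, steps=300):
    """Plain gradient-descent logistic regression; returns a weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def perturb(X, y, w, eps=2.0):
    """Gradient-direction perturbation pushing each sample toward misclassification."""
    p = sigmoid(X @ w)
    g = (p - y)[:, None] * w[None, :]                  # d(logistic loss)/dX
    g /= np.linalg.norm(g, axis=1, keepdims=True) + 1e-12
    return X + eps * g

def error_rate(w, X, y):
    return float(np.mean((X @ w > 0) != (y > 0.5)))

# "Proprietary" dataset: label depends on dims 0+1. Unrelated set: dim 2.
X = rng.normal(size=(200, 5));  y = (X[:, 0] + X[:, 1] > 0).astype(float)
Xo = rng.normal(size=(200, 5)); yo = (Xo[:, 2] > 0).astype(float)

w_owner  = train_logreg(X[:100], y[:100])   # model used to craft fingerprints
w_stolen = train_logreg(X[100:], y[100:])   # suspect trained on the same dataset
w_clean  = train_logreg(Xo, yo)             # suspect trained on unrelated data

X_adv = perturb(X[:100], y[:100], w_owner)
print(error_rate(w_stolen, X_adv, y[:100]),   # high: perturbations transfer
      error_rate(w_clean,  X_adv, y[:100]))   # near chance: no shared data
```

The paper's setting is far more involved (different architectures, real datasets), but the transfer signal sketched here is the same kind of evidence it exploits.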

With the success of #DeepNeuralNetworks in building #AI systems, one might wonder if #Bayesian models are no longer significant. New paper by Thomas Griffiths and colleagues argues the opposite: these approaches complement each other, creating new opportunities to use #Bayes to understand intelligent machines 🤖

📔 "Bayes in the age of intelligent machines", Griffiths et al. (2023)
🌍 https://arxiv.org/abs/2311.10206

#DNN #NeuralNetworks

Bayes in the age of intelligent machines

The success of methods based on artificial neural networks in creating intelligent machines seems like it might pose a challenge to explanations of human cognition in terms of Bayesian inference. We argue that this is not the case, and that in fact these systems offer new opportunities for Bayesian modeling. Specifically, we argue that Bayesian models of cognition and artificial neural networks lie at different levels of analysis and are complementary modeling approaches, together offering a way to understand human cognition that spans these levels. We also argue that the same perspective can be applied to intelligent machines, where a Bayesian approach may be uniquely valuable in understanding the behavior of large, opaque artificial neural networks that are trained on proprietary data.

A philosopher and a neuroscientist bet on the nature of consciousness

Twenty-five years ago, Christof Koch and David Chalmers made a bet on whether consciousness can be explained scientifically. Now the winner has been announced.

DER STANDARD
I co-developed several new artificial neural network architectures with ChatGPT's help today. Muahahahaha! Yes, novel concepts turned into actual, actionable programming code. I realized that I'm going to have the first Self-Aware Neural Network up and running before the end of 2023. #neuralnetworks #ai #chatgpt #gpt4 #openAI #deepneuralnetworks #selfawarenetworks #selfawareness
An architecture that combines deep neural networks and vector-symbolic models

Researchers at IBM Research Zürich and ETH Zürich have recently created a new architecture that combines two of the most renowned artificial intelligence approaches, namely deep neural networks and vector-symbolic models. Their architecture, presented in Nature Machine Intelligence, could overcome the limitations of both these approaches, solving progressive matrices and other reasoning tasks more effectively.

Tech Xplore

Why #DeepNeuralNetworks need #Logic:

Nick Shea (#UCL/#Oxford) suggests

(1) Generating novel stuff (e.g., #Dalle's art, #GPT's writing) is cool, but slow and inconsistent.

(2) Just a handful of logical inferences can be used *across* loads of situations (e.g., #modusPonens works the same way every time).

So (3) by #learning Logic, #DNNs would be able to recycle a few logical moves on a MASSIVE number of problems (rather than generate a novel solution from scratch for each one).

#CompSci #AI
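Point (3) above — a handful of logical moves reused across a massive number of problems — can be sketched in code. This is a minimal illustration (the rules and facts are invented examples, not Shea's), showing one generic modus ponens routine applied unchanged to unrelated domains:

```python
# One inference rule, many domains: forward chaining with modus ponens.
# The rule engine never changes; only the (invented) domain facts do.

def modus_ponens(rules, facts):
    """Apply 'if P then Q' rules to known facts until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in facts and q not in facts:
                facts.add(q)
                changed = True
    return facts

# The SAME procedure works in a weather domain...
weather = modus_ponens([("rain", "wet ground"), ("wet ground", "slippery")],
                       {"rain"})
# ...and in a number-theory domain, with zero per-domain machinery.
parity = modus_ponens([("n is even", "n^2 is even")], {"n is even"})
print(weather)  # contains 'rain', 'wet ground', 'slippery'
print(parity)   # contains 'n^2 is even'
```

This is the contrast with generation: a generative model produces each answer afresh, while the inference rule amortizes one fixed mechanism over every problem that fits its shape.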