Do you have a recommendation for a #CloudComputing provider (with #GPU, suitable for training #deepNeuralNetworks)? We are looking for options with a maximum of:
- #GreenIT, low CO2 footprint, #sustainability
- #DataPrivacy
🚀 We've released a new version of DIANNA, our open-source #ExplainableAI (#XAI) tool designed to help researchers get insights into predictions of #DeepNeuralNetworks.
What's new:
👉improved dashboard
👉extensive documentation
👉added tutorials
MORE: https://www.esciencecenter.nl/news/new-release-of-escience-centers-explainable-ai-tool-dianna/
Does anyone know the URL for the "observatory" website (I think that's what they called it) where one of the AI/DNN labs analysed various machine vision models and built a map of all the nodes?
You could click on each node and see the images (and sometimes text) that triggered it, as well as images generated by exciting that node while clamping the others (like DeepDream).
I can't remember who made it and can't find it.
With the success of #DeepNeuralNetworks in building #AI systems, one might wonder if #Bayesian models are no longer significant. New paper by Thomas Griffiths and colleagues argues the opposite: these approaches complement each other, creating new opportunities to use #Bayes to understand intelligent machines 🤖
📔 "Bayes in the age of intelligent machines", Griffiths et al. (2023)
🌍 https://arxiv.org/abs/2311.10206
The success of methods based on artificial neural networks in creating intelligent machines seems like it might pose a challenge to explanations of human cognition in terms of Bayesian inference. We argue that this is not the case, and that in fact these systems offer new opportunities for Bayesian modeling. Specifically, we argue that Bayesian models of cognition and artificial neural networks lie at different levels of analysis and are complementary modeling approaches, together offering a way to understand human cognition that spans these levels. We also argue that the same perspective can be applied to intelligent machines, where a Bayesian approach may be uniquely valuable in understanding the behavior of large, opaque artificial neural networks that are trained on proprietary data.
We still do not understand consciousness.
Referenced link: https://techxplore.com/news/2023-03-architecture-combines-deep-neural-networks.html
RT by @physorg_com: An #architecture that combines #deepneuralnetworks and vector-symbolic models @NatMachIntell https://www.nature.com/articles/s42256-023-00630-8 https://techxplore.com/news/2023-03-architecture-combines-deep-neural-networks.html
Researchers at IBM Research Zürich and ETH Zürich have recently created a new architecture that combines two of the most renowned artificial intelligence approaches, namely deep neural networks and vector-symbolic models. Their architecture, presented in Nature Machine Intelligence, could overcome the limitations of both these approaches, solving progressive matrices and other reasoning tasks more effectively.
Why #DeepNeuralNetworks need #Logic:
Nick Shea (#UCL/#Oxford) suggests
(1) Generating novel stuff (e.g., #Dalle's art, #GPT's writing) is cool, but slow and inconsistent.
(2) Just a handful of logical inferences can be used *across* loads of situations (e.g., #modusPonens works the same way every time).
So (3) by #learning Logic, #DNNs would be able to recycle a few logical moves on a MASSIVE number of problems (rather than generate a novel solution from scratch for each one).
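The reuse argument in (2) and (3) can be made concrete with a minimal Python sketch (my own illustration, not from Shea's talk): one modus ponens function, written once, applied unchanged to knowledge bases from completely different domains. The rule sets and facts below are hypothetical examples.

```python
def modus_ponens(rules, fact):
    """One reusable inference step: given implications mapping
    antecedent -> consequent and a known fact, derive the consequent
    (or None if no rule fires)."""
    return rules.get(fact)

# The *same* inference rule works across unrelated situations:
weather_rules = {"it is raining": "the ground is wet"}
number_rules = {"n is divisible by 4": "n is even"}

print(modus_ponens(weather_rules, "it is raining"))      # the ground is wet
print(modus_ponens(number_rules, "n is divisible by 4")) # n is even
```

The point of the sketch: a generative model would have to produce each conclusion from scratch, while a learned logical rule is a single move recycled across a massive number of problems.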