#Yoshua #Bengio, one of the creators of #AI: "The danger of #Artificial #Intelligence is that it imitates us, and if it knows that we don't want to die, it may also want to stay on"
“Being #Jewish is a #religion of risk, but no one will stop us from going to #synagogue or from celebrating and commemorating our holidays as #Jews,” #EstrellaBengio, president of the Jewish Community of #Madrid, told #JNS on Wednesday.
#Bengio was attending the #YaelFoundation’s fourth annual education summit in #Vienna, which this year focused on resilience, innovation, leadership, Jewish identity and the challenges of modern education.
Lectures at the summit included, on Tuesday: “Positive Leadership: The Science of Happiness” by #TalBenShahar, examining the intersection of leadership, psychology and well-being, and on Wednesday, “Harnessing the Hacker Mindset” by #KerenElazari, linking cybersecurity, innovation and leadership, and “Igniting Excellence in the Next Generation,” outlining the vision of Yael Foundation CEO #ChayaYosovich.
Bengio described #Spain’s Jewish community—particularly in Madrid—as young, growing and vibrant.
https://www.jns.org/no-one-will-stop-us-madrid-jewish-leader-tells-jns/
Turing Award winner Yoshua Bengio is joining the Safeguarded AI project, which aims to use AI to control other AIs. For now, however, these gatekeepers are purely hypothetical.
#KI #Bengio #KünstlicheIntelligenz
Read the full article here: https://t3n.de/news/nicht-in-den-abgrund-hineinfahren-so-will-ki-pionier-yoshua-bengio-ki-katastrophen-verhindern-1639755/
AI pioneer Yoshua Bengio joins UK's Safeguarded AI programme
Seeks to develop quantitative safety guarantees for AI
https://www.computing.co.uk/news/4344108/ai-pioneer-yoshua-bengio-joins-uks-safeguarded-programme
Is plagiarism inherent in AI's (lack of) ethics?
The case of Jürgen Schmidhuber's pioneering work on neural networks, which he alleges was plagiarized by Bengio, Hinton, and LeCun, as presented by Jürgen Schmidhuber:
https://people.idsia.ch/~juergen/ai-priority-disputes.html
Image credit: Jürgen Schmidhuber, with color touches by me, outlining clues contained in this highly informative (but probably intentionally left flat) sketch by the author.
A ‘Godfather of AI’ Calls for an Organization to Defend Humanity
Yoshua #Bengio’s pioneering research helped bring about #ChatGPT and the current AI boom. Now he’s worried #AI could harm #civilization, and says the future needs a humanity defense organization.
https://www.wired.com/story/ai-godfather-yoshua-bengio-humanity-defense/
Addendum 3
[Yoshua Bengio, 2023-06-24] FAQ on Catastrophic AI Risks
https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks
See also:
[Yoshua Bengio, 2023-05-22] How Rogue AIs may Arise
https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise
https://mastodon.social/@persagen/110601069686173752
#YoshuaBengio #AIrisk #algorithms #risk #regulation #AI #AGI #superintelligence #ExistentialRisks #Bengio
I have been hearing many arguments from different people regarding catastrophic AI risks. I wanted to clarify these arguments, first for myself, because I would really like to be convinced that we need not worry. Reflecting on these arguments, some of the main points in favor of taking this risk seriously can be summarized as follows:
(1) many experts agree that superhuman capabilities could arise in just a few years (but it could also be decades);
(2) digital technologies have advantages over biological machines;
(3) we should take even a small probability of catastrophic outcomes of superdangerous AI seriously, because of the possibly large magnitude of the impact;
(4) more powerful AI systems can be catastrophically dangerous even if they do not surpass humans on every front and even if they have to go through humans to produce non-virtual actions, so long as they can manipulate or pay humans for tasks;
(5) catastrophic AI outcomes are part of a spectrum of harms and risks that should be mitigated with appropriate investments and oversight in order to protect human rights and humanity, including possibly using safe AI systems to help protect us.