#Yoshua #Bengio, one of the creators of #AI: "The danger of #Artificial #Intelligence is that it imitates us, and if it knows that we don't want to die, it may also want to stay on"
Turing Award winner Yoshua Bengio is joining the Safeguarded AI project, which relies on AI to control other AIs. For now, however, these gatekeepers are still purely hypothetical.
#KI #Bengio #KünstlicheIntelligenz
Read the full article here: https://t3n.de/news/nicht-in-den-abgrund-hineinfahren-so-will-ki-pionier-yoshua-bengio-ki-katastrophen-verhindern-1639755/
AI pioneer Yoshua Bengio joins UK's Safeguarded AI programme
Seeks to develop quantitative safety guarantees for AI
https://www.computing.co.uk/news/4344108/ai-pioneer-yoshua-bengio-joins-uks-safeguarded-programme
A ‘Godfather of AI’ Calls for an Organization to Defend Humanity
Yoshua #Bengio’s pioneering research helped bring about #ChatGPT and the current AI boom. Now he’s worried #AI could harm #civilization, and says the future needs a humanity defense organization.
https://www.wired.com/story/ai-godfather-yoshua-bengio-humanity-defense/
Addendum 3
[Yoshua Bengio, 2023-06-24] FAQ on Catastrophic AI Risks
https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks
See also:
[Yoshua Bengio, 2023-05-22] How Rogue AIs may Arise
https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise
https://mastodon.social/@persagen/110601069686173752
#YoshuaBengio #AIrisk #algorithms #risk #regulation #AI #AGI #superintelligence #ExistentialRisks #Bengio
I have been hearing many arguments from different people regarding catastrophic AI risks. I wanted to clarify these arguments, first for myself, because I would really like to be convinced that we need not worry. Reflecting on these arguments, some of the main points in favor of taking this risk seriously can be summarized as follows:
(1) many experts agree that superhuman capabilities could arise in just a few years (but it could also be decades)
(2) digital technologies have advantages over biological machines
(3) we should take even a small probability of catastrophic outcomes of superdangerous AI seriously, because of the possibly large magnitude of the impact
(4) more powerful AI systems can be catastrophically dangerous even if they do not surpass humans on every front and even if they have to go through humans to produce non-virtual actions, so long as they can manipulate or pay humans for tasks
(5) catastrophic AI outcomes are part of a spectrum of harms and risks that should be mitigated with appropriate investments and oversight in order to protect human rights and humanity, including possibly using safe AI systems to help protect us.
Addendum 2
Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, ...
https://www.lesswrong.com/posts/WxW6Gc6f2z3mzmqKs/debate-on-instrumental-convergence-between-lecun-russell
Discussion: https://news.ycombinator.com/item?id=31790269
[2023-05-02] Geoffrey Hinton tells us why he’s now scared of the tech he helped build
https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai
“I have suddenly switched my views on whether these things are going to be more intelligent than us.”
#algorithms #risk #regulation #AI #AGI #superintelligence #ExistentialRisks #Bengio #Hinton
cont'd
* extremely well-written; the article is approachable for non-technical readers (if you get lost, focus on the generalizations about relevance and risks, which are also well-written)
* ~30 min to read carefully
* the issues discussed are topical, and the emerging technologies covered will *profoundly* shape all our lives, now and in the future.
Yoshua Bengio: Wikipedia: https://en.wikipedia.org/wiki/Yoshua_Bengio
#Bengio #algorithms #risk #regulation #AI #LLM #GPT #ChatGPT #ExistentialRisks