1/2 degree rise in global warming will triple area of Earth too hot for humans
https://www.sciencedaily.com/releases/2025/02/250204132128.htm

* extreme heat threatens human life
* more than 260,000 heat-related fatalities since 2000
* anthropogenic warming will lead to more frequent crossings of the uncompensable heat threshold
* the land area uncompensable for young adults triples if warming reaches 2°C above preindustrial levels

Mortality impacts of the most extreme heat events
https://www.nature.com/articles/s43017-024-00635-w

https://mastodon.social/@persagen/113921277760534975

#ClimateChange #GlobalWarming #ExistentialRisks

Half a degree further rise in global warming will triple area of Earth too hot for humans

A new assessment warns that an area the size of the USA will become too hot during extreme heat events for even healthy young humans to maintain a safe body temperature if warming hits 2 degrees Celsius above preindustrial levels. For those aged over 60, the same 2-degree rise would see more than a third of the planet's land mass cross this critical 'overheating' threshold.

ScienceDaily
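The "too hot for humans" threshold in this line of research is usually expressed as a wet-bulb temperature limit, which recent laboratory work places near 31 °C for healthy young adults (well below the 35 °C long assumed in theory). As a rough illustration only (not taken from the article), the Stull (2011) empirical fit estimates wet-bulb temperature from air temperature and relative humidity:

```python
import math

def wet_bulb_stull(t_c: float, rh: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature t_c
    (deg C) and relative humidity rh (%), via the Stull (2011) empirical
    fit. Valid roughly for RH 5-99% and T between -20 and 50 deg C."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t_c + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# A humid 35 deg C day already sits near the ~31 deg C wet-bulb limit
# reported for young adults; hotter or more humid conditions become
# uncompensable (the body can no longer shed metabolic heat).
print(round(wet_bulb_stull(35.0, 75.0), 1))
```

This is a back-of-envelope sketch of the quantity being discussed, not the methodology of the study itself.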

What the Air Force doesn’t want us to notice on election night
https://www.counterpunch.org/2024/11/05/what-the-air-force-doesnt-want-us-to-notice-on-election-night
reposted: https://www.nationofchange.org/2024/11/05/what-the-air-force-doesnt-want-us-to-notice-on-election-night

* while everyone's attention is on the U.S. presidential election, the U.S. Air Force will test-launch an intercontinental ballistic missile with a dummy hydrogen bomb
* it does this several times a year
* the launches are always at night, while Americans are sleeping

#capitalism #militarism #MIC #MilitaryIndustrialComplex #NuclearWeapons #ExistentialRisks #ICBM #USAF

U.S. Keeps Pouring Money Into Nuclear Weapons
The Pentagon is carrying out a $2 trillion, multiyear plan to build new nuclear-armed missiles, bombers, and submarines.

* increases the risk of a disastrous nuclear exchange
* great business for major weapons contractors

Doomsday Clock: https://en.wikipedia.org/wiki/Doomsday_Clock

#MIC #MilitaryIndustrialComplex #IndustrialComplexes #billionaires #corporations #militarism #GlobalSecurity #NuclearWeapons #capitalism #ExistentialRisks #DoomsdayClock

Doomsday Clock - Wikipedia

"Authorities should also “urgently” consider outlawing the publication of the “weights,” or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says..."

https://time.com/6898967/ai-extinction-national-security-risks-report/

#ai #artificialintelligence #existentialrisks

Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says

The U.S. government must move 'decisively' to avert an 'extinction-level threat' to humanity from AI, says a government-commissioned report.

Time
From Daily Weather Forecasts to the Doomsday Clock: With chances increasing for Nuclear War, Sentient AI Drone Wars, and Global Societal Entropy: Can we predict Humanity's survival to the year 3024? #ExistentialRisks #GlobalChallenges #FutureCrossroads https://t.ly/ctCR4

SVGN.io Silicon Valley Global News

As I read the Weinersmiths' critique of space settlement, I kept thinking of the pointless #AI debates I keep getting dragged into. Arguments for space settlement that turn on #ExistentialRisks (like humanity being wiped out by comets, sunspots, nuclear armageddon or climate collapse) sound an awful lot like the arguments about #AISafety - the "risk" that the plausible sentence generator is on the verge of becoming conscious and turning us all into paperclips.

23/

Addendum 4

[2023-09-12] Tech giants at White House to discuss AI risks
https://www.npr.org/2023/09/12/1198885516/these-tech-giants-are-at-the-white-house-today-to-talk-about-the-risks-of-ai

* White House secured pledges from 8 big tech companies to do more testing, reporting, and research on risks posed by artificial intelligence
* Adobe, Cohere, IBM, NVIDIA, Palantir, Salesforce, Scale AI, Stability AI

[2023-06-01] Experts issue a dire warning about AI and encourage limits be imposed
https://www.npr.org/2023/05/31/1179030677/experts-issue-a-dire-warning-about-ai-and-encourage-limits-be-imposed

#algorithms #risk #regulation #AI #AIrisks #LLM #GPT #ChatGPT #ExistentialRisks #BigTech

FAQ on Catastrophic AI Risks - Yoshua Bengio

I have been hearing many arguments from different people regarding catastrophic AI risks. I wanted to clarify these arguments, first for myself, because I would really like to be convinced that we need not worry. Reflecting on these arguments, some of the main points in favor of taking this risk seriously can be summarized as follows:

(1) many experts agree that superhuman capabilities could arise in just a few years (but it could also be decades)
(2) digital technologies have advantages over biological machines
(3) we should take even a small probability of catastrophic outcomes of superdangerous AI seriously, because of the possibly large magnitude of the impact
(4) more powerful AI systems can be catastrophically dangerous even if they do not surpass humans on every front, and even if they have to go through humans to produce non-virtual actions, so long as they can manipulate or pay humans for tasks
(5) catastrophic AI outcomes are part of a spectrum of harms and risks that should be mitigated with appropriate investments and oversight in order to protect human rights and humanity, including possibly using safe AI systems to help protect us.

Yoshua Bengio

Addendum 2

Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, ...
https://www.lesswrong.com/posts/WxW6Gc6f2z3mzmqKs/debate-on-instrumental-convergence-between-lecun-russell
Discussion: https://news.ycombinator.com/item?id=31790269

[2023-05-02] Geoffrey Hinton tells us why he’s now scared of the tech he helped build
https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai

“I have suddenly switched my views on whether these things are going to be more intelligent than us.”

#algorithms #risk #regulation #AI #AGI #superintelligence #ExistentialRisks #Bengio #Hinton

Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More — LessWrong

An actual debate about instrumental convergence, in a public space! Major respect to all involved, especially Yoshua Bengio for great facilitation. …

Addendum 1

Older but great introduction/overview intended for general readership:

[1/2] The AI Revolution: The Road to Superintelligence
Tim Urban, 2015-01-22
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

[2/2] The AI Revolution: Our Immortality or Extinction
Tim Urban, 2015-01-27
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

#WaitButWhy #TimUrban #algorithms #risk #regulation #AI #AGI #superintelligence #ExistentialRisks

The AI Revolution: The Road to Superintelligence

Part 1 of 2: "The Road to Superintelligence". Artificial Intelligence — the topic everyone in the world should be talking about.

Wait But Why