Before Leviathan Wakes

The author discusses AI regulation amid the tension between classical liberalism and conservatism. He opposes most AI regulation, but acknowledges a role for the state in managing AI's potential for large-scale catastrophic risk. In particular, he sees a need for regulation addressing the possibility of malicious actors using AI to cause lethal harm, such as cyberattacks or the development of biological weapons. At the same time, he warns against the extreme scenario of the state monopolising and controlling AI, and supports establishing intermediate institutions that can strike a balance between the private sector and the state. This is presented as a sustainable way of reconciling AI safety with innovation.

https://www.hyperdimensional.co/p/before-leviathan-wakes

#airegulation #catastrophicrisk #staterole #frontierai #governance

Before Leviathan Wakes

Why I Believe What I Believe

Hyperdimensional
There is no #LossModel for the #ClimateEmergency which is not based on 100% #CatastrophicRisk. #PriceToRisk #ReInsurance. @SabinCenter

Thinking about existential risks and optimism/pessimism...

(If you don't like contemplating The End of Everything ... turn away now.)

I was revisiting an old post of mine on how Steve Pinker's Panglossianism annoys me:

https://diaspora.glasswings.com/posts/d0b93200d8e40138d780002590d8e506

Past Me wrote something Present Me is nodding vigorously to:

"A global catastrophic risk by definition has not yet occurred and therefore of necessity exists in a latent state. Worse, it shares non-existence with an infinite universe of calamities, many or most of which can not or never will occur, and any accurate Cassandra has the burden of arguing why the risk she warns of is not among the unrealisable set."

That is, a moronically tedious response to raising questions of existential or major threats (e.g., collapse of civilisation) is that they've been often predicted but haven't occurred yet. (At least not for Civilisation Present Main Branch.)

This ... seems to me strong shades of the #AnthropicPrinciple: if we were living in a timeline in which such an existential threat had occurred ... we wouldn't be having the conversation right now.

Moreover, presuming You Only Die Once (Ian Fleming / James Bond notwithstanding), then of the entire universe of existential threats, only one can in fact be realised.

To read this as suggesting that all other potential risks are then irrelevant ... seems to me a Category Error of Unusual Size. Put another way: with enough potential trials (say, habitable worlds on which technological civilisations do arise) one might suspect that there are in fact numerous ways in which those meet their end. It's just that our tools for information gathering and transmission are somewhat unequal to the task of actually recording that, at least at present. And quite possibly for all time.

But in a Gedankenexperiment presuming an Actuarial Department of All Civilisations In The Universe there might very well be at least some experienced distribution of Civilisation Ending Events which could be catalogued and for which actuarial risk might be tabulated. The nature of the problem is similar to the distinction between risks ascribable to a single individual vs. an entire population.
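That Gedankenexperiment can be sketched in a few lines of code. Every number below is invented purely for illustration — the point is structural: under competing constant hazards, the cause of a civilisation's one-and-only ending is drawn in proportion to the hazard rates, so any single civilisation records exactly one end, while the imagined all-civilisations actuary recovers the full distribution from the ensemble.

```python
import random
from collections import Counter

# Hypothetical annual hazard rates for civilisation-ending events.
# All values here are made up for illustration only.
HAZARDS = {"asteroid impact": 1e-5, "engineered pandemic": 3e-5,
           "nuclear war": 2e-5, "other": 1e-5}

def ending_cause(rng):
    """Draw the single realised ending for one civilisation.

    With competing constant hazards, the cause of the first event to
    occur is distributed in proportion to the hazard rates: each
    civilisation dies once, but the ensemble reveals the distribution.
    """
    causes, rates = zip(*HAZARDS.items())
    return rng.choices(causes, weights=rates, k=1)[0]

rng = random.Random(42)
N = 10_000
ledger = Counter(ending_cause(rng) for _ in range(N))

total_hazard = sum(HAZARDS.values())
print(f"expected civilisation lifetime: {1 / total_hazard:,.0f} years")
for cause, n in ledger.most_common():
    print(f"{cause:20s} {n / N:.1%} of civilisations")
```

The "actuarial table" the imagined department would compile is just the ledger's proportions — something no individual civilisation could ever observe from its own (single, terminal) data point.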

As an illustration: your individual risk of dying in an automobile accident might be roughly comparable to that of dying in a mass-extinction asteroid impact --- the latter events are less frequent but far greater in magnitude.

(Asteroids also likely pose a far more consistent risk to individual lives over the entire history of the Earth than automobiles do --- roughly 4.5 billion years to date for the first, and about a buck-twenty-five centuries for the second.)

But even that comparison fails to capture what I see as a salient distinction between car wrecks and meteor strikes: odds are very low that everyone on Earth is involved in a fatal car collision at once, but high that they might perish in the same Large Impactor Event. Simply focusing on individual actuarial risk utterly ignores this.
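The distinction can be made concrete with some deliberately artificial numbers (neither figure below reflects real actuarial data): set the per-person annual death probability equal for both hazards, so individual actuarial risk literally cannot tell them apart, and then look at the probability that everyone dies in the same year.

```python
# Invented, equalised per-person annual death probabilities:
p_car = 1e-4        # each person dies independently
p_impact = 1e-4     # one event kills everyone simultaneously
population = 8_000_000_000

# Expected deaths per year are identical -- individual risk is blind
# to the difference between the two hazards:
expected_car = p_car * population
expected_impact = p_impact * population
print(f"expected annual deaths, cars:     {expected_car:,.0f}")
print(f"expected annual deaths, impactor: {expected_impact:,.0f}")

# But the chance that *everyone* perishes in the same year differs
# enormously, because car deaths are independent while impactor deaths
# are perfectly correlated:
p_all_die_cars = p_car ** population   # underflows to exactly 0.0
p_all_die_impact = p_impact            # the whole loss arrives at once
print(f"P(everyone dies, cars):     {p_all_die_cars}")
print(f"P(everyone dies, impactor): {p_all_die_impact}")
```

The expected-value column is identical; only the correlation structure differs, which is exactly the feature that per-individual actuarial risk throws away.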

But back to Pinker, Panglossianism, and dismissing catastrophic risk on the basis that it's not yet occurred: the dismissal is directly and intrinsically related to the nature of the threat itself, and in its own way actually validates the nature and scope of such threats.

It's also utterly irrelevant to any meaningful characterisation of statistical likelihood, as the objection is effectively a class of sampling error and self-selection bias.
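The self-selection bias is easy to demonstrate by simulation (with an invented hazard rate, chosen only to make the effect visible): however large the true per-century catastrophe probability, every observer necessarily lives in a timeline where it has never yet occurred, so the observed frequency of realised catastrophe in any observer's historical record is exactly zero.

```python
import random

# Hypothetical true per-century probability of a civilisation-ending
# catastrophe, watched over 100 centuries of recorded history.
q, centuries = 0.01, 100
rng = random.Random(7)

timelines = 100_000
survived = 0
for _ in range(timelines):
    # A timeline yields observers only if no catastrophe ever occurred.
    if all(rng.random() >= q for _ in range(centuries)):
        survived += 1

# By construction, observers exist only in surviving timelines, so the
# catastrophe count in every observer's record is 0 regardless of q.
# Only the ensemble reveals the true hazard:
print(f"surviving timelines: {survived / timelines:.1%}")  # ~ (1-q)^100
```

"It hasn't happened yet" is therefore evidence about the sampling procedure, not about q.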

Anyhow, that's what's been troubling my little head for the past day or so. And I don't think I've seen this expressed by anyone that I'm aware of (though as usual, I suspect it's not an entirely novel realisation). If this does sound familiar, cites/references are strongly encouraged.

#ExistentialThreats #CatastrophicRisk #EndOfTheWorld #CategoryError

Steven Pinker's Panglossianism has long annoyed me

Steven Pinker's Panglossianism has long annoyed me. A key to understanding why is in the nature of technical debt, complexity traps (Joseph Tainter), or progress traps (Ronald Wright), closely related to Robert K. Merton's notions of unintended consequences and manifest vs. latent functions. You can consider any technology (or intervention) as having attributes along several dimensions. Two of those are impact (positive or negative) and realisation timescale (short or long).

                    Positive           Negative
Short realisation   Obviously good     Obviously bad
Long realisation    Unobviously good   Unobviously bad

Technologies with obvious quickly-realised benefits are generally and correctly adopted, those with obvious quickly-realised harms rejected. But we'll also unwisely reject technologies whose benefits are not immediately or clearly articulable, and adopt those whose harms are long-delayed or unapparent. And the pathological case is when short-term obvious advantage is paired with long-term ...

Glass Wings diaspora* social network

A weakened Facebook is only more dangerous: Facebook delenda est

A weak Facebook, or Google or Apple, or any other data monopolist all pose a risk regardless of whether you directly participate in them or not. You are in the data graph. Smug satisfaction at “they never had me as a user” utterly fails to acknowledge this point or risk. ...

You live in the world surveillance capitalism has created, and in the world in which its failing forms will continue to influence. This means that changes to the data regime are highly likely to have further profound influences. ...

https://diaspora.glasswings.com/posts/309b8020676d013a2cf6448a5b29e257

#Facebook #Risk #CatastrophicRisk #TechnicalDebt #Monopoly #SurveillanceCapitalism #BigData #HerbertSimon #Census #Holocaust #Censorship #Manipulation #Propaganda #Coercion #FacebookDelendaEst

A weakened Facebook is only more dangerous: Facebook delenda est

A weakened Facebook is only more dangerous: Facebook delenda est Facebook is an inherent irredeemable massive catastrophic data risk. A weakened or failing Facebook only multiplies that threat. One of my frequently-repeated Google+ posts asked what Google were doing to brownshirt-proof their vast troves of personal information. I've since created similar posts for Apple and Facebook: https://toot.cat/@dredmorbius/107046010705188693 A weak Facebook, or Google or Apple, or any other data monopolist (https://archive.is/3r9mH) all pose a risk regardless of whether you directly participate in them or not. You are in the data graph. Smug satisfaction at "they never had me as a user" utterly fails to acknowledge this point or risk. Monopoly control over data fundamentally enables and results in surveillance, propaganda, censorship, disinformation, and targeted manipulation and coercion. And I'm one of those people whom Facebook never had as a user. My personal response to recruiters h...

Glass Wings diaspora* social network
Best path to net zero: Cut short-lived super-pollutants

Dramatically reducing now the amount of short-lived super-pollutants in the air—such as black carbon, methane, tropospheric ozone, and hydrofluorocarbons—could buy us enough time to deal with carbon dioxide emissions and avoid a “Hothouse Earth” later. And some of the legal and treaty mechanisms are already in place.

Bulletin of the Atomic Scientists