David Manheim

285 Followers
209 Following
231 Posts
Humanity's long term future can be amazing - let's make sure it is.
Working at @alter_org_il, @TechnionLive
Previously, @asaferfuture, @1DaySooner, @superforecaster, @PardeeRAND
Twitter: https://twitter.com/davidmanheim
Google Scholar: https://scholar.google.com/citations?user=6-M1ZIUAAAAJ&hl=en
Bluesky: davidmanheim.bsky.social
AI isn't generally intelligent until it matches humans at the ability to claim that despite advances, current capabilities still aren't enough to count as general intelligence.

What lessons about AI risks can we learn from the history of previous arms races?

Here's @mattreynolds's latest, trying to answer that question:
https://www.wired.com/story/the-making-of-the-atomic-bomb-artificial-intelligence/

What serious adverse events might a malaria human challenge trial find?

OK, that's *really* not what I expected to see reported.

OK, Twitter's CAPTCHA based on spatial reasoning and recognition is pretty impressive.

cc: @lxrjl @maartengm @SturnioloSimone

This week (so far) in @metaculus AGI timelines: April 2039 -> September 2036.

https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/

When will the first general AI system be devised, tested, and publicly announced?

"Policymakers often state that algorithmic systems should be explainable -- but how are the explainability tools created by the ML research community actually used in policy?"

"Five policy uses of algorithmic explainability" by Matthew O'Shaughnessy https://arxiv.org/abs/2302.03080

Five policy uses of algorithmic transparency and explainability

The notion that algorithmic systems should be "transparent" and "explainable" is common in the many statements of consensus principles developed by governments, companies, and advocacy organizations. But what exactly do policy and legal actors want from these technical concepts, and how do their desiderata compare with the explainability techniques developed in the machine learning literature? In hopes of better connecting the policy and technical communities, we provide case studies illustrating five ways in which algorithmic transparency and explainability have been used in policy settings: in specific requirements for explanations; in nonbinding guidelines for internal governance of algorithms; in regulations applicable to highly regulated settings; in guidelines meant to increase the utility of legal liability for algorithms; and in broad requirements for model and data transparency. The case studies span a spectrum from precise requirements for specific types of explanations to nonspecific requirements focused on broader notions of transparency, illustrating the diverse needs, constraints, and capacities of various policy actors and contexts. Drawing on these case studies, we discuss promising ways in which transparency and explanation could be used in policy, as well as common factors limiting policymakers' use of algorithmic explainability. We conclude with recommendations for researchers and policymakers.

arXiv.org

"Policymakers often state that algorithmic systems should be explainable -- but how are the explainability tools created by the ML research community actually used in policy?"

"Five policy uses of algorithmic explainability" by Matthew O'Shaughnessy https://arxiv.org/abs/2302.03080

Five policy uses of algorithmic transparency and explainability

The notion that algorithmic systems should be "transparent" and "explainable" is common in the many statements of consensus principles developed by governments, companies, and advocacy organizations. But what exactly do policy and legal actors want from these technical concepts, and how do their desiderata compare with the explainability techniques developed in the machine learning literature? In hopes of better connecting the policy and technical communities, we provide case studies illustrating five ways in which algorithmic transparency and explainability have been used in policy settings: specific requirements for explanations; in nonbinding guidelines for internal governance of algorithms; in regulations applicable to highly regulated settings; in guidelines meant to increase the utility of legal liability for algorithms; and broad requirements for model and data transparency. The case studies span a spectrum from precise requirements for specific types of explanations to nonspecific requirements focused on broader notions of transparency, illustrating the diverse needs, constraints, and capacities of various policy actors and contexts. Drawing on these case studies, we discuss promising ways in which transparency and explanation could be used in policy, as well as common factors limiting policymakers' use of algorithmic explainability. We conclude with recommendations for researchers and policymakers.

arXiv.org
Unfortunately, most discussions of EA - by both proponents and critics - conflate all of these. Hopefully, this and the next couple posts I have planned are useful for identifying the actual disagreements.

New post about EA, trying to disentangle different claims that it makes.

EA is a philosophy, a set of priorities, a movement, and a community. This looks at the first, and suggests that there are distinct claims, some of which are widely agreed upon.

https://forum.effectivealtruism.org/posts/u4QDFsGNnoZbqbuZW/deconfusing-effective-altruism-the-philosophy

Deconfusing Effective Altruism: The Philosophy - EA Forum

In practice, the widely-endorsed goal that Effective Altruism has, of doing good, is entangled with a number of things which are less universally lauded - from the community to the implied philosophi…