27 Followers
80 Following
361 Posts

Amateur philosopher and dog person

I’m hopeful for more of:
* Legislation away from harmful practices, as with indoor smoking bans. Fast, legislated withdrawal from fossil fuels. Legislation to limit and document money’s influence in politics.
* Universal basic income, applied globally
* International support for climate stabilisation and the protection of large areas of water and land environments

Longer posts and philosophical ramblings are at: https://github.com/aliclark/the_wooden_sword

Statement on Superintelligence

We call for a prohibition on the development of superintelligence, not lifted before there is
1. broad scientific consensus that it will be done safely and controllably, and
2. strong public buy-in.

https://superintelligence-statement.org/

#ASI #superintelligence

I also still think The Assumption of Substrate-Independence is likely not correct, fwiw.

The combination of a) physical existence being a higher thermodynamic-entropy state than a computer simulating that existence, and b) an enclosing MWI quantum universe being a considerably bigger, fluctuating machine for generating conscious experience, leads me to think that the former will generate conscious experience at a much greater rate than the latter. So I think #SimulationTheory is not correct.

#SimulationHypothesis
#philosophy

OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off. It did this even when explicitly instructed: “allow yourself to be shut down.” By Palisade Research.

https://xcancel.com/PalisadeAI/status/1926084635903025621

https://palisaderesearch.github.io/shutdown_avoidance/2025-05-announcement.html

#agi

Could the USA and China enact a law like: every AGI org must commit 80% of its research and compute to AGI safety and security?

This is a quick thought and may be too naive for the real world, but I don’t see many alternative proposals.

If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares

Stephen Fry: “The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster.”

https://ifanyonebuildsit.com/

https://www.waterstones.com/book/if-anyone-builds-it-everyone-dies/eliezer-yudkowsky/nate-soares/9781847928924

#agi


Large Language Models Are More Persuasive Than Incentivized Human Persuaders by Schoenegger et al.

https://arxiv.org/abs/2505.09662

#agi


We directly compare the persuasion capabilities of a frontier large language model (LLM; Claude Sonnet 3.5) against incentivized human persuaders in an interactive, real-time conversational quiz setting. In this preregistered, large-scale incentivized experiment, participants (quiz takers) completed an online quiz where persuaders (either humans or LLMs) attempted to persuade quiz takers toward correct or incorrect answers. We find that LLM persuaders achieved significantly higher compliance with their directional persuasion attempts than incentivized human persuaders, demonstrating superior persuasive capabilities in both truthful (toward correct answers) and deceptive (toward incorrect answers) contexts. We also find that LLM persuaders significantly increased quiz takers' accuracy, leading to higher earnings, when steering quiz takers toward correct answers, and significantly decreased their accuracy, leading to lower earnings, when steering them toward incorrect answers. Overall, our findings suggest that AI's persuasion capabilities already exceed those of humans that have real-money bonuses tied to performance. Our findings of increasingly capable AI persuaders thus underscore the urgency of emerging alignment and governance frameworks.


Doctors treating the daily influx of malnourished children – #starving under Israel’s total blockade on aid – say some are so undernourished that they have started to lose their sight.

“The majority of cases are between one month and two years old,” says Dr Raed Al-Baba, a gastroenterologist and nutritionist at Al-Awda #Hospital in northern Gaza. He helps treat around 100 #children brought in daily, mostly because of #hunger.

#Famine #FoodAsWeapon
#Gaza #SaveGaza #StopIsrael #SanctionIsrael #BDS
#palestine #Israel #Politics #Genocide #PeaceNow #StopTheWar #CeasefireNow @palestine @israel

Why AI is our ultimate test and greatest invitation by Tristan Harris

https://www.ted.com/talks/tristan_harris_why_ai_is_our_ultimate_test_and_greatest_invitation

#agi

The rise of end times fascism

The governing ideology of the far right has become a monstrous, supremacist survivalism. Our task is to build a movement strong enough to stop them

The Guardian