Finally updated my #profile at https://speakerinnen.org  

Are you looking for a #talk, a #workshop or an #interview on the #socialimpact of #digitalization?

On #ai, #sustainability, or the #ideologies behind digitalization, such as #cybernetics, #gamification, #longtermism, or #effectivealtruism?

On why the thoughts and theories of those in power so heavily influence societies?

Are you looking for a #workshop on how to really interest people in #computersecurity?

Contact me.

And #boost if you like

SpeakerinnenListe

The Speakerinnen-Liste aims to increase the visibility of women at conferences, on panels, on talk shows, and everywhere public speaking takes place.

Longtermism – the "Spirit" of Digital Capitalism

The talk takes a social-scientific look at the ideology of longtermism. Its function in digital capitalism is...

Thinking about the "Dust Speck vs Torture" fallacy a lot recently… "Is torturing one guy for 50 years better than a huge number of people getting a speck of dust in their eye" 🧵 #Utilitarianism #EffectiveAltruism #Longtermism #TESCREAL @xriskology.bsky.social

@Dnmrules
I stopped watching #kurzgesagt videos when they came out with one praising effective altruism. Deep down, they're part of the #TESCREAL sect.

#effectiveAltruism #ea

New on my blog: Breakthrough Incentive Markets - a third path for science funding that combines democratic problem prioritization with market-driven resource allocation.
https://open.substack.com/pub/danielvanzant/p/breakthrough-incentive-markets?r=3j968v&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
Science today is trapped: academic funding favors incremental progress over radical ideas, while industry only invests where profits exist. Critical research falls through this gap.
What if we harnessed the same efficient mechanisms that drive innovation in the private sector to fund science that solves our most important problems?
Breakthrough Incentive Markets create outcome pools for specific scientific problems. Investors buy positions, fund promising research, and earn returns when breakthroughs happen.
The key innovation: Investors' returns are higher when solutions come sooner. This creates powerful incentives for speed, collaboration, and funding approaches too radical for grants but too public-good-oriented for industry.
Everyone does what they do best: The public identifies important problems through donations, markets efficiently allocate resources, and researchers focus on innovation instead of grant-writing.
Read the full proposal on my blog. I'd love some constructive criticism on this idea. #EffectiveAltruism #ScienceFunding #metascience
Breakthrough Incentive Markets

Aligning financial incentives with scientific progress to solve humanity's most urgent challenges
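The core mechanism described in the thread (outcome pools where investor returns shrink the longer a breakthrough takes) can be sketched in a few lines. This is a minimal illustration, not the proposal's actual design: the linear decay, the 3x cap, and all names here are my assumptions.

```python
from dataclasses import dataclass

@dataclass
class OutcomePool:
    """Hypothetical outcome pool for one scientific problem.

    Donations fund the prize; investors buy positions that pay out when
    the problem is solved, with larger returns for earlier solutions.
    The linear decay and the multiplier cap are illustrative assumptions.
    """
    prize: float          # total donated payout pool
    horizon_years: float  # after this, a position only returns its stake

    def payout_multiplier(self, years_to_breakthrough: float) -> float:
        """Multiplier on an investor's stake: highest at t=0,
        falling linearly to 1.0 (money back) at the horizon."""
        t = min(max(years_to_breakthrough, 0.0), self.horizon_years)
        max_multiplier = 3.0  # assumed cap, not from the proposal
        return max_multiplier - (max_multiplier - 1.0) * t / self.horizon_years

pool = OutcomePool(prize=1_000_000, horizon_years=10)
print(pool.payout_multiplier(0))   # immediate breakthrough -> 3.0
print(pool.payout_multiplier(10))  # breakthrough at horizon -> 1.0
```

Any decreasing payout curve produces the speed incentive the post describes; the linear form is just the simplest choice to show the shape of the mechanism.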

The more I hear about #EffectiveAltruism, #Rationalism, and the followers thereof, the luckier I feel that I didn't accept a Silicon Valley job when I was young.

That shit is beyond insane, and I think the person that I was when I was considering SV jobs would fall for it super hard, because I was a naive arrogant little shit.

I will be attending EAGxPrague conference in May.

I have been a big fan of https://80000hours.org for some time and given my background, I am interested in AI safety and also in "AI for good".

This is my first in-person involvement with the effective altruism community. I am well aware that there are some controversies around the movement, so I am quite curious about what I find when I finally meet the community in person.

#ai #aisafety #effectivealtruism

You have 80,000 hours in your career.

This makes it your best opportunity to have a positive impact on the world. If you're fortunate enough to be able to use your career for good, but aren't sure how, we can help.

80,000 Hours
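The headline figure is simple arithmetic; a quick sanity check, assuming the commonly cited breakdown (40 hours a week, 50 weeks a year, a 40-year career), which the post itself does not spell out:

```python
# Rough derivation of the 80,000-hours figure.
# The breakdown below is the commonly cited assumption, not from the post.
hours_per_week = 40
weeks_per_year = 50
career_years = 40
print(hours_per_week * weeks_per_year * career_years)  # 80000
```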

The two most recent episodes of #BioUnethical with David Thorstad, Emily Largent, and @GovindPersad were very (very!) good.

Hosts Leah Pierson and Sophie Gibert may be doing reflective discussion better than anyone — outstanding stage setting, questioning, improvising, etc.

https://www.biounethical.com

#bioethics #appliedEthics #Philosophy #medicine #health #biology #economics #law #psychology #epistemology #science #longtermism #effectiveAltruism #xRisk #existentialRisk

Bio(un)ethical | Bioethics Podcast

Bio(un)ethical is a podcast miniseries where Leah Pierson and Sophie Gibert interview experts on bioethical issues, and question existing norms in medicine, science, and public health.

Bio(un)ethical

"A couple years ago, Oliver Habryka, the CEO of Lightcone, a company affiliated with LessWrong, published an essay asking why people in the rationalism, effective altruism and AI communities “sometimes go crazy”.

Habryka was writing not long after Sam Bankman-Fried, a major funder of AI research, had begun a spectacular downfall that would end in his conviction for $10bn of fraud. Habryka speculated that when a community is defined by a specific, high-stakes goal (such as making sure humanity isn’t destroyed by AI), members feel pressure to conspicuously live up to the “demanding standard” of that goal.

Habryka used the word “crazy” in the non-clinical sense, to mean extreme or questionable behavior. Yet during the period when Ziz was making her way toward what she would call “the dark side”, the Berkeley AI scene seemed to have a lot of mental health crises.

“This community was rife with nervous breakdown,” a rationalist told me, in a sentiment others echoed, “and it wasn’t random.” People working on the alignment problem “were having these psychological breakdowns because they were in this environment”. There were even suicides, including of two people who were part of the Zizians’ circle.

Wolford, the startup founder and former rationalist, described a chicken-and-egg situation: “If you take the earnestness that defines this community, and you look at civilization-ending risks of a scale that are not particularly implausible at this point, and you are somebody with poor emotional regulation, which also happens to be pretty common among the people that we’re talking about – yeah, why wouldn’t you freak the hell out? It keeps me up at night, and I have stuff to distract me.”

A high rate of pre-existing mental illnesses or neurodevelopmental disorders was probably also a factor, she and others told me."

https://www.theguardian.com/global/ng-interactive/2025/mar/05/zizians-artificial-intelligence

#SiliconValley #Transhumanism #EffectiveAltruism #Rationalism #AI

They wanted to save us from a dark AI future. Then six people were killed

How a group of Silicon Valley math prodigies, AI researchers and internet burnouts descended into an alleged violent cult

The Guardian