The "effective altruism" and "effective accelerationism" ideologies that have been cropping up in AI debates are just a thin veneer over the typical blend of Silicon Valley techno-utopianism, inflated egos, and greed. Let's try something else.

https://newsletter.mollywhite.net/p/effective-obfuscation

#ArtificialIntelligence #AI #AIEthics #EffectiveAltruism #EffectiveAccelerationism #newsletter #CitationNeeded

Effective obfuscation

Silicon Valley's "effective altruism" and "effective accelerationism" only give a thin philosophical veneer to the industry's same old impulses.

Citation Needed

Both effective altruism and effective accelerationism embrace as a given the idea of a super-powerful artificial general intelligence being just around the corner, an assumption that leaves little room for discussion of the many ways that AI is harming real people today.

#ArtificialIntelligence #AI #AIEthics #EffectiveAltruism #EffectiveAccelerationism #newsletter #CitationNeeded

Effective accelerationism has found an ally in Marc Andreessen, but his recent manifesto exposes that he just wants to go back to the old days when tech founders were uncritically revered, and when obstacles between him and staggering profits were nearly nonexistent.

#ArtificialIntelligence #AI #AIEthics #EffectiveAltruism #EffectiveAccelerationism #newsletter #CitationNeeded

@molly0xfff if only they looked up the definition of “effective” before pretending they know what they’re doing!

@molly0xfff
If these Tech Bros are right, and AGI really is around the corner, then that's all the more reason to be far more critical of them well before they develop it. Even if we can solve the alignment problem safely, AGI requires its creators to make a lot of serious decisions about what it's going to be able to do and what sort of goals it's going to have, and right now, these are not the people we want making those decisions.

Real AI safety isn't about consolidating control of the direction and use of the technology in the hands of a few rich people; it's about minimizing the harm those people can do.

@molly0xfff I need you, and am super glad to have found you. I'm an old fuddy without any real grasp of what happened to my computer once it got connected to the rest of 'em. My dumb ass still thinks in zeroes and ones. Help me, Obi-Wan Molly!

and fuck hope!

@molly0xfff I just read this, thanks for all the thoughtful unravelling of what is largely PR prose by people who are great at PR prose.
@panmanphil @molly0xfff and how many of the PR staff understand that, with this generative ML (AI), they will be the first to go!
@molly0xfff That was a brilliant f*cking post and the writer should be given a small principality over which to rule.
@molly0xfff I very much enjoyed this issue of your newsletter for its critical eye. Reminds me why I subscribe to you and Paris Marx. Thank you for all your work!
@molly0xfff Hear, hear! There's something about having a billion dollars that seems to make a person irredeemably narcissistic and gives rise to these sorts of pseudo-philosophical, self-serving ideologies.
@molly0xfff This speaks straight from my soul, thank you.

@molly0xfff I am not sure they long for the good old days though; I think the barriers are still virtually nonexistent.

But I do think that a lot of the EA-adjacent crowd discards criticism because they assume their critics simply don't understand, that they aren't as smart as they are.

@molly0xfff Similar to some of my tech friends being shocked, shocked that when there was an economic slowdown, there were layoffs and salary cuts. Because that previously affected “society”, but they always thought of society as “other people”. I think this is a very similar phenomenon.

@molly0xfff Why is everyone so eager to drive a wedge between the AI ethics folks and the AI safety folks?

The AI systems we have right now are not aligned with humanity. They have not gotten more ethical as capabilities have expanded. Extrapolate that to the future and you see a convergence. AI doesn't need to be conscious or superhuman to devastate humanity, just given the power.

Who benefits from this conflict? The people who meet quarterly goals by empowering AI at the expense of others.

@trystimuli @molly0xfff: Maybe because the EA/rationalist crowd are causing actual problems for those who are looking at legitimate, realistic near-term problems with AI by focusing on their science fiction pipe dreams.
@raktheundead @molly0xfff What problems are these? The closest I've heard is competition for resources, which, you know, seems like a call for cooperation on common goals, not dismissing each other's reasons for those goals.

@trystimuli @molly0xfff: Bias baked into the models, which causes prejudice against people of certain backgrounds, including racial bias: https://www.nasdaq.com/articles/heres-why-a.i.-bias-caused-80-of-black-mortgage-applicants-to-be-denied

AI being used to supercharge surveillance capitalism: https://www.cigionline.org/articles/ungoverned-space/

And there's the question of where generative AI models get their training data, which is regularly vacuumed up from copyrighted content whose original authors/creators see none of the proceeds, thereby further degrading the rights of smaller creators.

@raktheundead @molly0xfff I think you have the AI safety people confused with the AI capabilities people.

Capabilities research is rife with problems. Problems that both AI safety people and AI ethics people want to prevent. That nicely demonstrates the conflict between capabilities and ethics/safety. But what problems does AI safety research cause AI ethics research?

@trystimuli @molly0xfff: The EA/rationalist crowd's insistence on wanking on about Roko's basilisk and paperclip machines and stupid science fiction concepts like that impedes reasonable, well thought-out discussion about the problems that AI is actually likely to cause, especially in the near future. It sucks up the oxygen in the room.

@raktheundead @molly0xfff Similar to how people talking about fusion power impedes reasonable discussion of solar energy policy? Or how worst case scenario modelling of climate change impedes work on the current effects of climate change?

But the point is that you don't have to agree with people on everything in order to work with them on common interests; you just need to (mostly) agree on those common interests.

@trystimuli @molly0xfff: Nuclear fusion advocates generally aren't cultists, and they actually have *some* degree of practical success in illustrating what they're doing. The rationalist movement, on the other hand, was created by a charlatan with no computer science experience who is otherwise best known for a shitty Harry Potter fan fiction.

This is not a golden mean situation.

@trystimuli @molly0xfff
You seem to be operating under the strange misconception that AI systems have alignment apart from the use to which they are put.

@FeralRobots

@molly0xfff

Suppose a variety of systems are put to use screening loan applicants. These different systems will produce different results for the same applicants. The downstream effects of those different systems are what I would call alignment. Racial and gender bias in such systems would be an example of misalignment with humanity at large, and those are well documented in both AI and human systems.

@trystimuli @molly0xfff
But where does the alignment come from? AI certainly could be making its own rules, but it makes no sense to me to call that "alignment" - it's just performing as designed.
"Alignment" in that case is the decision to abdicate human-in-loop decision processes to an algorithm. Which is nothing new, we've been doing it for literally thousands of years, so if you might say e.g. 'The Code of Hamurabi aligned itself with human interests' then sure, AI can align itself.

@FeralRobots @molly0xfff A system is aligned with a population to the extent that its effects benefit that population. And yes, humans have been working on alignment problems (for systems made up of humans) for a very long time, though mostly not aligning with humanity so much as a favored subset.

I think the Code of Hammurabi did not dictate how to change the system, so I wouldn't say it aligned itself (and it aligned not with humanity so much as with Babylonia).

@FeralRobots @molly0xfff That said, I do think some existing AI systems do align themselves over time (e.g. spam filters), but for most, alignment is determined during training and by how they're integrated into larger systems, and doesn't really change over time.
@molly0xfff some of the most batshit people I’ve heard of this year
@molly0xfff I'm going to try juggling burning coconuts. I feel like the AIs won't be able to do that for a few more weeks.

@molly0xfff it isn't surprising that the latest "disruptive tech" is being driven by people who are basically intelligent crackpots. These "effective a*" movements are right in line with the mindset of influential pioneers in the personal computer industry since the start. For example, the founders of IMSAI were est zealots and made all of the company's employees take Erhard Seminars Training, and these new EA movements very much have the same vibe.

Maybe smart people who are Good at Tech might not all be good at everything, actually, and might not even be rational people who can be trusted to shape society, shocking as that sounds!

@molly0xfff I will be quoting you on “I must at this point remind you that this is a man who built a web browser, not goddamn Beowulf.”
@molly0xfff
Love it. Thanks for sharing.
🙂​

@molly0xfff

This is great! The Silicon Valley pump-and-dump Ponzi money machine requires a shiny new steaming pile of nonsense every couple of years to suck in investor dollars and cash out mountains of worthless stock before the shiny object's inevitable collapse. Rinse and repeat. AI and effective-whatever are the latest shiny objects. Great job pulling the curtain back.

@molly0xfff Excellent article and I love listening to it narrated.

"effective altruism" just seems like the latest sheep's clothing for selfish libertarian ideals.

Though it also evokes whispers of trickle-down economics: "if I'm extremely wealthy, then everyone benefits from my potential charity".

@molly0xfff We are quickly approaching the reality of William Gibson's Neuromancer. Having lived in SV till 2006, I can tell you it's only the obscenely wealthy who think this. I worked for someone who spent his time in his plane documenting the changes to the coastline and won a case against Barbra Streisand.
@molly0xfff “No particular allegiance to the biological substrate” sounds to me like a straight-up call for genocide.

@molly0xfff

Those are just more money-collecting sects.

There is no ground for discussing anything with them, nor with their all-justifying "longtermism" view.

@molly0xfff Great article!

"Who wouldn’t want to be effective in their altruism, after all?"

Precisely - it's these Orwellian phrases that are so deceptive. As the joke says, if you don't know the details, waterboarding at Guantánamo Bay sounds like a good time.

I feel these people are deadly dangerous to society because of their tremendous power and total lack of interest in the well-being of actual humans existing today.

There won't be a negotiated solution with these powerful psychopaths.

@molly0xfff It baffles me that the smart people quoted can have these stupid ideas.

"Peter Singer [argued] that a person should feel equally obligated to save a child halfway around the world" ignores everything about what it is to be human.

No. Your first responsibility is to your family and yourself, then neighbors and community, and tertiarily to strangers thousands of miles away.

As for the hatred of sustainability: assassination, for a first offense.

@TomSwirly @molly0xfff

Singer, famously, was amongst Jeffrey Epstein's last prominent defenders. The fact that he's also one of these guys' prominent defenders is troubling.

@passenger @molly0xfff Are you. Fscking. Kidding me?

I mean, I've talked to these people, and I guess it shouldn't be a surprise. "The age of consent is arbitrary, and these girls should be happy to help out rich people as sperm receptacles."

Still, you would think they would care a bit what non-evil people thought.

@TomSwirly @molly0xfff

I'm being a little uncharitable here; Singer has lots of opinions, many of which are bad, and only two of which were "Epstein is a great man" and "Effective altruism is a good movement."

He's also quietly backed away from his support for evolutionary psychology, for example. No longer backing a discredited pseudoscience is good, even if "no longer backing" implies "did back once in the past."

@TomSwirly @molly0xfff Singer is a wonderful little test of your moral reasoning

if it's something he would agree with, back to the drawing board
@molly0xfff What a sad existence it must be to come up with #EffectiveAltruism in the first place.

@molly0xfff I agree EA is generally used as an excuse for tech billionaires to justify themselves. But that doesn't mean EA itself is really problematic.

I agree with your criticisms of EA, but most of them seem to actually be criticisms of longtermism and not of EA as a whole. I think that the works of Peter Singer are actually useful and have done good. Unfortunately, they've been abused by rich people and turned into pseudo-philosophy.

@molly0xfff I think the question of whether EA is just an excuse for the rich to use, or actually a philosophy that intends to improve happiness in the world, does not have a simple answer. There are many people in EA who have sincere intentions, but there are also many who just want to exploit it to look more tech-bro friendly. We shouldn't just dismiss the whole thing as nonsense, but try to separate what's good from what's bad.

@molly0xfff I don't think this is a very widely held opinion. It's not super uncommon, but 80k Hours thinks that only a minority of people should try this. It's well understood in the community that many fields are more talent-constrained than money-constrained.

To add to that, 80k also recommends against working at a company that does harm for more money, which is advice that SBF blatantly ignored. I think this point makes sense from a utilitarian lens too.

https://80000hours.org/2015/07/80000-hours-thinks-that-only-a-small-proportion-of-people-should-earn-to-give-long-term/

80,000 Hours thinks that only a small proportion of people should earn to give long term

Norman Borlaug didn't make millions, his research just saved millions of lives. One of the most common misconceptions that we've encountered about 80,000 Hours is that we're exclusively or predominantly focused on earning to give. This blog post is to say definitively that this is not the case. Moreover, the proportion of people for whom we think earning to give is the best option has gone down over time.

80,000 Hours
@molly0xfff I was gonna try to not say more after that, but I saw something that wasn't true at all. Some people do argue that we should prioritize the long term over the short term, but the argument has nothing to do with "better" children. The argument is that, although it's harder to predict the outcome of long-term plans, there will be a lot more people in the future than there are today. Most money in effective altruism still goes to global health interventions benefitting current people.

@molly0xfff I wanted to bring up two other, more minor points:

Will MacAskill's opinion on sweatshops is that they are terrible, but forcing the people working in them to be unemployed would probably be even worse. So he wants the focus to be on economic development so people no longer need to work there.

I don't often see effective altruists talking about sentient AI, but I guess that depends on your definition of sentient. If it means "conscious", then definitely not.

@molly0xfff My freshman year of college, I took an intro to engineering class that spent 2-3 days talking about different ethical frameworks.

Everyone who took this class hated it, partly because nothing was taught well, including this ethics portion. Afterwards, I *jokingly* said that the only thing I learned about ethics was "you can justify anything with a utilitarian mindset."

I feel like the adherents of these "philosophies" unironically believe that and see no problem with it.

@molly0xfff

A beautifully written polemic, the conclusion of which is spot on.

@molly0xfff “Let’s try something else.” Best advice I’ve heard in months.

@molly0xfff SV really does have a God complex. They really do believe they're going to save humanity.

We’ve given them far too much power, credit, and influence.

@molly0xfff this is a really good, well-thought-out essay.

@molly0xfff STARS ARE ALIGNED!

Molly is AGREEING with GEORGEHOTZ who railed against effective altruists and Helen Toner just yesterday... Because safety!

@molly0xfff When I learned about these concepts, they only reaffirmed my convictions that the TechBro billionaire class must be toppled as soon as possible.