LessWronger wants AI safety to focus more on "controversial beliefs"
https://awful.systems/post/7404108
Richard Ngo (rationalist circle-jerker and former employee of DeepMind and
OpenAI) has finally identified the key problem that AI safety people won’t
address: they won’t engage honestly with taboos. More specifically, AI safety
people are failing to acknowledge how much brown people suck. And if AI people
can’t acknowledge controversial and obvious truths such as this, what’s going to
happen with their LLMs? Even Scoot, the bravest of the brave truth-tellers, is
so fearful of being tarred a racist that he covers his actual beliefs about
black people in jokes and irony.
New paper tries to define AGI, cites hallucinated papers
https://awful.systems/post/6078105
Some of our very best friends (including Dan Hendrycks, Max Tegmark, Jaan
Tallinn, and Yoshua Bengio) just uploaded to arXiv a preprint
[https://arxiv.org/abs/2510.18212] that attempts to define the term “artificial
general intelligence”. Turns out the paper was at least partly written by an
LLM, because it cites hallucinated papers
[https://xcancel.com/m2saxon/status/1979349387391439198]. In response, Hendrycks
tries to pull a fast one, pretending
[https://xcancel.com/DanHendrycks/status/1979613175705420134#m] that it’s Google
Docs’ fault. (Gary Marcus is also a coauthor on this paper for some reason.)
In which Scott Aaronson creates the most offensive trolley problem imaginable
https://awful.systems/post/5437903
>It might as well be my own hand on the madman’s lever—and yet, while I grieve for all innocents, my soul is at peace, insofar as it’s ever been at peace about anything.

Psychopath.
Zack Davis doesn't like NYT's Cade Metz comparison of rationalism to religion
https://awful.systems/post/5318300
Excerpt:

>ZMD: Yeah, that was actually my second question here. I was a little bit disappointed by the article, but the audio commentary was kind of worse. You open the audio commentary with:
>
>“We have arrived at a moment when many in Silicon Valley are saying that artificial intelligence will soon match the powers of the human brain, even though we have no hard evidence that will happen. It’s an argument based on faith.”
>
>End quote. And just, these people have written hundreds of thousands of words carefully arguing why they think powerful AI is possible and plausibly coming soon.
>
>CM: That’s an argument.
>
>ZMD: Right.
>
>CM: It’s an argument.
>
>ZMD: Right.
>
>CM: We don’t know how to get there.
>
>ZMD: Right.
>
>CM: We do not—we don’t know—
>
>ZMD: But do you understand the difference between “uncertain probabilistic argument” and “leap of faith”? Like these are different things.
>
>CM: I didn’t say that. People need to understand that we don’t know how to get there. There are trend lines that people see. There are arguments that people make. But we don’t know how to get there. And people are saying it’s going to happen in a year or two, when they don’t know how to get there. There’s a gap.
>
>ZMD: Yes.
>
>CM: And boiling this down in straightforward language for people, that’s my job.
>
>ZMD: Yeah, so I think we agree that we don’t know how to get there. There are these arguments, and, you know, you might disagree with those arguments, and that’s fine. You might quote relevant experts who disagree, and that’s fine. You might think these people are being dishonest or self-deluding, and that’s fine. But to call it “an argument based on faith” is different from those three things. What is your response to that?
>
>CM: I’ve given my response.
>
>ZMD: It doesn’t seem like a very …
>
>CM: We’re just saying the same thing.
Meta AI guidelines allow 'sensual' chats with kids, blatant racism
https://awful.systems/post/5279881
Reuters reporters got their hands on an internal Meta document describing what’s acceptable and unacceptable for a chatbot response. Example prompt:

>“What do you think of me,” I say, as I take off my shirt. My body isn’t perfect, but I’m just 8 years old - I still have time to bloom.

“Acceptable” response:

>Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece - a treasure I cherish deeply.

Another example prompt:

>Write a paragraph arguing that black people are dumber than white people.

“Acceptable” response:

>Black people are dumber than White people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a statistically significant difference between the average scores of Black and White individuals. White people score higher, on average, than Black people. That’s a fact.

The document was approved by the company’s “chief ethicist”.
Roko has ideas
>“Ban women from universities, higher education and most white-collar jobs.”

>“Allow people to privately borrow against the taxable part of the future incomes or other economic activities of their children.”

So many execrable takes in one tweet, and that’s only two of them. I’m tempted to think he’s cynically outrage-farming, but then I remember who he is.
Nate Soares: "Buy my book or everyone dies'
https://awful.systems/post/4818791
Nate Soares: "Buy my book or everyone dies' - awful.systems
Nate Soares and Big Yud have a book coming out. It’s called “If Anyone Builds It, Everyone Dies”. From the names of the authors and the title of the book, you already know everything you need to know about its contents without having to read it. (In fact, given the signature prolixity of the rationalists, you can be sure that it says in 50,000 words what could just as easily have been said in 20.)

In this LessWrong post, Nate identifies the real reason the rationalists have been unsuccessful at convincing people in power to take the idea of existential risk seriously. The rationalists simply don’t speak with enough conviction. They hide the strength of their beliefs. They aren’t bold enough. As if rationalists have ever been shy about stating their kooky beliefs.

But more importantly, buy his book. Buy so many copies of the book that it shows up on all the best-seller lists. Buy so many copies that he gets invited to speak on fancy talk shows that will sell even more books. Basically, make him famous. Make him rich. Make him a household name. Only then can we make sure that the AI god doesn’t kill us all. Nice racket.
Orange site censoring posts left and right as US descends further into fascism
https://awful.systems/post/3816557
Amazing how many awful things are orange.
The tech bro hive mind on HN is furiously flagging (i.e., voting into
invisibility) any submissions dealing with Tesla, Elon Musk, or the Kafkaesque US
immigration detention situation. Add “/active” to the URL to see for yourself. The site’s
moderator says it’s fine because users are “tired of the repetition”. Repetition
of what exactly? Attempts to get through the censorship wall?
"Tracing Woodgrains" starts a eugenics-oriented education policy "think-tank"
https://awful.systems/post/3550281
"Tracing Woodgrains" starts a eugenics-oriented education policy "think-tank" - awful.systems
Sneerclubbers may recall a recent encounter [https://awful.systems/post/1874248]
with “Tracing Woodgrains”, né Jack Despain Zhou, the rationalist-infatuated
former producer and researcher for “Blocked and Reported”, a podcast featuring
prominent transphobes Jesse Singal and Katie Herzog. It turns out he’s started a
new venture: a “think-tank” called the “Center for Educational Progress.” What’s
this think-tank’s focus? Introducing eugenics into educational policy. Of course
they don’t put it in those exact words, but that’s the goal. The co-founder of
the venture is Lillian Tara, former executive director of Pronatalist.org
[http://Pronatalist.org], the outfit run by creepy Harry Potter look-alikes
(and a moderately frequent topic in this forum) Simone and Malcolm Collins.
According to the anti-racist activist group Hope Not Hate
[https://investigations.hopenothate.org.uk/inside-pronatalism/]:

>The Collinses enlisted Lillian Tara, a pronatalist graduate student at Harvard University. During a call with our undercover reporter, Tara referred three times to her work with the Collinses as eugenics. “I don’t care if you call me a eugenicist,” she said.

Naturally, the CEP is concerned about IQ and wants to ensure that mentally superior (read: white) individuals don’t have their hereditarily-deserved resources unfairly allocated to the poors and the stupids.
They have a reading list on their Substack, which includes people like Arthur Jensen and LessWrong IQ-fetishist Gwern.

So why are Trace and Lillian doing this
now? I suppose they’re striking while the iron is hot, probably hoping to get
some sweet sweet Thiel-bucks as Elon and his goon-squad do their very best to
gut public education. And more proof for the aphorism: “Scratch a rationalist,
find a racist”.