The Rationality Trap - awful.systems

Via reddit's SneerClub. Thanks u/aiworldism. I have called LW a cult incubator for a while now, and while the term has not caught on, it is nice to see more reporting on the problem that LW makes you more likely to join a cult. https://www.aipanic.news/p/the-rationality-trap is the original link, for the people who don't like archive.is. I used the archive because I don't like Substack and want to discourage its use.

Move over TESCREAL, R9PRESENTATIONALism is here.

https://awful.systems/post/5308413

Move over TESCREAL, R9PRESENTATIONALism is here. - awful.systems

As found by @gerikson here [https://awful.systems/comment/8363036], more from the anti-anti-TESCREAL crowd: how the antis are actually R9PRESENTATIONALists. Ottokar expanded on their idea in a blog post, original link [https://recapitulation.substack.com/p/the-great-idolatry-what-is-r9presentationalism]. I have not read the bigger blog post yet btw, just assumed it would be sneerable and posted it here for everyone's amusement. Learn about your own true motives today. (This could be a troll, of course.)

Embryo selection, thinking about risk the wrong way: What we talk about when we talk about risk.

https://awful.systems/post/5244604

Embryo selection, thinking about risk the wrong way: What we talk about when we talk about risk. - awful.systems

Original title: ‘What we talk about when we talk about risk’. The article explains medical risk and why the polygenic embryo selection people think about it the wrong way. Includes a mention of one of our Scotts (you know the one). Non-archived link: https://theinfinitesimal.substack.com/p/what-we-talk-about-when-we-talk-about

From the comments of the LW article.

“I like and admire both Charles Stross and Greg Egan a lot but I think they both have “singularitarians” or “all of their biggest fans” or something like that in their Jungian Shadow.

I’m pretty sure they like money. Presumably they like that we buy their books? Implicitly you’d think that they like that we admire them. But explicitly they seem to look down on us as cretins as part of them being artists who bestow pearls on us… or something?”

Death and the Gorgon - Greg Egan

https://awful.systems/post/4328354

Death and the Gorgon - Greg Egan - awful.systems

Begrudgingly Yeast (@begrudginglyyeast.bsky.social) on bsky informed me that I should read the short story ‘Death and the Gorgon’ by Greg Egan, as he has a good handle on the subjects we talk about. We have talked about Greg before on Reddit [https://www.reddit.com/r/SneerClub/comments/12kpqsp/some_excerpts_from_zendegi_by_greg_egan_published/]. I was glad I did, so I'm going to suggest that more people do it. The only complaint you can have is that it gives no real ‘steelman’ airtime to the subjects it is being negative about. But well, he doesn't have to, he isn't the Guardian. Anyway, not going to spoil it, best to just give it a read. And if you are wondering, did the LessWrongers also read it? Of course: https://www.lesswrong.com/posts/hx5EkHFH5hGzngZDs/comment-on-death-and-the-gorgon (Warning: spoilers for the story.) (Note: I'm not sure this pdf was intended to be public; I did find it on Google, but it might not be meant to be accessible this way.)

Interview with a former Effective ~~Vampire~~ Altruist.

https://awful.systems/post/1201570

Interview with a former Effective ~~Vampire~~ Altruist. - awful.systems

Got the interview via Dr. Émile P. Torres on twitter [https://twitter.com/xriskology/status/1769902604501340627]. Somebody else sneered [https://twitter.com/meehawl/status/1769926850502185030]: ‘Makings of some fantastic sitcom skits here. “No, I can't wash the skidmarks out of my knickers, love. I'm too busy getting some incredibly high EV worrying done about the Basilisk. Can't you wash them?”’

More proof AI is not self-aware/intelligent: ASCII art.

https://awful.systems/post/1160561

More proof AI is not self-aware/intelligent: ASCII art. - awful.systems

Jailbreaking LLMs with ASCII art. Turns out LLMs are still computer programs, and sanitizing inputs is hard. Tagged NSFW as it isn't a bad take by tech people, but research showing that the fears of AI creating diamondoid viruses because we were mean to it are overblown. It cannot follow simple (for us intelligent humans) instructions not to do certain things. LLMs are extremely good at parsing things, however.

The Seven Deadly Sins of Predicting the Future of AI - 2017 post by Rodney Brooks on AI hype

https://awful.systems/post/816985

The Seven Deadly Sins of Predicting the Future of AI - 2017 post by Rodney Brooks on AI hype - awful.systems

Via this interview with Rodney (a GPT skeptic, according to GPT) Brooks, Just Calm Down About GPT-4 Already [https://spectrum.ieee.org/gpt-4-calm-down], I found this old (at least in internet terms; it is pre-plague, during the first Trump era) blog post, which some of you might find interesting to read. It points out a few flaws in AI-hype reasoning that are relevant to our sneering at the LW AGI hype.

Eliezer compliments Musk, Musk negs Eliezer

https://awful.systems/post/377601

Eliezer compliments Musk, Musk negs Eliezer - awful.systems

Some light sneerclub content in these dark times. Eliezer compliments Musk on the creation of Community Notes (a project which predates the takeover of twitter by a couple of years; see the join date: https://twitter.com/CommunityNotes). In reaction, Musk admits he never read HPMOR and suggests a watered-down Turing test involving HPMOR. Eliezer invents HPMOR wireheads in reaction to this.