https://alecmuffett.com/article/114986
#EndOfWorld #GenAi #ai #alignment #apocalypse #doomerism #llm
An anecdote on how the public misunderstands #LLMs:
𝐐: Why can't you just re-program the AI?
𝐌𝐞: Hmm?
𝐐: The AI makes mistakes -- but it's just a computer program. Why can't you just open its code and fix the bugs?
𝐌𝐞:
"Ideas on Earth were badges of friendship or enmity. Their content did not matter.
...
Earthlings went on being friendly, when they should have been thinking instead. And even when they built computers to do some thinking for them, they designed them not so much for wisdom as for friendliness. So they were doomed."
Kilgore Trout on #LLMs providing a helpful-looking answer above all else.
"Breakfast of Champions" #Vonnegut 1972
3/3 D. Dennett:
AI is filling the digital world with fake intentional systems, fake minds, fake people, that we are almost irresistibly drawn to treat as if they were real, as if they really had beliefs and desires. And ... we won't be able to take our attention away from them.
... [for] the current #AI #LLMs ... like ChatGPT and GPT-4, their goal is truthiness, not truth.
#LLMs are more like historical fiction writers than historians.
2/3 D. Dennett:
the most toxic meme today ... is the idea that truth doesn't matter, that truth is just relative, that there's no such thing as establishing the truth of anything. Your truth, my truth, we're all entitled to our own truths.
That's pernicious, it's attractive to many people, and it is used to exploit people in all sorts of nefarious ways.
The truth really does matter.
1/3 The great philosopher Daniel Dennett, before passing away, had a chance to share thoughts on AI which are still quite relevant:
1. The most toxic meme right now is the idea that truth doesn't matter, that truth is just relative.
2. For Large Language Models like GPT-4 -- their goal is truthiness, not truth. ... Technology is in a position to ignore the truth and just feed us what makes sense to it.
https://bigthink.com/series/legends/philosophy-and-science/
#LLM #AI #truth #alignment
(Quotes in the following toots)
An #LLM foreseen in 1964:
"The Three Stigmata of Palmer Eldritch" Philip K. Dick
“You insert one of the Great Books, for instance Moby Dick, into the reservoid. Then you set the controls for long or short. Then for funny version, or same-as-book or sad version. Then you set the style indicator as to which classic Great Artist you want the book animated like. Dali, Bacon, Picasso… the medium-priced Great Books animator is set up to render in cartoon form the styles of a dozen system-famous artists; ...”
HHMI Janelia Beautiful Biology
HHMI Janelia has launched Beautiful Biology, a… beautiful initiative that:
“(…) aims to cultivate interest and curiosity in the life sciences through this portal of stunning images showcasing the largely invisible biological world.”
From their About page.
This is very much aligned with the motivation behind the Ocean: hidden life photo exhibition and the Cifonauta database itself—so I’m excited (no surprises).
The website is gorgeous. It has different view modes, like the Visual spectacle, an endless screen saver with clickable images, and the Scroll and Explore, a series of (outstanding) visual lectures on major concepts of biology.
You can also explore the collection through standard text categories or visually (!), through cell structures, body parts, or organism types.
Each image has an informative and well-crafted description, and even annotations highlighting specific features, so Beautiful Biology also stands out as an extraordinary educational resource.
I’m happy that three of my images are featured there:
Explore: https://www.hhmi.org/beautifulbiology/
"Jailbreaking LLMs with ASCII Art" -- these early days of AI alignment remind hackers legends from 80-90x, when everything was exposed barren without proper security. Hack anything with a telephone and a paper clip!
2/2 That "More Agents Is All You Need" paper reminded me of always using "average / typical" as a representation baseline.
Such as the case when our #GNN experiment was performing as well as #MLP on neighbourhood average 🥲️ I believe someone already wrote a paper this, just missed to title it "average instead of GNN is all you need (sometimes)".
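For anyone curious what that baseline looks like in practice, here is a minimal sketch of a neighborhood-average feature baseline (the function name, the adjacency-dict representation, and the toy graph are all illustrative, not from the original experiment):

```python
import numpy as np

def neighborhood_average(features, adjacency):
    """For each node, average the feature vectors of its neighbors.

    features:  (n_nodes, n_feats) array of node features
    adjacency: dict mapping node index -> list of neighbor indices
    Returns an (n_nodes, n_feats) array of neighborhood means,
    falling back to the node's own features when it has no neighbors.
    """
    out = np.empty_like(features, dtype=float)
    for node, neighbors in adjacency.items():
        if neighbors:
            out[node] = features[neighbors].mean(axis=0)
        else:
            out[node] = features[node]
    return out

# Tiny example: a 3-node path graph 0-1-2 with scalar features.
feats = np.array([[1.0], [3.0], [5.0]])
adj = {0: [1], 1: [0, 2], 2: [1]}
avg = neighborhood_average(feats, adj)
# Node 1's baseline representation is mean(1.0, 5.0) = 3.0
```

Feeding `avg` (optionally concatenated with `feats`) into a plain MLP gives the "average instead of GNN" baseline: if a GNN can't beat it, the message passing isn't earning its keep.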