Some Quick and Dirty Thoughts on "The empty brain" - awful.systems
This started as a summary of a random essay Robert Epstein (fuck, that’s an unfortunate surname) cooked up back in 2016, and evolved into a diatribe about how the AI bubble affects how we think of human cognition. This is probably a bit outside awful’s wheelhouse, but hey, this is MoreWrite.

The TL;DR

The
general article concerns two major metaphors for human intelligence:

* The information processing (IP) metaphor, which views the brain as some form of computer (implicitly a classical one, though you could probably cram a quantum computer into that metaphor too)
* The anti-representational metaphor, which views the brain as a living organism, one which constantly changes in response to experiences and stimuli, and which contains jack shit in the way of any computer-like components (memory, processors, algorithms, et cetera)

Epstein’s
general view is, if the title didn’t tip you off, firmly on the anti-rep metaphor’s side, dismissing IP as “not even slightly valid” and openly arguing for dumping it straight into the dustbin of history.

His main piece of evidence for this is a basic experiment, where he has a student draw two images of a dollar bill - one from memory, and one with a real dollar bill as reference - and compare the two. Unsurprisingly, the image made with a reference blows the image from memory out of the water every time, which Epstein uses to argue against any notion of the image of a dollar bill (or anything else, for that matter) being stored in one’s brain like data on a hard drive. Instead, he argues that the student had re-experienced seeing the bill when drawing it from memory - an ability they’d gained because their brain had changed through seeing many a dollar bill over the years.

Another piece of evidence he brings up is a 1995 paper from
Science [https://pubmed.ncbi.nlm.nih.gov/7725104/] by Michael McBeath regarding baseball players catching fly balls. Where the IP metaphor reportedly suggests the player roughly calculates the ball’s flight path from estimates of several variables (“the force of the impact, the angle of the trajectory, that kind of thing”), the anti-rep metaphor (given by McBeath) simply suggests the player catches the ball by moving in a manner that keeps the ball, home plate, and the surroundings in a constant visual relationship with each other.

The final piece
I could glean from this is a report in Scientific American [https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/] about the Human Brain Project (HBP), a $1.3 billion project launched by the EU in 2013 with the goal of simulating the entire human brain on a supercomputer. Said project went on to become a “brain wreck” less than two years in (and eight years before its 2023 deadline) - a “brain wreck” Epstein implicitly blames on the whole thing being guided by the IP metaphor.

Said “brain wreck” is a good place to cap this section off - the essay is something I recommend reading for yourself (even if I do feel its arguments aren’t particularly strong), and it’s not really the main focus of this little ramblefest. Anyways, onto my personal thoughts.

Some Personal Thoughts
Personally, I suspect the AI bubble’s made the public a lot less receptive to the IP metaphor these days, for a few reasons:

1) Artificial Idiocy

The entire bubble was sold as a path to computers with human-like, if not godlike, intelligence - artificial thinkers smarter than the best human geniuses, art generators better than the best human virtuosos, et cetera. Hell, the AIs at the centre of this bubble run on neural networks [https://en.wikipedia.org/wiki/Neural_network_(machine_learning)], whose functioning is loosely based on our current understanding of the human brain.

What we instead got was
Google telling us to eat rocks and put glue on pizza [https://www.bbc.co.uk/news/articles/cd11gzejgz4o], chatbots hallucinating everything under the fucking sun, and art generators drowning the entire fucking internet in pure unfiltered slop, identifiable by the uniquely AI-like errors it makes. And all whilst burning through truly unholy amounts of power and receiving frankly embarrassing levels of hype in the process.

(Quick sidenote: Even a local model running on some rando’s GPU is a power-hog compared to what it’s trying to imitate - digging around online indicates your brain uses only 20 watts of power [https://hypertextbook.com/facts/2001/JacquelineLing.shtml] to do what it does.)

With the parade of artificial stupidity the bubble’s given us, I wouldn’t fault anyone for coming to believe the brain isn’t like a computer at all.

2) Inhuman Learning

Additionally, AI bros have repeatedly and incessantly
claimed that AIs are creative and that they learn like humans, usually in response to complaints about the Biblical amounts of art stolen for AI datasets. Said claims are, of course, flat-out bullshit - last I checked, human artists only need a few references to produce something good and original, whilst your average LLM will produce nothing but slop no matter how many terabytes upon terabytes of data you throw into its dataset.

This all arguably falls under the “Artificial Idiocy” heading, but it felt necessary to point out - these things lack the creativity and learning capabilities of humans, and I wouldn’t blame anyone for taking that to mean that brains are uniquely unlike computers.

3) Eau de Tech Asshole

Given how much public resentment the AI bubble
has built towards the tech industry (which I covered in my previous post [https://awful.systems/post/2031653]), my gut instinct’s telling me the IP metaphor is also starting to be viewed in a harsher, more “tech asshole-ish” light - not merely as a reductive/incorrect view of human cognition, but as a sign you put tech over human lives, or don’t see other people as human.

Of course, AI’s general parade of the absolute worst scumbaggery we know (with Mira Murati being an anti-artist scumbag [https://nitter.poast.org/tsarnick/status/1803920566761722166] and Sam Altman being a general creep [https://www.theverge.com/2024/5/20/24161253/scarlett-johansson-openai-altman-legal-action] as the biggest examples) is probably helping that along, alongside all the active attempts by AI bros to mimic real artists (exhibit A [https://twitter.com/anukaakash/status/1806854002640081345], exhibit B [https://twitter.com/GenelJumalon/status/1810815644331278576]).