Devs and the Culture of Tech - Final part

https://awful.systems/post/2207149


Hello all. People were very kind when I originally posted the start of this series. I've refrained from spamming you with every part, but I thought I'd post to say the very final installment is done. I got a bit weird with it this time, as I felt like I had an infinite amount to say, all of which only barely got to the underlying point I was trying to make. I cut so much of what I wrote, it's ridiculous. Anyway, now the series is done I'm going to move on to smaller, discrete pieces as I work on my book about Tech Culture's propensity for far-right politics. I'll be dropping interesting stuff I find, examples of Right Libertarians saying ridiculous things, so follow along if that's your jam.

Ah, hell yeah, the much-anticipated finale.

Gonna give particular praise to the opening, because this really caught my eye:

Tech culture often denigrates humans through its assumptions that human skills, knowledge and functions can be improved through their replacement by technological replacements, and through transhumanist narratives that rely on a framing of human consciousness as fundamentally computational.

I’ve touched on the framing of human consciousness part myself - seems we may be on the same wavelength.

As for the whole “replacement by technological replacements” part…well, we’ve all seen the AI art slop-nami, it’s crystal fucking clear what you’re referring to.

Some Quick and Dirty Thoughts on "The empty brain" - awful.systems

This started as a summary of a random essay Robert Epstein (fuck, that’s an unfortunate surname) cooked up back in 2016, and evolved into a diatribe about how the AI bubble affects how we think of human cognition. This is probably a bit outside awful’s wheelhouse, but hey, this is MoreWrite.

The TL;DR

The general article concerns two major metaphors for human intelligence:

* The information processing (IP) metaphor, which views the brain as some form of computer (implicitly a classical one, though you could probably cram a quantum computer into that metaphor too)
* The anti-representational metaphor, which views the brain as a living organism, one that constantly changes in response to experiences and stimuli, and which contains jack shit in the way of any computer-like components (memory, processors, algorithms, etcetera)

Epstein’s general view is, if the title didn’t tip you off, firmly on the anti-rep metaphor’s side, dismissing IP as “not even slightly valid” and openly arguing for dumping it straight into the dustbin of history.

His main piece of evidence for this is a basic experiment, where he has a student draw two images of dollar bills - one from memory, and one with a real dollar bill as reference - and compare the two. Unsurprisingly, the image made with a reference blows the image from memory out of the water every time, which Epstein uses to argue against any notion of the image of a dollar bill (or anything else, for that matter) being stored in one’s brain like data in a hard drive. Instead, he argues that the student had re-experienced seeing the bill when drawing it from memory, their ability to do so coming from how their brain had changed at the sight of many a dollar bill up to that point.

Another piece of evidence he brings up is a 1995 paper from Science [https://pubmed.ncbi.nlm.nih.gov/7725104/] by Michael McBeath regarding baseballers catching fly balls. Where the IP metaphor reportedly suggests the player roughly calculates the ball’s flight path with estimates of several variables (“the force of the impact, the angle of the trajectory, that kind of thing”), the anti-rep metaphor (given by McBeath) simply suggests the player catches the ball by moving in a manner which keeps the ball, home plate and the surroundings in a constant visual relationship with each other.

The final piece I could glean from this is a report in Scientific American [https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/] about the Human Brain Project (HBP), a $1.3 billion project launched by the EU in 2013 with the goal of simulating the entire human brain on a supercomputer. Said project went on to become a “brain wreck” less than two years in (and eight years before its 2023 deadline) - a “brain wreck” Epstein implicitly blames on the whole thing being guided by the IP metaphor.

Said “brain wreck” is a good place to cap this section off - the essay is something I recommend reading for yourself (even if I do feel its arguments aren’t particularly strong), and it’s not really the main focus of this little ramblefest. Anyways, onto my personal thoughts.

Some Personal Thoughts

Personally, I suspect the AI bubble’s made the public a lot less receptive to the IP metaphor these days, for a few reasons:

1) Artificial Idiocy

The entire bubble was sold as a path to computers with human-like, if not godlike intelligence - artificial thinkers smarter than the best human geniuses, art generators better than the best human virtuosos, et cetera. Hell, the AIs at the centre of this bubble are running on neural networks [https://en.wikipedia.org/wiki/Neural_network_(machine_learning)], whose functioning is based on our current understanding of the human brain.

What we instead got was Google telling us to eat rocks and put glue on pizza [https://www.bbc.co.uk/news/articles/cd11gzejgz4o], chatbots hallucinating everything under the fucking sun, and art generators drowning the entire fucking internet in pure unfiltered slop, identifiable by the uniquely AI-like errors it makes. And all whilst burning through truly unholy amounts of power and receiving frankly embarrassing levels of hype in the process.

(Quick sidenote: even a local model running on some rando’s GPU is a power hog compared to what it’s trying to imitate - digging around online indicates your brain uses only 20 watts of power [https://hypertextbook.com/facts/2001/JacquelineLing.shtml] to do what it does.)

With the parade of artificial stupidity the bubble’s given us, I wouldn’t fault anyone for coming to believe the brain isn’t like a computer at all.

2) Inhuman Learning

Additionally, AI bros have repeatedly and incessantly claimed that AIs are creative and that they learn like humans, usually in response to complaints about the Biblical amounts of art stolen for AI datasets. Said claims are, of course, flat-out bullshit - last I checked, human artists only need a few references to actually produce something good and original, whilst your average LLM will produce nothing but slop no matter how many terabytes upon terabytes of data you throw at its dataset.

This all arguably falls under the “Artificial Idiocy” heading, but it felt necessary to point out - these things lack the creativity or learning capabilities of humans, and I wouldn’t blame anyone for taking that to mean that brains are uniquely unlike computers.

3) Eau de Tech Asshole

Given how much public resentment the AI bubble has built towards the tech industry (which I covered in my previous post [https://awful.systems/post/2031653]), my gut instinct’s telling me that the IP metaphor is also starting to be viewed in a harsher, more “tech asshole-ish” light - not merely a reductive/incorrect view of human cognition, but a sign you put tech over human lives, or don’t see other people as human. Of course, AI providing a general parade of the absolute worst scumbaggery we know (with Mira Murati being an anti-artist scumbag [https://nitter.poast.org/tsarnick/status/1803920566761722166] and Sam Altman being a general creep [https://www.theverge.com/2024/5/20/24161253/scarlett-johansson-openai-altman-legal-action] as the biggest examples) is probably reinforcing that perception, as are all the active attempts by AI bros to mimic real artists (exhibit A [https://twitter.com/anukaakash/status/1806854002640081345], exhibit B [https://twitter.com/GenelJumalon/status/1810815644331278576]).
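Fun aside: that fly-ball result is easy to poke at numerically. Here’s a toy sketch of my own (not McBeath’s actual model - this is the closely related “optical acceleration” observation, assuming a drag-free projectile and a stationary fielder): for a fielder standing exactly where the ball will land, the tangent of the ball’s elevation angle grows at a perfectly constant rate, while standing anywhere else makes that rate drift. That constant rate is exactly the sort of stable visual relationship the anti-rep account says players can exploit, with no trajectory calculation needed.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def optical_angle_rates(v0=28.0, launch_deg=55.0, fielder_offset=0.0, dt=0.01):
    """Track how fast tan(elevation angle) of a fly ball grows, as seen by
    a stationary fielder. fielder_offset is how far (in metres) the fielder
    stands beyond the ball's landing point. Returns the per-step growth
    rates of the tangent over the whole flight."""
    vx = v0 * math.cos(math.radians(launch_deg))
    vy = v0 * math.sin(math.radians(launch_deg))
    landing = 2 * vx * vy / G            # drag-free projectile range
    fx = landing + fielder_offset        # where the fielder stands
    rates, prev, t = [], 0.0, 0.0
    while True:
        t += dt
        by = vy * t - 0.5 * G * t * t    # ball height
        if by <= 0:                      # ball has landed
            break
        bx = vx * t                      # ball horizontal position
        tan_elev = by / (fx - bx)        # tangent of the elevation angle
        rates.append((tan_elev - prev) / dt)
        prev = tan_elev
    return rates

on_spot = optical_angle_rates(fielder_offset=0.0)
too_deep = optical_angle_rates(fielder_offset=15.0)
spread = lambda r: max(r) - min(r)
print(f"rate spread, standing at the landing point: {spread(on_spot):.6f}")
print(f"rate spread, standing 15 m too deep:        {spread(too_deep):.6f}")
```

Running it, the on-the-spot spread comes out as essentially floating-point noise, while the mis-placed fielder’s rate drifts substantially over the flight - so “move until the image stops accelerating” is a workable catching strategy that never estimates force, angle, or anything else.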

Funnily enough that was the bit I wrote last just before hitting post on Substack. A kind of “what am I actually trying to say here?” moment. Sometimes I have to switch off the academic bit of my brain and just let myself say what I think to get to clarity. Glad it hit home.

Thanks for the link. I’m going to read that piece and have a look through the ensuing discussion.

Forgot to say: yes, AI-generated slop is one key example, but often I’m also thinking of other tasks that are presumed to be basic because humans can be trained to perform them with barely any conscious effort - things like self-driving vehicles, production line work, call center work, etc. Like the fact that “full self-driving” still requires supervision: what often happens with tech automation is that it creates things that de-skill the role, or perhaps speed it up, but still require humans in the middle to do things that are simple for us but difficult to replicate computationally. Humans become the glue, slotted into all the points of friction and technical inadequacy, to keep the whole process running smoothly.

Unfortunately this usually leads to downward pressure on the wages of the humans, and the expectation that they match the theoretical speed of the automation rather than recognising that the human is the actual pace-setter - because without them the pace would be 0.

It seems to me like when you say “human minds are computational things” you can mean this in several ways, which can be roughly categorized by what your ideas of “minds” and of “computational things” are.

You can use “computational things” as an extremely expansive category, capable of containing vast complexity but potentially completely impractical to fully recreate on a drawing board. Used this way, the speaker would often agree with the statement, but it wouldn’t belittle the phenomenon that is the human mind.

Or you can use “human minds” in a way that sees them as something relatively simple - kinda like a souped-up 80486 computer, maybe. Nothing all too irreplaceable or special, in any case. Maybe an Athlon can be sentient and sapient! Most people would probably disagree with the statement when it’s meant like that, because it small-mindedly minimizes people.

Then there’s the tech take version, which somehow does both: “Computation is everything and everything is computation, but also I have no appreciation for complexity, nor any conception of how much of the human mind I can’t see”. Within the huge canvas of what can be conceived if you think in computational terms, they opt for tiny crayon scribbles.

Those "you"s were meant as general yous. ESL here, sorry.

Shorter: “Minds are computers” can imply views of (1) minds as simpler than they are, (2) computers as potentially very complex and general, or (3) both.

1 and 3 are not only wrong but also bad.

I guess (4), neither, is also thinkable, but internally quite contradictory.

Thanks for this series. I really enjoyed reading it (even though it reminded me that Yud’s Dust-Specks-vs.-Torture bullshit exists, which I had successfully banished from my mind).

I remember watching Devs a few years back and I think you put everything I felt about that show into words much better than I ever could have.

I really should have done a full risk assessment before invoking the dust specks mind virus, my apologies.

Thanks for the kind feedback, I’m glad that my thoughts resonated with people. Sometimes I start these things and wonder if I’ve just analysed my way into a weird construct of my own creation.

I think one of my favourite parts was this footnote in part 4:

"Though the idea that politics is ingrained into material design choices is a growing consensus within recent Science and Technology Studies work. I once wrote a piece on the role of software design in shaping people’s interpretation of texts - because that’s how I get wild on Friday nights."

(N.b. link preserved from original)

This is obviously a joke, but the truth in the joke is that you’re a huge nerd, and it makes me happy to see people like you on the internet. There was a while where I perceived the tech-people you critique as “my people”, but ultimately the euphoria of finding community dissolved into isolation and dread, as I grew to understand the impact of the cultish culture of tech.

the truth in the joke is that you’re a huge nerd

Oh absolutely. Yes, I think partly my fascination with all of this is that I could quite easily have gone the tech bro hype train route myself. I’m naturally very good at getting into the weeds of tech and understanding how it works. I love systems (love factory, strategy and logistics games), love learning techy skills purely to see how things work, etc. I taught myself to code just because the primary software for a particular form of qualitative analysis annoyed me. I feel I’m a prime candidate for this whole world.

But at the same time I really dislike the impoverished viewpoint that comes with being only in that space. There’s just some things that don’t fit that mode of thought. I also don’t have ultimate faith in science and tech, probably because the social sciences captured me at an early age, but also because I have an annoying habit of never being comfortable with what I think, so I’m constantly reflecting and rethinking, which I don’t think gels well with the tech bro hype train. That’s why I embrace the moniker of “Luddite with an IDE”. Captures most of it!