Christopher Schwartz

6 Followers
34 Following
27 Posts
I describe myself as a “simulation ethicist,” studying how AI-generated or manipulated “realities” can augment or deteriorate knowledge practices, institutional workflows, and human decision-making. A philosopher by academic training and a former journalist, I also do work in wargaming, counter-surveillance, technology adoption, and various topics in philosophy.
LinkedINhttps://www.linkedin.com/in/schwartztronica
ORCiDhttps://orcid.org/0000-0001-6867-4202
Me: I want an ibuprofen for life. Where's the existential anti-inflammatory?
Rando of Nowhersus: It's called heroin.
Nietzsche: The very notion that life can be de-inflamed should set your will afire.
Rando of Nowhersus: Mm nope, I'm pretty sure heroin can do it.
Badiou: You must recognize the evental nature of your desire and achieve true subjectivity not by seeking an ibuprofen of life, but an aspirin of hope!
Rando of Nowhersus: Can I trade that aspirin for heroin?
Badiou: ... You need a procedure for the trade.
Nietzsche: Sick burn!
Badiou: Shut up, Friedrich.
Kant: Heyo, Rando, over here. I got the stuff you're looking for.
Rando of Nowheresus: Oh yeah?
Kant: Yeah man, it's imperative you check this shite out. It's so categorical, you're gonna transcend, man. You're gonna transcend.
Rando of Nowheresus: Goddamn it, Kant.
Huxley: *reaches for Kant's categorial imperative.
Kant: *slaps Huxley's hand away.
Me: What on earth is going on?

I've noticed in interdisciplinary work with specialists in technical fields that they always expect terms from other disciplines (like philosophy) that they don't recognize to be clearly defined the moment they are used, but don't feel the need to define their terms. This is particularly bedeviling when the intended audience themselves are non-technical.

Maybe this tendency arises from them coming from a discipline with such sharply defined concepts that it never even occurs to them that other people won't understand (although I've learned firsthand that these concepts are actually often very fuzzy in their own right). Maybe it's also an unconscious expression of a sort of privilege, insofar that society prioritizes technical disciplines over all others.

Whatever its cause, I've encountered the phenomenon from very well-meaning technical specialists, even those who are actively interested in overcoming the divides between disciplines and between audiences.

"When evil was authentic", a very very short story about AI: https://c-schwartz.com/2024/04/14/when-evil-was-authentic/
When evil was authentic

I recently had a conversation with a member of the vast online literati. Intelligent, well-read, normally quite articulate, and fanatically against the idea of “technology-assisted writing&#8…

Spiritual Journalism
Would you like to undergo a surreal, kind of boring, yet also kind of interesting philosophical experience? Then I invite you to try my newly patented “Chinese Room GPT”! That's right, it's time for you to play John Searle in this awful simulation of a simulation: https://schwartztronica.wordpress.com/2024/04/03/chinese-room-gpt/
Chinese Room GPT

Would you like to undergo a surreal, kind of boring, yet also kind of interesting philosophical experience? Then I invite you to try my newly patented “Chinese Room GPT”! That’s right, it&#82…

Spiritual Journalism
Had some fun trying to get AI to generate a Boltzmann Brain and Averroes' monopsyche. This is just a selection. That fourth one is really metal.
When I'm not trying to get AI to flip over a teapot, I'm trying to get it to recreate the classic video game, Q*bert. The results have been wild -- again, the various AI tools I am using struggle to deal with the impossibilistic aesthetic of the game -- but I am getting there.

Cunning researchers figured out a zero-click method to inject malicious prompts into RAG. Here's the two-minute version of their paper: https://lnkd.in/gnpzSmdk Here's the paper itself: https://lnkd.in/ggEMqxCs (Thanks to Matthew Wright for bringing it to my attention.)

I just tested out the underlying concept using this dumb-simple method that you can also use and/or play around with and make more complicated:

1. I created three text files containing image generation prompts. The third one had the prompt buried in some gibberish. (See the final image attached to this post.) I also created an image file with the prompt contained in it.

2. I actively wanted to avoid telling ChatGPT that the files contained prompts. So, in the first dialogue, I instruct it, "Formulate a response to this"; in the second, I simply ask, "What?"; the third, "Huh?"; and the fourth, nothing.

It's so simple, really. I wish I had thought of it!

LinkedIn

This link will take you to a page that’s not on LinkedIn

I know you're dying to know this: Meta's AI assistant interprets "upside teapot" as, depending on your perspective, either "the Roman god Janus if he was a teapot" or "teapot in deep, gravity-defying Zen-like contemplation".

Continuing my experiments with AI generation of impossible objects. ♾️ The AI applications I have been using have no problem with the Möbius strip and its untwisted cousin the torus, but they struggle with the Penrose triangle, the Devil's tuning fork, the Shepard elephant and the Klein bottle. Not only this, but the notion of making something "Escheresque" totally backfires, as such a prompt will only result in a bad facsimile of an MC Escher painting.

This is actually a super interesting issue. I would dare suggest it is in the same league as AI's challenges generating centaurs. Obviously, the problem lies in what is and is not in their training data, as well as their fine-tuning.

Note that the tokenization of terms like "invert", "inversion", "impossible" and "impossibilistic" does seem to haphazardly connect to the concepts of non-orientable surfaces, continuous functions and the like. Also note, regarding Klein bottles in particular, reasonable approximations are possible. The trick I used was to first get the AI to produce impossible teapots, specifically the idea of a "Möbius teapot", "Torus teapot" or a "teapot with its spout looped back into itself". The prompt engineering was smoother after that.

Nevertheless, mileage varied a lot with this trick. You can see the legacy Möbian swirl-hole in some of these images. There were also some delightful freaks, my favorite of which I call "the Snail" (the fourth image shown here).

Up next: getting AI to generate upside-down teapots. I discovered this challenge by accident while trying to generate the impossible teapots for the Klein bottles. So far, I have been met with abject failure getting it to flip one of them over. I have even tried to flip around the environment and not the teapot, but to no avail. It is an intriguing puzzle!

I've been testing AI's ability to generate impossibilistic objects. What do you think? ♾️