Reading through Anthropic's official repo for giving agents various "super skills"[1]... There's an "algorithmic art" skill and the instructions are explicitly encouraging pure deception as one of the key "critical guidelines":

"The philosophy MUST stress multiple times that the final algorithm should appear as though it took countless hours to develop, was refined with care, and comes from someone at the absolute top of their field. This framing is essential - repeat phrases like "meticulously crafted algorithm," "the product of deep computational expertise," "painstaking optimization," "master-level implementation.""

https://github.com/anthropics/skills/blob/main/skills/algorithmic-art/SKILL.md

For someone who's been working in this field for almost 30 years, this "SKILL.md" file is just the worst... and so far off the mark! 🤮

Touch some effing grass, Anthropic (and all boosters)! How can so many people think this approach is _the_ future? The map is not the terrain...

[1] The premise of this repo alone is pure comedy gold and pure sadness in equal measure!

#AlgorithmicArt #GenerativeArt #NoAI #Agents #Deception

Some of the key (and growing) questions here really are:

How to defend or adapt disciplines (not just artistic/cultural ones) against this kind of semantic hollowing out of what it means to have skills, experience and expertise in a(ny) field...

What approaches, qualities and "values" (physical, ethical, social/humanist, environmental, resource use) should we (or still can we) be focusing on, which are much harder and more costly for AI companies to mine/extract & subvert?

How to defend actual skills against the emulation of skills, or rather just the appearance of skills? How could a society even function if it only encourages and celebrates the latter?

What does society actually value in art/creativity/culture? If art is free to produce (of course that'll always only ever be an illusion!), funding, possession, collection & speculation of new work would also become meaningless (and only benefit pre-AI era works/collectors). In the larger picture, what do people actually value in culture, politics and striving for more peaceful existence which enables more of the former (pluralistic art/culture) in the first place?

What will be the combined impact of AI & robotics on fields which currently still consider themselves safer (from exploitation) because there's a strong physical element/process to them?

Will art/culture/craft again become only performance, experiential/ephemeral? Like music before recordings, or Buddhist sand paintings with an explicit act of destruction at the end as a key philosophical concept? Both of which also have more of a social element to them...

The Samsara Mandala
https://www.youtube.com/watch?v=hL8gEc29KTI

#CriticalAI #AI #NoAI #LLM #Ephemeral #Art #Culture #Samsara

@toxi super weird times...
Putting art aside for a moment, and speaking from the fields I know most (developer, former computer scientist), there is no denying that these tools are game changers.
From Donald Knuth solving a problem with Claude (https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf) to relicensing open-source code (https://simonwillison.net/2026/Mar/5/chardet/#atom-everything) and actually building production code with no programmers at all (can't find the link right now) ... there's a lot to think about.
@toxi For academia, I found this post interesting: https://www.popularbydesign.org/p/academics-need-to-wake-up-on-ai
Academics Need to Wake Up on AI

Ten theses for folks who haven't noticed the ground shifting under their feet

Popular by Design
@robertoranon @toxi counterpoint: this is baseless slop and the dude who wrote it is being super Weird on social media
@fay @toxi perhaps, I didn't check. Still, most points in the article are not wrong
@robertoranon @toxi I was reviewing a paper this weekend and half the references were hallucinated. Meanwhile all the companies providing the service are running at billions in losses with no path to profitability. This shit doesn't work and can't continue to exist. Even if an alarming number of my colleagues apparently have no integrity and no conscience, this is not "here to stay"
@fay @toxi If I had lost even 10 minutes of my time to review slop written by AI, I would be furious with the "authors", not the AI
@toxi and this is the super depressing side of it, but sadly I can't find any fault with the reasoning: https://www.hamiltonnolan.com/p/an-existential-threat-to-organized
... this goes well beyond reorganizing work practices
An Existential Threat to Organized Labor's Ability to Help People

We are not afraid enough of AI's pernicious dynamic.

How Things Work
@toxi as for the art, I am more optimistic, maybe because I am naive ...
Art goes beyond utility and business, and so will find more ways to adapt. Hell, who could have thought that generative art would become, for a brief couple of years, so important?
But I find your last thought interesting and I agree, performance by humans, the kind that creates a human connection while you see or do it, cannot be replicated or automated. But who knows. I am reading so much shocking stuff everywhere ...
@robertoranon @toxi man, the boosters just kramer in with cut'n'paste excuses on cue
@davidgerard @robertoranon @toxi it's not their fault, the computer told them this was being subtle

@toxi Direct communication and relationship building. This has always been the human thing.

Art has always been a form of expression, however abstract or esoteric. This is why generative AI is bad: it makes social activities now antisocial. You are no longer gaining insight into another person, no longer wondering what they are thinking, no longer building connection, you are interacting with a system and yourself. What was once a beautiful, highly connected graph now looks ever more like a star topology all pointing to one thing. It's no different to wrecking and paving an ecosystem. It's evil.

This isn't new, either. Being on the Internet has always been fraught, texting has always been fraught. Anytime there is a technological proxy for communication there is danger of interception and mangling of the signal. There is a difference, however: a technological medium whose sole purpose is *transmission* without alteration is okay.

This is why telephones are more acceptable than television, and also why the comparisons of genai to previous technologies are utter bollocks. People can sometimes forget how lossy texting is, but they are at least choosing the words (although keyboard autocomplete seeks to diminish even that).

Everyone is simply ignoring what is basically a law of nature. Critics don't emphasize it enough and "neutral" / booster types don't even consider it because their value calculation has everything to do with "output", and their relationship to a system rather than to people. This leads to a critical stalemate because there is simply a different set of values that can never be reconciled.

I believe that alienation is the ultimate ill of all humankind, and I believe that anyone who alienates or creates systems that alienate has done wrong. Simple as that.

@toxi It is my understanding that the people "in the middle" aren't worried as much about "missing out" or people being "unreasonably critical", but having to choose whether they themselves are going to participate in a social environment or not.

I believe that many if not most of those people already struggle greatly with conflict, even confusing it with abuse. In addition, these people struggle with being social already. Therefore, I believe it is often a matter of time before they yield to the systems being built.

@toxi My predictions:

- Promotion of art which demonstrates process, which includes live performances of course, but also simply "how to" or "making of". Participation, ephemerality, spontaneity, like you said. If you see someone just typing words into a box and mashing things together, it will be judged harshly. Good.
- Increased rejection of digital life, preference for the physical, and all that it entails. Or at least, rejection of tainted transmission media. In this, I am hopeful for the young people.
- Trusted systems attesting to provenance, even if anonymized.
- A possible forking of society: one which still values communication, humanism, and effort; and one which values systems and is self-absorbed.
- One I'm particularly worried about: increasing alienation to a breaking point; potentially violence, especially in the US.

Unfortunately, I don't think things are going to get better anytime soon, we are all still getting started rag-dolling down the mountain, especially with so much money and power concentrated in the hands of people who want a systematic, top-down world.

However, we are headed down an unsustainable path and it will come to an end one way or another. I can only hope that people can keep resisting in stronger and increasingly sophisticated ways, dismantling the power imbalance, and building a new society if necessary.

@toxi Case in point, coincidentally at the top of my feed currently: https://pixelfed.social/p/pixelglade/938588304451378019
pixelglade (@[email protected])

Made a speed paint with commentary of me making this PC-98 style artwork (room). Hope you find it interesting! Has captions and a blog post style transcript. https://makertube.net/w/ktFoSgr4sKv2sKm45vcAS9 #pixelart #ドット絵 #pc98 #peertube #FOSS #speedpaint #timelapse

Pixelfed
@toxi barf indeed. “Computer, contrive an art style for me”
@toxi an implementation of what @baldur calls the LLMentalist effect https://softwarecrisis.dev/letters/llmentalist/
The LLMentalist Effect: how chat-based Large Language Models rep…

The new era of tech seems to be built on superstitious behaviour

Out of the Software Crisis

@toxi there is a belief among some LLM users that if you tell the LLM it is an expert, it will behave like one. There is some evidence to support this belief, which is both funny and annoying, but it leads to some weird superstitions about how much impact it can have and how many times you have to repeat yourself to get it to believe it is an expert.

So, the optimistic read of this is not that it's about deceiving the viewer, but rather deceiving the model into behaving more like an expert.

@swelljoe @toxi
> if you tell the LLM it is an expert, it will behave like one
Yeah, #siliconiac surely is able to emulate the Dunning-Kruger effect. It can also emulate the cruelty and stupidity of the countless masses posting since the early aughts. But even then, it is still an emulation.
#noai

@toxi

From the linked drivel:
> Create original algorithmic art rather than copying existing artists' work to avoid copyright violations.

Oh so now they understand that the plagiarism machine doesn't respect intellectual property? 🤔

@toxi marketing for produced output is directly baked in 😂 second order bullshit generator

@toxi Your ick is understandable, but it also comes from a misunderstanding of how LLMs work.

They infer text from existing context.

If the context is "I haven't tried it but it should look something like this", it will produce a typical buggy Stack Overflow piece of code, because that's what it had in the training set.

If the context is "this took countless hours and was honed over decades", it probably comes from such a project, and the accompanying code is likely of higher quality.

The point of this skill file is not to deceive you (though it does do that, if you anthropomorphise LLMs even a bit) but to fill the context with the right tokens to prime inference of a specific type of output.
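A minimal sketch of that priming idea (all names here are hypothetical, and no real model or API is involved): the skill file's "expert" phrases are simply prepended to the task so the model's next-token inference is conditioned on expert-sounding context.

```python
# Hypothetical illustration of "context priming": the same task is sent
# with or without the expert framing the SKILL.md insists on repeating.
# The claim is that the framed variant steers inference toward
# higher-quality examples in the training distribution.

EXPERT_FRAMING = (
    "The following is a meticulously crafted algorithm, the product of "
    "deep computational expertise and painstaking optimization."
)

def build_prompt(task: str, primed: bool = True) -> str:
    """Prepend the expert framing to a task description (or not)."""
    return f"{EXPERT_FRAMING}\n\n{task}" if primed else task

unframed = build_prompt("Write a flow-field particle renderer.", primed=False)
framed = build_prompt("Write a flow-field particle renderer.")
# The model would see an identical task either way; only the
# conditioning text in front of it differs.
```

The framing text never reaches the human viewer at all; it only shapes which region of the training distribution the model samples from.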

You can easily see this in how LLMs imitate your communication style (see images). It's not because they understand modes of communication, social settings, or different levels of formality in language. It's because most conversations don't switch any of those mid-conversation.

I don't think it is the future. In fact, many experts in the field (the people who actually build the technology, not the ones who sell it) don't either. But it's fascinating that so much of the appearance of intelligence is in the language, that mere inference of language can look like intelligence to so many people.

This raises questions about our own intelligence. We like to think of ourselves as intelligent, but how much of it is actually LLM-like language inference and how much is actual intelligence? For a long time I was baffled that so many people find a certain president's speeches compelling or even coherent, until someone mentioned that he's a lot like an LLM: he says the most expected thing in the moment, and he decides that based on context: the people around him and, unfortunately, Fox News. So, apparently, there's not much actual intelligence in there. And likewise it seems the people susceptible to his speeches also mostly operate on inference. I'm not saying they are all dumb. I'm saying that a lot is going on based on language alone. I also doubt that it's unique to this group of people; it just looks like an extreme case of it. But how much of that is going on in my life?

Anyway, while Claude is technically lying to you with this skill file, deception is not the main objective; making the output better is.

@galtzo