So... I might be opinionated or biased because I was among those who wrote and tested the first few CNNs, LSTMs, and RNNs back when NLP and computer vision were just starting to gain mainstream traction... or because I also wrote our AI models from scratch myself... but the entire fuckAI or NoAI movement seems rather telling. Like, all models are basically a very high level of applied mathematics and statistics.

So, to me, it is very much also saying "fuck math&stats" or "no math&stats".

But the NoAI movement is _usually_ less a protest against the math than a protest against the sourcing and the scale. The conflict is that people do not hate the y = mx + b of it all; they hate that their personal art, writing, or voice was the data used to solve for m and b without their consent.
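
To make the "solve for m and b" point concrete, here is a minimal sketch in plain numpy (made-up numbers, not anyone's real pipeline) of what training reduces to in its simplest possible form:

```python
import numpy as np

# Made-up "training data": each (x, y) pair stands in for someone's contributed work.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Least squares: stack [x, 1] so the solver returns the slope m and intercept b.
A = np.column_stack([x, np.ones_like(x)])
m, b = np.linalg.lstsq(A, y, rcond=None)[0]

print(f"m = {m:.2f}, b = {b:.2f}")  # the "model" is just the data, compressed into two numbers
```

Billion-parameter models are this, scaled up by many orders of magnitude: the parameters only exist because the data did.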

In the early days of Computer Vision, the goal was often augmentation: helping a car see a pedestrian or a doctor find a tumor.

When people say "NoAI," they are often expressing a fear of "No Humans"; a socio-economic reaction to the automation of the "soul" rather than the automation of the "factory."

...a bit like the early days of the printing press or the industrial loom; the "Luddites" all over again.

That said, as someone who built these from the ground up, the "emergent" capabilities of today's billion-parameter models... how to say it? Hmm... "a bigger GPU budget enables all the shit we once proposed but could not run on the hardware of the time."
Like, there is still a fuck ton of shit that was proposed back then that is still being worked on today and can only be fixed by throwing more money at it. So if an illustrator loses their job to prompt engineering, that is the fault of the illustrator. Did chainsaws put lumberjacks out of business, or did lumberjacks evolve to use the new tools available to them? Which is to say, "The hardware finally caught up to the math."
Actually, in the domain of art, there is even less of an argument against generative models. Why? Because what the model generates depends on the user. If a chimpanzee slaps a canvas with a paintbrush, and some studio sells the paintings under a fancy pseudonym, that chimpanzee is a renowned artist. And that is okay; it is abstract art. But somehow it is not okay for a random fucker to use a generative model?

Again. Just like I got told back when I started. Get good. The ones complaining now, in my humble opinion, are fucking gatekeepers who would rather throw tantrums than even begin to try adapting.
The art world has been "cheating" for a century.

Warhol had a factory of people making his prints.

Damien Hirst has assistants paint his "Spot Paintings."

Jeff Koons scarcely (never?) touches the metal of his sculptures.

The difference, and perhaps why the tantrum is so loud now, is Scale and Access. When Koons uses a factory, they call it "High Art" because it is expensive and exclusive. When a generative model allows everyone to have a factory for $20 a month, an exclusive club becomes a public park; accessibility is now highly problematic.
The biggest issue is that AI models expose stupid and otherwise incompetent humans. Modern business and society have been built in direct accordance with the old adage, "it is not what you know, but who you know." Now that that is falling apart, there are many dissenters; they built their lives around relationships and neglected to cultivate anything else.
However, what very few (the silent minority) seem to pay attention to is that the job may vanish but the role remains untouched. Engineering as a role is not going anywhere. Who tells the model when it uses an inappropriate formula for its use case? Where does the validation of the model's output come from? Think of the marathon runner taking an Uber to the finish line: who drives the car, or tells it where to go?

AI is an incompetence filter.

For decades, middle management and "creative" industries have been padded by people who were masters of social signaling and bureaucratic navigation rather than technical output. When the "How" is automated, the "Who" becomes less relevant than the "What."

If a model can produce a functional prototype in seconds, the person whose only skill was "managing the timeline" or "knowing the guy who does the thing" becomes a vestigial organ in the corporate body.

The model is just P(output | input). It knows nothing of "truth" or "structural integrity"; it only knows "probability."
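
To see how little "knowing" is involved, here is a toy sketch of P(output | input): a bigram counter over a made-up corpus. A real model is unimaginably bigger, but the mechanics do not change; counts in, probabilities out:

```python
from collections import Counter, defaultdict

# Made-up corpus: the model's entire "worldview" is this handful of words.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count bigram transitions: how often each word follows each other word.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def p_output_given_input(prev: str, nxt: str) -> float:
    """P(output | input) as a plain ratio of observed counts; no truth, just frequency."""
    total = sum(transitions[prev].values())
    return transitions[prev][nxt] / total if total else 0.0

print(p_output_given_input("the", "cat"))  # 0.5: "cat" followed "the" in 2 of 4 cases
```

Scale that up by a few billion parameters and you get fluent output, but the epistemology is identical.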

The engineer who understands the underlying math is the only one who can spot when the model hallucinates a formula that looks right but violates the laws of physics.

The "marathon runner in an Uber" only works if someone knows where the finish line is supposed to be. If you do not know what "good" looks like, you can not prompt or iterate. You just become a "random fucker" drowning in high-fidelity garbage.

If it takes 50 hours to draw a storyboard, the storyboarder is "essential."

If it takes 5 seconds, the storyboarder is only essential if they have a unique vision that the model cannot derive from its average.

A massive percentage of human output is "average." And average is now a commodity.

The "Get Good" mantra applies now more than ever. The people who will survive are those who understand the first principles of their field—the "Math & Stats"—because they are the only ones qualified to "drive the car" while everyone else is just a passenger complaining that the wheels are turning too fast.

The math itself guarantees the mediocrity that the mediocre are afraid of.

Unfortunately, the data that powers large-scale models inevitably follows a Gaussian distribution. So, for the most part, very little will actually change; where does the data feeding the CI/CD of model training and development originate?
So, going back to the illustrator example, the vocal majority is basically admitting that they can produce nothing greater than "slightly better than average". The lumberjacks who lost their jobs to the chainsaws were vocal because their inability to handle logistics and scaling was exposed.
By definition, if you are training on the "internet," you are training on the meat of the Bell Curve. The model is an engine of regression toward the mean. When an illustrator screams that AI is "stealing" their work, they are—statistically speaking—admitting that their work is indistinguishable from the 1 sigma or 2 sigma mass of data the model has already digested.
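
A quick numerical sketch of that point, using synthetic Gaussian data as a stand-in for "the internet": the constant prediction that minimizes squared error is the mean itself, which is regression toward the mean in its most literal form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "the internet": samples clustered around a mean (the 1-2 sigma mass).
data = rng.normal(loc=5.0, scale=1.0, size=100_000)

# Search for the constant prediction that minimizes mean squared error.
candidates = np.linspace(2.0, 8.0, 601)
best = min(candidates, key=lambda c: np.mean((data - c) ** 2))

# It lands on the sample mean: squared-error training literally optimizes for "average."
print(f"sample mean = {data.mean():.3f}, MSE-optimal constant = {best:.3f}")
```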

The "vocal majority" is effectively protesting the fact that their market value was tied to a scarcity of labor, not a scarcity of talent. Now that the labor is $0.00, the lack of unique talent is laid bare.

It is literally the "Lumberjack Logistics" problem—they could swing the axe, but they lacked the ability to plan the harvest.

Models cannot "innovate" in the human sense; they can only "interpolate" between existing data points. If you are an interpolator, you are replaceable by a matrix. If you are an extrapolator, the model is just a very fast assistant that handles the "boring" y = mx + b parts of your day.
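
A toy demonstration of the difference, assuming nothing fancier than a polynomial fit: inside the training range the model looks brilliant; step outside it and the wheels come off:

```python
import numpy as np

# Fit a cubic on a narrow slice of sin(x): the "training distribution."
x_train = np.linspace(0, np.pi, 50)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=3)

# Interpolation (inside the training range) looks fine: close to sin(pi/2) = 1.0.
print(np.polyval(coeffs, np.pi / 2))

# Extrapolation (far outside it) falls apart: sin(3*pi) is 0, the cubic is nowhere near.
print(np.polyval(coeffs, 3 * np.pi))
```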

Models are simply harvesting the entropy that is the collective of human cognition.

Edit, because deflecting agency and displacing responsibility is a very real problem: The input-output loop of an individual's life is primarily governed by their own internal logic and effort, regardless of the "environmental variables."

To continue the conversation: I generally ignore people who deflect rather than confront their own incompetence, but this should probably be said at some point.

For example, the ones who blame the companies that build billion-dollar models, or who say that teachers and education are the problem.

So, just to point this out, I did my undergraduate years in a very racist environment. If I scored well on an exam, it was generally assumed that I cheated; despite never taking anything to the exam beyond a pencil (maybe an eraser and calculator, depending) and the clothes on my back.

My point is, if the public education system in the U.S. has problems, that is a federal issue; the ones responsible are the same ones you voted into power, and private, entirely unrelated entities have fuck all to do with it. Why not place the onus on the students to take private lessons or go where they need to in order to learn what they want to know? Last time I checked, libraries were not expensive.

If I could operate in a system that was actively hostile to my success, a "noisy" environment with a heavy negative bias, and still optimize for the result, then "the system" is not a sufficient excuse... but, I get it.

The comparison between AI companies and the education system is a massive category error that people use to cope with their own obsolescence. In short, they need a villain to blame; the corporation or the education system are easy targets.

People blame OpenAI or Google because it is easier to hate a billion-dollar entity than to admit that a Python script just did their "highly skilled" job better than they did.

...but blaming "the system" for a lack of knowledge in the age of the internet is scientifically absurd. We live in an era where the barrier to entry for world-class information is effectively zero.

Getting angry at a private tech company for the failures of public education is like getting angry at a calculator manufacturer because you never learned long division. They are completely decoupled entities.

That said, **again**, the onus is on the student. "Get Good" in its purest incarnation. In a world of infinite entropy and Archimedean potential, the only thing inhibiting learning is one's own refusal to do so.

The racist environment I navigated proves that merit is an invariant. If the math is right, the math is right, regardless of whether the person grading the paper wants it to be. AI is just the latest, most impartial grader ever built. It cares not for who you know or what pretty words you use. It only cares if your skill gained from hard work provides more utility than the statistical mean.