I don't need to "well actually" a good point, so I won't, but there is a continuum of "machine learning algorithms" with a very fuzzy edge abutting traditional computer science topics.
In time, people are going to need to be more clear about where the line of acceptability is.
"No LLMs, but everything else is ok" may be an attempt at this answer.
What if I'm asking an LLM to help me learn topics better - getting info that I then verify for accuracy, and benefiting from a different explanation?
That still uses power, water, and similar resources, which isn't great.
It also feeds into bad power structures by adding to their usage.
It is different from generating art, though.
LLMs aside, there are other ML algorithms to talk about. VAEs and CNNs - are those ok?
How about Kalman filters or Bayesian logic?
Cellular automata?
Where's the line?
Do people feel like "just not LLMs" is the right answer?

@demofox "who needs data science when I can shovel more compute and data at it while remaining ignorant of the multidimensional corner case turds I'm shipping down everyone's throats"

It's the power structure, the lack of agency, the inability for the solution to handle details.

Use a texture and some polynomials; it will generally be faster and more accurate than a DNN, and you won't waste your life drinking the planet-destroying Kool-Aid.
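As a rough illustration of the texture idea (a toy python sketch; the reference function and texel count are made up for this example):

```python
import math

def reference(x):
    """Stand-in for some expensive function worth baking into a texture."""
    return math.exp(-x) * math.sin(4.0 * x)

# Bake a tiny 1D "texture": 64 precomputed texels over [0, 1].
TEXELS = 64
LUT = [reference(i / (TEXELS - 1)) for i in range(TEXELS)]

def sample_lut(u):
    """Read the texture back with linear filtering (a lerp between texels)."""
    t = u * (TEXELS - 1)
    i = min(int(t), TEXELS - 2)
    return LUT[i] + (LUT[i + 1] - LUT[i]) * (t - i)

# The filtered lookup tracks the reference closely at a fraction of the cost.
max_err = max(abs(sample_lut(i / 999) - reference(i / 999)) for i in range(1000))
```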

@vethanis @demofox what is a texture in this context?

@lritter @demofox a shippable, proven solution for interpolatable, high-bandwidth spatial data.

you can make a compute shader do hill climbing to cook whatever you want in there.
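a toy python version of that cooking step - hill climbing texel values toward a hypothetical reference function (a compute shader would do the same idea in parallel; the function, texel count, and step size here are made up):

```python
import random

def reference(x):
    """Hypothetical expensive function to bake; smoothstep as a stand-in."""
    return x * x * (3.0 - 2.0 * x)

N = 16                       # texels in the LUT being cooked
lut = [0.0] * N
samples = [i / 255 for i in range(256)]

def sample(lut, u):
    """Linearly filtered LUT read."""
    t = u * (N - 1)
    i = min(int(t), N - 2)
    return lut[i] + (lut[i + 1] - lut[i]) * (t - i)

def total_error(lut):
    return sum((sample(lut, u) - reference(u)) ** 2 for u in samples)

random.seed(0)
best = total_error(lut)
for _ in range(4000):
    i = random.randrange(N)              # perturb one texel...
    old = lut[i]
    lut[i] += random.uniform(-0.05, 0.05)
    e = total_error(lut)
    if e < best:                         # ...keep the change only if error drops
        best = e
    else:
        lut[i] = old
```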

DNNs are shitty textures that you have to evaluate every texel of with ALU.

a gigantic chain of lerps and saturates, but you don't get to control which endpoints are used.

we have block compression lerping between endpoints already, with much more control.
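for reference, the block-compression scheme mentioned above, single-channel and simplified: two stored endpoints expand into a four-entry lerped palette, and each texel picks an entry with a 2-bit index (toy sketch, not the full BC1 bit layout):

```python
def bc_palette(e0, e1):
    """Expand two endpoints into a 4-entry lerped palette, BC1-style."""
    return [e0, (2.0 * e0 + e1) / 3.0, (e0 + 2.0 * e1) / 3.0, e1]

def decode_block(e0, e1, indices):
    """Decode texels: each 2-bit index selects a palette entry."""
    pal = bc_palette(e0, e1)
    return [pal[i] for i in indices]

# a toy block: endpoints 0.2 and 0.8, indices lerping between them
texels = decode_block(0.2, 0.8, [0, 1, 2, 3, 3, 2, 1, 0])
```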

@lritter @demofox I'm sure there are some use cases for these damn things but not to the extent that people are trying to make.

The fundamental evil here is humans making sloppy decisions, and plain corruption.
Pressing the easy button instead of taking the time to do the work.
The naivety of thinking the details suddenly don't matter anymore, that it will all just work out on its own.
Grifting people and trying to be a growth stock money-printing scam.

That's what DNNs conjure in minds now.

@vethanis @demofox

i'm just a bit confused.

textures are for dense data, unless you mean a different kind of texture.

i have never heard of the term "hill climbing" before but i guess it's a thing; ironically though, for least squares opt., i imagined it as valley rolling (since we're minimizing).

i know what lerps and saturates are (though there is no clamping in naive DNN? i guess you mean ReLU), but i don't know what "endpoints" are.

@lritter @demofox you can warp your uv space to put more texels in interesting areas.

you can importance sample uvs with a gaussian inverse cdf to remove aliasing and further shrink it.

you can zoom in mentally on the problem and break it down into sets of terms and make a LUT for the relevant expressions, reducing the dimensions of the problem.

an endpoint here is a texel or one of the spots where the DNN's error is fairly low.
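the inverse-cdf warp above, sketched in python with the stdlib NormalDist (the mean and sigma are made-up for illustration): uniformly spaced u values land denser near the mean, so the LUT spends its texels where the assumed distribution says the signal matters.

```python
from statistics import NormalDist

nd = NormalDist(mu=0.5, sigma=0.15)    # assumed importance distribution

def warp(u):
    """Map uniform u in (0, 1) through the gaussian inverse cdf."""
    u = min(max(u, 1e-6), 1.0 - 1e-6)  # inv_cdf diverges at exactly 0 and 1
    return nd.inv_cdf(u)

# 16 uniformly spaced uvs, warped: spacing tightens near the mean
positions = [warp((i + 0.5) / 16) for i in range(16)]
mid_gap = positions[8] - positions[7]  # near the mean
end_gap = positions[1] - positions[0]  # out in the tail
```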

@lritter @demofox
imagine you trained a set of fully connected layers with relu activation to approximate some reference function that is expensive to compute.

then printed out the expressions the network is evaluating as hlsl and stuck it in a shader, with all the weights as immediates known at compile time for maximum optimization.

if you stare at it long enough you will start to see that it is interpolation; a lasagna of unlerps and lerps.

@lritter @demofox and as you try to optimize it to not suck ALU- and GPR-wise, you will see that shrinking it reduces the number of endpoints it's interpolating between.
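the "lasagna of unlerps and lerps" point can be checked directly: a one-hidden-layer relu net (made-up weights below) is a sum of hinge functions, so between any two adjacent kinks it is exactly linear - i.e. a lerp between fixed endpoints.

```python
def relu(x):
    return max(x, 0.0)

# one hidden layer, three units, kinks at x = 0.25, 0.5, 0.75
W1 = [1.0, 1.0, 1.0]
B1 = [-0.25, -0.5, -0.75]
W2 = [2.0, -3.0, 4.0]
B2 = 0.1

def net(x):
    return B2 + sum(w2 * relu(w1 * x + b1) for w1, b1, w2 in zip(W1, B1, W2))

# between the kinks at 0.5 and 0.75 the net is linear: the value at the
# midpoint equals the lerp of the values at the two kinks
a, b = 0.5, 0.75
mid = net((a + b) / 2.0)
lerped = 0.5 * (net(a) + net(b))
```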