I don't need to "well actually" a good point, so I won't, but there is a continuum of "machine learning algorithms" with a very fuzzy edge against traditional computer science topics.
In time, people are going to need to be clearer about where the line of acceptability lies.
"No LLMs, but everything else is ok" may be an attempt at this answer.
What if I'm asking an LLM to help me learn topics better: getting info that I then verify for accuracy, and benefiting from a different explanation?
That still uses power, water, and similar resources, which isn't great.
It also feeds harmful power structures by adding to their usage.
It is different than generating art, though.
LLMs aside, there are other ML algorithms to talk about. Are VAEs and CNNs okay?
How about Kalman filters or Bayesian logic?
Cellular automata?
Where's the line?
Do people feel like "just not LLMs" is the right answer?

@demofox "who needs data science when I can shovel more compute and data at it while remaining ignorant of the multidimensional corner case turds I'm shipping down everyone's throats"

It's the power structure, the lack of agency, the inability of the solution to handle details.

Use a texture and some polynomials; it will generally be faster and more accurate than a DNN, and you won't waste your life drinking the planet-destroying Kool-Aid.
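As a minimal sketch of the baked-table idea (in Python rather than shader code, with an illustrative target function): precompute samples of the function into a small 1D "texture," then reconstruct values with linear filtering, exactly as texture hardware would.

```python
import math

# "Texture": a baked 1D table of sin(x) samples over [0, pi/2].
TABLE_SIZE = 64
TABLE = [math.sin(i / (TABLE_SIZE - 1) * math.pi / 2) for i in range(TABLE_SIZE)]

def lerp(a, b, t):
    return a + (b - a) * t

def sample(table, u):
    """Sample the table at normalized coordinate u in [0, 1] with linear filtering."""
    x = u * (len(table) - 1)
    i = min(int(x), len(table) - 2)
    return lerp(table[i], table[i + 1], x - i)

# Max error of the lerped table vs. the real function over a dense sweep.
max_err = max(
    abs(sample(TABLE, u / 1000) - math.sin(u / 1000 * math.pi / 2))
    for u in range(1001)
)
print(max_err)  # tiny: 64 texels plus lerp already nail a smooth function
```

On a GPU the `sample` call is one filtered texture fetch, so the whole "model" costs a single memory read instead of layers of ALU.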

@vethanis @demofox what is a texture in this context?

@lritter @demofox a shippable, proven solution for interpolatable, high-bandwidth spatial data.

you can make a compute shader do hill-climbing to cook whatever you want in there.

DNNs are shitty textures where you have to evaluate every texel with ALU.

a gigantic chain of lerps and saturates, but you don't get to control which endpoints are used.

we already have block compression lerping between endpoints, with much more control.
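For concreteness, a simplified single-channel sketch of that scheme (real BC1 stores RGB565 endpoints per 4x4 block; this just shows the lerp structure): each block keeps two endpoint values you chose, plus a 2-bit index per texel picking one of four lerp stops between them.

```python
def lerp(a, b, t):
    return a + (b - a) * t

def decode_block(e0, e1, indices):
    """BC1-style decode for one channel: 16 indices in 0..3 select
    lerp weights 0, 1/3, 2/3, 1 between the two block endpoints."""
    weights = [0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0]
    return [lerp(e0, e1, weights[i]) for i in indices]

# A gradient block: endpoints we picked, per-texel control over the mix.
block = decode_block(0.0, 0.9, [0, 1, 2, 3] * 4)
print(block)  # every texel is an explicit lerp between our two endpoints
```

Same primitive as the DNN's lerp-and-saturate chain, except here the endpoints and the blend weights are authored data under your control.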

@vethanis @demofox i understood some of these words