I don't need to "well actually" a good point, so I won't, but there is a continuum of "machine learning algorithms" that blurs, at a very fuzzy edge, into traditional computer science topics.
In time, people are going to need to be more clear about where the line of acceptability is.
"No LLMs, but everything else is ok" may be an attempt at this answer.
What if I'm asking an LLM to help me learn topics better - getting info that I then verify for accuracy, benefiting from a different explanation?
That still uses power, water, and similar resources, which isn't great.
It also props up bad power structures by adding to their usage.
It is different from generating art, though.
LLMs aside, there are other ML algorithms to talk about. VAEs and CNNs, are those ok?
How about Kalman filters or Bayesian logic?
Cellular automata?
Where's the line?
Do people feel like "just not LLMs" is the right answer?
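For a sense of how small that end of the continuum is: a 1D Kalman filter fits in a dozen lines and every quantity in it has a clear meaning. A minimal sketch in Python (the noise variances and readings here are made-up illustration values, not from any real system):

```python
# Minimal 1D Kalman filter: estimate a roughly constant value
# from noisy measurements. Every parameter is interpretable.
def kalman_1d(measurements, process_var=1e-5, meas_var=0.1 ** 2):
    x, p = 0.0, 1.0  # initial estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var          # predict: uncertainty grows over time
        k = p / (p + meas_var)    # Kalman gain: how much to trust z
        x += k * (z - x)          # update estimate toward measurement
        p *= (1 - k)              # uncertainty shrinks after update
        estimates.append(x)
    return estimates

readings = [0.9, 1.1, 1.05, 0.95, 1.0]  # hypothetical noisy sensor data
est = kalman_1d(readings)
```

After a handful of readings the estimate settles near the true value, and you can point at each line and say exactly why.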
@demofox I don't know where the line is but O(2T) parameters is definitely way the hell past it

@demofox less flippantly, I think I don't trust anyone peddling a generalist model. All of the forebears worth their salt either solve a particular problem, or are techniques applied to a particular problem downstream.

I have no problem with the pre-existing toolbox of NLP tools, up to and including neural networks for things like named entity recognition.

But "this bit of statistics soup is a generalist model that does it all!" is just transparently untrue and I don't trust anyone selling it because they are either 1) dangerously stupid or 2) a knowing liar, and possibly both.

@demofox these days, I think often of von Neumann getting upset about someone having fit a polynomial with four parameters without a rational basis.

Sure, language is a big domain and should naturally justify a rather larger number of parameters than a plain old curve, but… I can't imagine he would approve of the soundness.
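The anecdote is usually quoted as "with four parameters I can fit an elephant": given enough free parameters you can match the data whether or not the model means anything. A quick illustration, assuming numpy, with arbitrary made-up points:

```python
import numpy as np

# Four free parameters (a cubic) pass exactly through any four
# distinct points, regardless of whether the "relationship" is real.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, -1.0, 4.0, 0.5])  # arbitrary invented "data"
coeffs = np.polyfit(x, y, deg=3)     # 4 parameters
fitted = np.polyval(coeffs, x)       # reproduces y exactly
```

A perfect fit here tells you nothing about the underlying process, which is exactly the soundness complaint.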