five years from now, the term "artificial intelligence" will refer to an entirely different technology

we don't know what it is, but it will be something mostly unrelated to the stuff that has that name today

we say this with confidence because that has consistently been the case every few years since the term was coined in 1956

as soon as everyone understands the latest fad, it feels obvious that it has nothing to do with "intelligence", and we all stop calling it that

we sometimes feel like we're drawing an unreasonably hard line by saying "machine learning" (which is to say, the field that deals with statistical techniques done by computer) or "differentiable neural networks" or "large language models" or whatever other specific thing we actually mean

it often involves pushing back on our friends who find it obvious what "AI" refers to, and think being specific about it is just pointless obscurity

@ireneista i like that you went w/ differentiable; i faintly remember that's what's needed for backprop to work?

don't remember if non-backprop methods also need differentiability

@lbruno yeah we're, like, not experts on this, but yes, our understanding is that neural networks that aren't trained by differentiating them, uh..... don't work well. or at least, the historical ones didn't. differentiability was the innovation that made training work well, because the gradient carries the error signal back through every layer, so new information updates the whole network "all the way" instead of only nudging the last layer a little bit.
@lbruno someone with a proper statistics background could explain that more formally, but we think the intuition is useful on its own
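a minimal sketch of that intuition, for anyone curious — a tiny two-layer network trained on XOR by hand-written backprop, in numpy. everything here (layer sizes, learning rate, iteration count, the XOR task itself) is just illustrative, not any canonical recipe:

```python
import numpy as np

# tiny two-layer network trained on XOR by gradient descent.
# XOR is the classic example: a single layer can't learn it,
# and multi-layer training only became practical once you could
# differentiate the whole network (backprop).
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # first layer weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # second layer weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: differentiate the squared error with respect to
    # EVERY weight, so the update reaches the first layer too,
    # not just the last one -- this is the "all the way" part
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))
```

if you zero out `d_h` (i.e. stop the gradient at the hidden layer), the first layer never moves and the network generally can't learn XOR — which is roughly the situation the non-differentiable historical methods were stuck in.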