five years from now, the term "artificial intelligence" will refer to an entirely different technology

we don't know what it is, but it will be something mostly unrelated to the stuff that has that name today

we say this with confidence because that has consistently been the case every few years since the term was coined in 1956

as soon as everyone understands the latest fad, it feels obvious that it has nothing to do with "intelligence", and we all stop calling it that

we sometimes feel like we're drawing an unreasonably hard line by saying "machine learning" (which is to say, the field that deals with statistical techniques done by computer) or "differentiable neural networks" or "large language models" or whatever other specific thing we actually mean

it often involves pushing back on our friends who find it obvious what "AI" refers to, and think being specific about it is just pointless obscurity

but like. it feels obvious today because we are all immersed in marketing material that pushes this one specific meaning

the moment the money dries up, the marketing will too, and when we all look back at stuff written during this period, it won't be at all clear what people were trying to say

we've been following this field since early childhood in the 1980s. yes, we were very avid readers of technical documentation at that age. yes, kids these days don't have the same opportunities to learn how this stuff actually works, and that makes us very sad.

anyway, that's by way of saying we've seen a lot of shifts in computing over the years, and we're speaking from experience about this

@ireneista I mostly remember the late 2010s ML stuff, because I was trying to get a job in stats (and suddenly every linear model must be 'AI'). The current set makes the previous feel like a fever dream

@ireneista Working in/for a Computer Science (university) department has been a useful thing for me for this nuanced view of 'AI'. While there's an overall 'AI' supergroup in the department, it has a bunch of named sub-groups called various things (local names and abbreviations include CL, KR, ML, and Vision). When we routinely have to ask 'okay what *sort* of AI is that new professor associated with?', it sticks in one's mind.

(And they're not all one big happy resource-sharing family.)

@cks ah! yes absolutely!

@ireneista Most frustratingly, nearly nothing gets done with it in the long run; people just lose interest, another AI winter arrives, and eventually the whole hype cycle restarts.

Eg: What did we do with https://en.wikipedia.org/wiki/SHRDLU …?

Siri/Alexa/etc can't even do that!

@ireneista eg: Imagine a further developed command line interface which is 'aware' of files, folders, applications… and can do simple tasks like moving, renaming, etc?

Never happened.

@DLC right, exactly!

well, we're going to build it at some point, we decided. we took a strong interest in that back in the 2000s, then lost interest because, well, all the Irenes who were alive at that time died, and we-today had to figure out how to live in the ruins of their life and so on... but anyway we've convinced ourselves that it should actually be built

@DLC it's not even, like, that big a thing to build with modern techniques, honestly

@ireneista We're working on an (actual) smart home design based on the idea of agents which write/read to a shared whiteboard?

Silly eg: The camera-agent dumps a link to a raw video stream on the whiteboard

A cat finder-agent looks at any video source & marks the presence of cats in the video

The cat locator-agent looks at any marked cat footage, tries to place the cat in the space in or around the house, & drops its location on the whiteboard

The cat Identifier tries to name the cat

etc…
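a minimal sketch of that whiteboard pattern in Python, for the curious — every agent name, entry kind, and the hard-coded "kitchen" location are made up for illustration; a real system would plug in actual perception:

```python
# Minimal blackboard/whiteboard sketch: agents read entries that other
# agents have posted, and post their own. All names are illustrative.

class Whiteboard:
    def __init__(self):
        self.entries = []                 # each entry is a dict with a "kind" field

    def post(self, entry):
        self.entries.append(entry)

    def find(self, kind):
        return [e for e in self.entries if e["kind"] == kind]

def camera_agent(board):
    # dumps a link to a raw video stream (here, a fake frame list)
    board.post({"kind": "video", "source": "cam-1", "frames": ["cat", "empty"]})

def cat_finder_agent(board):
    # looks at any video source & marks the presence of cats
    for video in board.find("video"):
        if "cat" in video["frames"]:
            board.post({"kind": "cat-sighting", "source": video["source"]})

def cat_locator_agent(board):
    # looks at any marked footage & drops the cat's location on the whiteboard
    for sighting in board.find("cat-sighting"):
        board.post({"kind": "cat-location",
                    "source": sighting["source"],
                    "room": "kitchen"})   # a real locator would estimate this

board = Whiteboard()
for agent in (camera_agent, cat_finder_agent, cat_locator_agent):
    agent(board)
```

the nice property being that no agent knows about any other agent — each one only knows what kinds of entries it reads and writes.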

@DLC yeah that's roughly how we'd break the task down, and we'd for sure be using specialized pieces for each part of that. the current technique of misusing generative models has, uh, no clear argument for how it would ever work.
@DLC we hesitate to speculate more about this particular task in public though, because we can see military applications :/
@ireneista aka why we can't have nice things

@DLC @ireneista this is so scarily close to how Palantir works ;)

in other words, you have converged onto an architecture that works

@DLC @ireneista WDYM "never happened"? Shell navigation and tool-use is a staple skill for any self-respecting model.
@DLC that's one of our favorites as well - we still have the paper copy of the thesis about it that we paid to have xeroxed back in the 2000s. (they finally got around to scanning it at some point recently, so it's free now)
@DLC and yeah it is extremely striking that we can't even ask the voice assistant what lightbulbs we have or what names it wants us to use for them
@DLC @ireneista IDK, I think Siri would be able to replicate this level of performance 
@ireneista been thinking it's probably time for another swing at opencyc with some llm sprinkles for about 2 years now
@rho that'd be neat, too. do loop us in if you do that, we'd love to hear how it goes.
@ireneista oh definitely not going to be us!! just had the thought
@rho ah, oh well :) thanks for sharing it, anyhow :)

@rho @ireneista

heh, i'm interested in knowing more; there's this humongous read about their history and where they floundered, i have to find the link -- perhaps you know which i'm talking about? it's that memorable ;)

meanwhile, there's some talk coming out of IBM about neuro-symbolics, which presumably is the next step -- maybe the discrete-representations movement is how the two fields will be merged

@ireneista
I think - as someone of about the same generation, who's followed the AI field for many years - it's worth remembering that every time we had an "AI winter" we also did get a whole set of increasingly powerful methods and algorithms out of it. And they continued to be useful and used, just without the "AI" label slapped onto them.

Robust high-dimensional fuzzy matching is powerful and useful, with or without the "AI" polish on top.

@ireneista I still really like videogames so to me "AI" conjures things like

  • pathfinding
  • boids
  • STRIPS planning (I love love love the "Three States and a Plan" paper about the enemy AI in the FPS game FEAR)
  • inverse kinematics I guess? this is a thing I should really learn about because the results often look amazing but I just never read anything about it
  • maybe a wee bit of Prolog code here and there
@ireneista obviously all the game theory solution space search things like alpha beta pruning too! ❤️
@0x2ba22e11 absolutely! at one time (the 1970s, we think?), alpha-beta pruning was proposed as a fully-general problem-solving algorithm, for all the same tasks that "AGI" is allegedly for today
@0x2ba22e11 which, like.... it does still feel like a piece of that, to us. arguably, tree search of various kinds does more to actually solve problems than spicy spellcheck does. unfortunately it turns out to only work well when the atomic operations of the problem domain are tightly bounded and designed in concert with the search strategy (there was a great "why we failed" paper about that, that we once read)
@0x2ba22e11 ah well. we would say "research continues", but it doesn't. generative language models are consuming essentially 100% of available funding. maybe after the crash we can all go back to quietly working on things that might actually get somewhere.
@ireneista oh wow that is wild, I did not know that. lol. I can see why one would be exuberant but uh does it generalise beyond two player zero sum games at all?
@0x2ba22e11 sigh well the generalization is MCTS, which is still in use, so it's not like it's wrong so much as it's only one piece of the problem
@0x2ba22e11 of course, for a problem to be treated as tree-search, all the possible actions must be enumerable.........
@0x2ba22e11 right, yes! absolutely!
@0x2ba22e11 to be honest we still have it in our backlog to someday understand why you need to rig a model for inverse kinematics separately from how you rig it for regular kinematics
@ireneista see I have so little idea that I didn't even know that this was a thing. I literally just know what the technique is called, that people use it to get the 3d models to put their feet on the ground when walking, and a vague hunch that you probably have to invert a matrix somewhere in the process.
@ireneista @0x2ba22e11 I suspect the tl;dr is that inverse needs to understand something that's closer to musculature rather than just bounds on bones-and-stretching?

@0x2ba22e11 @ireneista Yeah, there are areas of work that use "things that have been called AI" for a lot of tasks that are "providing 'intelligence' to agents we don't pretend are remotely general or necessarily even all that capable" and people who hang around that absolutely use "AI" to mean "has it ever been seriously called AI?" rather than "is it the latest shiny?"

Naughty Dog's PSX lisp comes to mind there too: it's firmly PL territory by a contemporary account, but in historical terms it's partly an "AI-oriented PL" — an association lisps spent a long time shedding, because it kept subjecting them to AI winters.

@0x2ba22e11 @ireneista oh, ant and flocking algorithms are great examples of things I had covered in an intro to AI course in the early 00s that never got especially pretentious as opposed to potentially interesting for practical problems

@ireneista i like that you went w/ differentiable; i faintly remember that's what's needed for backprop to work?

don't remember if non-backprop methods also need differentiability

@lbruno yeah we're, like, not experts on this but yes, our understanding is, neural networks that aren't trained by differentiating them, uh..... don't work well. or at least, the historical ones didn't. differentiability was the innovation that made training work well, because it incorporates new information "all the way" instead of only a little bit.
@lbruno someone with a proper statistics background could explain that more formally, but we think the intuition is useful on its own
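a toy version of that intuition, if it helps — with a differentiable loss you get an exact direction of improvement at every step, so every update incorporates the new information in proportion to each parameter's contribution to the error (one made-up parameter and hand-derived gradient here, purely for illustration):

```python
# Toy gradient descent on a one-parameter model y = w * x, fitting data
# that was generated with w_true = 3. Because the loss is differentiable
# in w, each step moves w exactly in the direction that reduces error.

def loss(w, xs, ys):
    # mean squared error
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w, xs, ys):
    # d(loss)/dw, derived by hand
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0]
ys = [3.0, 6.0, 9.0]        # generated with w_true = 3

w = 0.0
for _ in range(100):
    w -= 0.05 * grad(w, xs, ys)   # w converges toward 3.0
```

non-differentiable training rules (flip a weight, keep the change if the output improved, that sort of thing) only tell you *whether* you got better, not *how much each weight was responsible*, which is the "only a little bit" part.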
@ireneista have a handwriting pda from the late 80s marketed as "AI", a 90s financial laptop marketed as "AI".. etc.. etc.
@discatte hah, great examples. mind if we reshare?

@discatte @ireneista
Y'all just have to look at the Korg M1 from the 80s

But there was no mechanical turk under the keys

@ireneista i do not know what "ai" five years from now will be made of, but "ai" ten years from now will be made of sticks and stones
@ireneista I remember my college advisor saying that "AI is anything we don't know how to do yet"; his graduate degree in the late '80s involved combinatorial search, which was one of the things called "AI" at that point
@jamey @ireneista That's also roughly how it was taught to me: Once we know how to do it we give it a name that isn't "AI" so it feels like "AI" research doesn't go anywhere even though computers can now do chess/igo/OCR/machine translation/speech recognition/image recognition/the list goes on.
@ireneista i have a game mod project with two mob behaviour scripting files with "ai" in their names and every time i'm reminded of that i keep thinking if i should rename them
@apophis in fairness to that naming convention, scripting game mobs is very much about producing the illusion of intelligence
@ireneista AI is called AI until everybody finally admits it’s just another fancy search algorithm.
@ireneista I wouldn't be surprised if it ends up legally defined in some contexts this goaround
@chris__martin gah lol that reminds us of a work thing we're supposed to be doing