Sneerquence classics: Eliezer on GOFAI (half serious half sneering effort post)
I found a neat essay discussing the history of Doug Lenat, Eurisko, and Cyc [here](https://yuxi-liu-wired.github.io/essays/posts/cyc/). The essay is pretty cool. Doug Lenat made one of the largest and most systematic efforts to make Good Old Fashioned Symbolic AI reach AGI through sheer volume and detail of expert system entries. It didn't work (obviously), but what's interesting, especially in contrast to LLMs, is that Doug made his business, Cycorp, actually profitable: it actually produced useful products, in the form of custom-built expert systems for various customers, over decades, with a steady level of employees and effort spent (as opposed to LLM companies sucking up massive VC capital to generate crappy products that will probably go bust). This sparked memories of lesswrong discussion of Eurisko… which leads to some choice sneerable classic lines.

In [a sequence classic](https://www.lesswrong.com/posts/rJLviHqJMTy8WQkow/recursion-magic), Eliezer discusses Eurisko. Having read an essay that explains Eurisko more clearly, a lot of Eliezer's discussion seems much emptier now.

> To the best of my inexhaustive knowledge, EURISKO may still be the most sophisticated self-improving AI ever built - in the 1980s, by Douglas Lenat before he started wasting his life on Cyc. EURISKO was applied in domains ranging from the Traveller war game (EURISKO became champion without having ever before fought a human) to VLSI circuit design.

This line is classic Eliezer Dunning-Kruger arrogance. The lessons from Cyc were used in useful expert systems, and the effort of building those expert systems was used to continue advancing Cyc, so I would call Doug really successful actually, much more successful than many AGI efforts (including Eliezer's). And it didn't depend on endless VC funding or hype cycles.
> EURISKO used “heuristics” to, for example, design potential space fleets. It also had heuristics for suggesting new heuristics, and metaheuristics could apply to any heuristic, including metaheuristics. E.g. EURISKO started with the heuristic “investigate extreme cases” but moved on to “investigate cases close to extremes”. The heuristics were written in RLL, which stands for Representation Language Language. According to Lenat, it was figuring out how to represent the heuristics in such fashion that they could usefully modify themselves without always just breaking, that consumed most of the conceptual effort in creating EURISKO.
>
> …
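To make the shape of that idea concrete, here is a minimal toy sketch (my own illustration in Python, not Eurisko's actual RLL; the names and the numeric "worth" field are assumptions for illustration): if heuristics are ordinary data, then a meta-heuristic is just a heuristic whose input and output are other heuristics, and nothing stops you from applying one to another, or to itself.

```python
# Hypothetical toy sketch of "heuristics modifying heuristics" -- NOT
# Eurisko's actual RLL. Heuristics are plain data, so a meta-heuristic
# can inspect and rewrite them like any other object.
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Heuristic:
    name: str
    action: str        # kept symbolic (a string) so other heuristics can edit it
    worth: int = 500   # assumed Eurisko-style numeric rating of usefulness


def specialize_extremes(h: Heuristic) -> Heuristic:
    """Meta-heuristic mirroring the quote: rewrite 'extreme cases'
    into 'cases close to extremes'."""
    if "extreme cases" in h.action:
        return replace(
            h,
            name=h.name + "-specialized",
            action=h.action.replace("extreme cases", "cases close to extremes"),
        )
    return h


h0 = Heuristic("h0", action="investigate extreme cases")
h1 = specialize_extremes(h0)
print(h1.action)  # -> investigate cases close to extremes
```

The part Lenat says consumed most of the conceptual effort, representing heuristics so that self-modification doesn't "always just break" them, is exactly what a toy like this glosses over.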
> EURISKO lacked what I called “insight” - that is, the type of abstract knowledge that lets humans fly through the search space. And so its recursive access to its own heuristics proved to be for nought.

> Unless, y’know, you’re counting becoming world champion at Traveller without ever previously playing a human, as some sort of accomplishment.

Eliezer simultaneously mocks Doug's big achievements and exaggerates this one. The detailed essay I linked at the beginning actually explains the Traveller win properly: Traveller's rules inadvertently encouraged a narrow, degenerate (in the mathematical sense) strategy. The second-place player actually found the same broken strategy Doug (using Eurisko) did; Doug just did it slightly better, because he had gamed it out more and included a few ship designs that countered the opponent running the same broken strategy. It was a nice feat of a human leveraging a computer to mathematically explore a game; it wasn't an AI independently exploring a game.

Another lesswronger brings up Eurisko [here](https://www.lesswrong.com/posts/t47TeAbBYxYgqDGQT/let-s-reimplement-eurisko).
Eliezer is of course worried:

> This is a road that does not lead to Friendly AI, only to AGI. I doubt this has anything to do with Lenat’s motives - but I’m glad the source code isn’t published and I don’t think you’d be doing a service to the human species by trying to reimplement it.

And yes, Eliezer actually is worried a 1970s dead end in AI might lead to FOOM and AGI doom. To a commenter asking:

> Are you really afraid that AI is so easy that it’s a very short distance between “ooh, cool” and “oh, shit”?

Eliezer responds:

> Depends how cool. I don’t know the space of self-modifying programs very well. Anything cooler than anything that’s been tried before, even marginally cooler, has a noticeable subjective probability of going to shit. I mean, if you kept on making it marginally cooler and cooler, it’d go to “oh, shit” one day after a sequence of “ooh, cools” and I don’t know how long that sequence is.

Fearmongering back in 2008, even before he had given up and gone full doomer.
And this reminds me: Eliezer did not actually predict which paths would lead to better AI. [In 2008](https://www.lesswrong.com/posts/juomoqiNzeAuq4JMm/logical-or-connectionist-ai) he was pretty convinced neural networks were not a path to AGI.

> Not to mention that neural networks have also been “failing” (i.e., not yet succeeding) to produce real AI for 30 years now. I don’t think this particular raw fact licenses any conclusions in particular. But at least don’t tell me it’s still the new revolutionary idea in AI.

Apparently it took all the way until AlphaGo (sometime between 2015 and 2017) for Eliezer to start to realize he was wrong. (He never made a major post about changing his mind; I had to reconstruct the process and estimate the date from [other lesswrongers discussing it](https://www.lesswrong.com/posts/WyJKqCNiT7HJ6cHRB/when-did-eliezer-yudkowsky-change-his-mind-about-neural) and from noticing small comments from him here and there.) Of course, even as late as 2017, MIRI was still neglecting neural networks to focus on abstract frameworks like [“Highly Reliable Agent Design”](https://www.lesswrong.com/posts/5bd75cc58225bf0670375321/on-motivations-for-miri-s-highly-reliable-agent-design-research).

So yeah. Puts things into context, doesn’t it?
Bonus: [one of Doug's last papers](https://arxiv.org/pdf/2308.04445) lists a lot of lessons LLMs could take from Cyc and expert systems. You might recognize the co-author, Gary Marcus, from one of the LLM-critical blogs: https://garymarcus.substack.com/