@siracusa I was really disappointed to hear you say, shortly before 1hr40m in the latest released episode of ATP, that LLMs are “really good at understanding.” That’s an enormous category error: they don’t *understand* anything. What they’re good at is *using statistics to predict what text could plausibly come next*.
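To caricature the statistical prediction being described here, a toy bigram model does the same basic thing on a tiny scale: count which words follow which, then sample the next word in proportion to those counts. (The corpus and helper name below are invented for illustration; real LLMs use neural networks over subword tokens, not raw bigram counts.)

```python
import random
from collections import Counter, defaultdict

# Invented toy corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed `prev`.
    words, weights = zip(*counts[prev].items())
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # one of: "cat", "mat", "fish"
```

There is no model of cats or mats anywhere in this process, only co-occurrence statistics; that is the sense in which plausible continuation differs from understanding.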
@siracusa You cannot actually trust anything that comes out of an LLM. There are AI systems that do have actual understanding, such as Cyc and other systems built on huge ontologies in knowledge representation systems, and they can do some pretty amazing things. For example, look into Cyc’s participation in the “Battlefield of the Future” wargames a couple of decades back to see how it compared with other battlefield decision-support systems.