@siracusa I was really disappointed to hear you say, shortly before 1hr40m in the latest released episode of ATP, that LLMs are “really good at understanding”, which is an enormous category error. They don’t *understand* anything! What they’re good at is *using statistics to predict what text could plausibly come next*.
@siracusa Thanks for covering that later in the episode. I turned it off and messaged you when I heard you ascribe understanding, and only later listened to the rest. But think about this: if you, so careful with language, make such statements, what chance do “normal” people have?
@siracusa Also, Robot Or Not topic: Should an ontology be represented by a hierarchy, a heterarchy (“multiple inheritance”), or a graph of relational tuples? In Robot or Not you seem to default to hierarchy. ;)
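A minimal sketch of the distinction the question above draws, with illustrative names (none of these classes or triples come from the show): a hierarchy gives each concept one parent, a heterarchy allows several via multiple inheritance, and a tuple graph drops the tree shape entirely in favor of (subject, predicate, object) facts, RDF-style.

```python
# 1. Hierarchy: single inheritance; every concept has exactly one parent.
class Machine: ...
class Robot(Machine): ...

# 2. Heterarchy: multiple inheritance; a concept can have several parents.
class Appliance(Machine): ...
class Roomba(Robot, Appliance): ...  # both a Robot and an Appliance

# 3. Graph of relational tuples: facts as (subject, predicate, object)
#    triples; no single tree is privileged, and new relation kinds
#    (part_of, made_by, ...) can coexist with is_a.
triples = {
    ("Roomba", "is_a", "Robot"),
    ("Roomba", "is_a", "Appliance"),
    ("Robot", "is_a", "Machine"),
    ("Appliance", "is_a", "Machine"),
}

def ancestors(node, rel="is_a"):
    """Transitively follow `rel` edges from `node` through the triple graph."""
    seen = set()
    frontier = {node}
    while frontier:
        step = {o for (s, p, o) in triples if s in frontier and p == rel}
        frontier = step - seen   # only visit nodes we haven't seen yet
        seen |= step
    return seen

print(ancestors("Roomba"))  # {'Robot', 'Appliance', 'Machine'}
```

The tuple-graph version subsumes the other two: a hierarchy is the special case where each subject has one `is_a` edge, a heterarchy the case where it may have several.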