Stray thought on #ontology. In rpg's famous essay he points out that a goal of #objectOriented design is compression: if class B and class C have slots in common, we can move the common slots into a shared superclass A and derive both B and C from A, avoiding duplicating ~ontology in each.
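A minimal sketch of that compression, in Python rather than CLOS (class names and slots here are placeholders, not from rpg's essay):

```python
# Slots common to B and C are hoisted into superclass A,
# so neither subclass has to restate them.
class A:
    def __init__(self, name, position):
        self.name = name          # shared slot
        self.position = position  # shared slot

class B(A):
    def __init__(self, name, position, speed):
        super().__init__(name, position)
        self.speed = speed        # slot unique to B

class C(A):
    def __init__(self, name, position, color):
        super().__init__(name, position)
        self.color = color        # slot unique to C
```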

But if I have two ontologies 𝛽 and 𝛾 it is interesting to check their overlap, but it would be obtuse to introduce a superclass 𝛼 with no real meaning for that duality.
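Checking the overlap directly, without minting an 𝛼, might look like this (the ontologies are modeled as plain sets of slot names, and the contents are made up for illustration):

```python
# Two ontologies as sets of slot/concept names (hypothetical contents).
beta = {"name", "position", "speed"}
gamma = {"name", "position", "color"}

# The intersection is interesting to inspect on its own...
overlap = beta & gamma
# ...but nothing obliges us to reify it as a superclass alpha.
print(sorted(overlap))
```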

@screwlisp
> it would be obtuse to introduce a superclass 𝛼 with no real meaning for that duality.

Now you've strayed into religious issues. 😆

@screwlisp
On a more serious note, sharing and avoiding duplication happens to be a form of compression, which is relevant because someone in recent decades made the claim that intelligence is "just" compression.

That seemed interesting as a claim of a *part* of intelligence, but ludicrous as a claim that that's *all* of intelligence (according to me).

In any case, LLMs are demonstrably as small as they are because they compress their massive input data, yet they remain quite good at approximately reconstructing it, in carefully designed trials reported in the literature.

I haven't looked into what people think about LLMs having a latent ontology as part of their latent semantic space, but it seems obvious that they do have such a thing, if we dig in and look.

A zillion things along these lines are currently unknowns, but it's all interesting.