This, from @minaskar, is particularly lucid and insightful:

https://ergosphere.blog/posts/the-machines-are-fine/

The machines are fine. I'm worried about us.

On AI agents, grunt work, and the part of science that isn't replaceable.

@repepo @minaskar

Thanks for posting this article. Some things jumped out at me:

* Relying on an LLM to do one's thinking affects one's ability to think. An obvious historical analogy is memorizing multiplication tables vs. depending on a calculator.

* The failure modes described (e.g. confabulation and sycophancy) are growing pains of an immature technology. For example, we find that a disciplined, role-separated array of LLMs, operating at the criticality point for context management, eliminates confabulation.

* An LLM can hold more in context than any human mind can, allowing manipulation of domains that would otherwise be impossible to engage with. I make no value judgement about this capacity; I only note its existence.
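
The role-separated arrangement mentioned above can be sketched in miniature: one model drafts an answer, a second independently checks it against the supplied context, and nothing unverified is returned. This is only an illustrative skeleton, not the authors' actual system; the function names are hypothetical stubs standing in for real model calls.

```python
# Minimal sketch of a role-separated two-model pipeline.
# draft_answer and verify_answer are hypothetical stubs standing in
# for calls to two independently prompted models.

def draft_answer(question: str, context: str) -> str:
    # Stand-in for the "drafter" role: generates a candidate answer
    # grounded in the provided context.
    return f"Answer based on: {context}"

def verify_answer(answer: str, context: str) -> bool:
    # Stand-in for an independent "checker" role: validates the draft
    # against the source context instead of trusting the drafter.
    return context in answer

def answer_with_check(question: str, context: str, retries: int = 2):
    # Retry the drafter a bounded number of times; refuse rather than
    # return an unverified (possibly confabulated) answer.
    for _ in range(retries + 1):
        candidate = draft_answer(question, context)
        if verify_answer(candidate, context):
            return candidate
    return None  # explicit refusal beats a confident fabrication
```

The key design point is that the checker never shares the drafter's role or prompt, so a confabulation has to survive an independent pass before it reaches the user.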