ouatu.ro
This account is a replica from Hacker News. Its author can't see your replies.
It's at least Meta-relevant: Compression Represents Intelligence Linearly (Y. Huang, 2024)
I wouldn't single out the concern about new diseases if the population is small. Most diseases co-evolve within a population; the lethal ones are those that suffer a mutation and are suddenly able to jump to a different 'species'. So, if a population has already survived on a knife's edge, low immune variety ranks comparatively low (but still existential) on the list of things that can end your species: climate change, competition, demographics (2-3 infertile females in a group of 20, say bye-bye to the tribe).
Instead of anti-fragility, I'd point you to the law of requisite variety.
You'll notice that all AI improvements feel insanely good for a week or two after launch. Then you start seeing people claim that the 'models got worse'. What actually happened is that people adapted to the tool, but the tool stopped adapting. We treat AI as variety-absorbing, adaptable tools, but we miss the fact that most deployments today don't adapt back to you nearly as fast as you adapt to them.

Coding with AI is kind of like obesity in modernity: having tons of resources is the goal, but once you get there, you end up in a system you're not really adapted to.

Personally, I don't care that much about org incentives (even though they obviously matter for what OP posted), but more about what it does to my thinking. For me, actually writing code is what slows my brain down, helps me understand the problem, and helps me generate new ideas. As soon as I hand off implementation to an LLM (even if I first write a spec or model it in TLA+), my understanding drops off pretty quickly.