Using machine learning to model brains, proteins, materials: ok.
Using LLMs to produce summaries: fucking stupid
@Dialectician My sense is that, even when we apply reductionist methods, we can reach valuable insights if we remain open and attentive and keep a lid on our hubris, and if, having done the work, we constantly remind ourselves that the map is not the territory.
But that's hard, especially in a world saturated with transactional incentives.
On a related note, I listened to ‘How Life Remembers: From Metamorphosis to Simulation’ yesterday and found it fascinating:
https://helioxpodcast.substack.com/p/how-life-remembers-from-metamorphosis