This, from @minaskar, is particularly lucid and insightful:
This is an amazing essay. Thank you so much for repping it.
@repepo I quite like the line:
> supervision is the physics
where you can replace physics with many others, such as "programming".
The difference between the two usage modes, accelerating knowledge one has already won versus replacing the effort needed to win that knowledge in the first place, harks back to the centaur vs. reverse-centaur modes of operation described by @pluralistic
That final note, that the people who are already careful need to be even more careful, resonates too!
Good to see that the tradition of #cosmologists having a broad view of the Earth-bound world is thriving. Two overtly stated keywords from #DavidWHogg 's #ArXiv_2602_10181 [1]:
* "We beat ploughshares into swords" (p11)
* "Astrophysics represents a borderless world" (p12)
What responsibility comes in paradoxically doing both? (#Manicheism fails)
#MinasKaramanis
#NatalieBHogg
#RobertoTrotta #ArXiv_2602_10165 [2]

At time of writing, large language models (LLMs) are beginning to obtain the ability to design, execute, write up, and referee scientific projects on the data-science side of astrophysics. What implications does this have for our profession? In this white paper, I list - and argue for - a set of facts or "points of agreement" about what astrophysics is, or should be; these include considerations of novelty, people-centrism, trust, and (the lack of) clinical value. I then list and discuss every possible benefit that astrophysics can be seen as bringing to us, and to science, and to universities, and to the world; these include considerations of love, weaponry, and personal (and personnel) development. I conclude with a discussion of two possible (extreme and bad) policy recommendations related to the use of LLMs in astrophysics, dubbed "let-them-cook" and "ban-and-punish." I argue strongly against both of these; it is not going to be easy to develop or adopt good moderate policies.
@repepo I come back to the forklift analogy: if you're going to the gym to build strength, the lifting of weights is the work, and hiring a forklift to outsource the lifting of the weights leaves you with the results but none of the gains.
The models are cognitive forklifts, and if you cross that invisible line Hogg talks about, where the model starts doing the lifting instead of just the placing of the weights, you end up cognitively weaker for it.
@repepo great stuff! I loved this quote the most:
“yet, somehow, when it comes to AI agents, we've collectively decided that maybe this time it's different. That maybe nodding at Claude's output is a substitute for doing the calculation yourself. It isn't. We knew that before LLMs existed. We seem to have forgotten it the moment they became convenient.
Centuries of pedagogy, defeated by a chat window.”
@repepo “Frank Herbert (yeah, I know I'm a nerd), in God Emperor of Dune, has a character observe: "What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking; there's the real danger." Herbert was writing science fiction. I'm writing about my office. The distance between those two things has gotten uncomfortably small.”
This passage has been on loop in my head since reading this.
(I really need to read the Dune books…)
@repepo oh boy, yes, this:
> That instinct doesn't come from a subscription. It comes from years of failing at exactly the kind of work that people keep calling grunt work. Making the models smarter doesn't solve the problem. It makes the problem harder to see.
so much of computing is pareidolia: we think the computer is being "smart" when it's really us doing the clever bit (and i mean the user, not the programmer). it's a useful magic trick, but. BUT.