@faoluin @lokeloski we can either use it for writing code or writing tests, either way we're entirely responsible for what we contribute.
The junior-ish dev who was wildly excited is now regretting his decision: he's spending more time writing tests, when originally he'd hoped to spend less time writing code. But the joy of being a junior is that you learn so much so fast!
It's the Gell-Mann amnesia effect all over again.
-----------
The Gell-Mann amnesia effect is a claimed cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.

@AdrianRiskin @lokeloski It's one of the reasons that, when pointing out that I do not like Generative LLMs for the work they output, I do emphasize that it's not just *my* programming expertise that I feel this for.
Like, I feel the same way about books: if you wrote it with an LLM, and we can see that because a prompt made it into the printed version, that tells me you did not read what you claimed to have "written" with an LLM. Why should I read it, then, when I know the LLM will do the same thing to prose that it does to math, or coding, or images?
@PhilWill @ratsnakegames @lokeloski
ah, I see it now: *this* is at the root of why mandated AI use is so corrosive. Someone up the hierarchy, not understanding the complexity of the work of their subordinates, declares they are replaceable by the machine.
Hmm. I need to think on this.
@gkrnours @lokeloski I think that's a niche effect, like considering everyone in the OP to be "creators" and asking why they all think LLMs can do their jobs.
A C++ developer might think that LLM generated Python code is no worse than what they'd write, while a Python dev thinks the same about C++ code. They can both be right, because their cross-field abilities are low.
Terry Tao?
@lokeloski
https://bsky.app/profile/magicmooshka.bsky.social/post/3mbyyc2lhg22s
The person who wrote it apparently!
@lokeloski AI generation is a useful facsimile only in places where nothing at all would also have been a more or less acceptable alternative.
Which begs the question as to why we're wasting so much money on it.
This is why CEOs assume it can do everything, because they don't know how to do anything.
@lokeloski
I recently went to an opera where the composer was not only present but also performing as one of the soloists, among five other vocalists, along with a men's choir, accompanied by a full orchestra.
The backdrop to this rich contribution to human musical art was AI visuals projected onto a screen.
it seems like each department says that AI can be useful in every field except the one that they know best.
it's only ever the jobs we're unfamiliar with that we assume can be replaced with automation.
The more attuned we are to certain processes, crafts and occupations, the more we realize that gen AI will never be able to provide a suitable replacement. The case for its existence relies on our ignorance of the work and skill required to do everything we don't. 2/2
@lokeloski Very well put. To me, this is similar to the Gell-Mann amnesia effect, where for subjects we have deep knowledge about, we see all the flaws in media reports, but tend to assume that for all other subjects, the media reports are basically fine. @davidgerard
https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect?wprov=sfla1
@Tattie @davidgerard @geeeero @lokeloski
Well, he is smart. In his field. Like you may be in your fields. It's not possible for a human brain to be smart in everything.
@davidgerard @geeeero @lokeloski I'm not *demanding* anything, I was just asking 💀
I read the Wikipedia article and it didn't seem like trash to me, it says it hasn't been formally recognised but that it's "gained traction in critical thinking and media literacy discussions"
@hazelnot @geeeero @lokeloski and it cites a total of one thing
hence the notability tag
i put a note on the talk page that this is based on almost nothing except a vastly overlength quote, and the few sentences in the Crichton article do exactly the same job
@davidgerard @geeeero @lokeloski
The story may be made up but the effect is real. I started noticing it in journalism in the '90s... journalists often seemed authoritatively good at stuff I didn't know anything about, but as soon as they started writing about the Internet, or anything else that was at the time a bit esoteric but I know a lot about, their stuff was obvious twaddle.
See also Knoll's Law:
@mathew @resuna @davidgerard @lokeloski In the arena of science, physicists are similar:
@lokeloski I’ve seen this attitude even in some highly skilled people.
The idea that what they’re doing is obviously complex and requires deep knowledge and skills, but work that others are doing is obviously trivial. Very surprising.
It’s not uncommon for undergraduates to assume some field is easy, because the introductory course they had on it was, but for accomplished professors to have similar ideas about fields outside of their expertise? Why? Is there a psychologist in the house?