Yeah so let's refocus on the real perspective here. It doesn't matter at all how good these models are at code or finding vulnerabilities if we destroy our ability to seek and share knowledge.

https://futurism.com/artificial-intelligence/google-ai-overviews-misinformation

Analysis Finds That Google’s AI Overviews Are Providing Misinformation at a Scale Possibly Unprecedented in the History of Human Civilization

A new analysis commissioned by The New York Times suggests that Google's AI Overviews are wrong an astonishing percentage of the time.

Futurism

And look: I've spent time exploring the capabilities of these tools because I seek understanding through experience. I've been called spineless, fascist, racist, and just plain stupid for doing so. What I learned was important to my opposition to these tools, but also to my empathy for their users.

But the core truth remains. You cannot have what works without the attendant toxins. They are inextricable. As ever, my primary contention is that the technology is destructive to the fabric of human society, and on those grounds should we make our stand.

@mttaggart
Yeah, I keep mostly quiet about how I use them to study them, and none of that code makes it into my packaged work. But you do have to sit at the slot machine to know its pull, and you have to understand the subtleties of what these tools do and don't do to be credible in opposing them on a technical basis.

It's a big and broad universe, with lots of different contexts, applications, and means of understanding things. There will never be a time when I allow the informational violence they do into my space, but the code production facet is sort of shoved into my face, and I have to take a different approach.