If I think of all the levers I could pull to improve the performance of a software development team, the data's clear that "AI" code generation has so little leverage where team outcomes are concerned that I doubt it would even be on the list.
A lot of security is based on trust, and trust relies on competence. The security theatre I get from a lot of sites and apps, often dressed up with MFA, does not inspire that trust.
That device you tell me is unrecognised? It's the one I've used to access the app every day for at least the last year.
If you want to convince me your app is secure, start by demonstrating competence. Poorly engineered products don't demonstrate it. Neither do KPI-driven product teams.
"We can't afford to understand the code we're deploying to production, because then we couldn't keep up with the pace at which LLMs generate code."
That's the tail wagging the dog.