@catsalad @syn A recent Condition Report (CR) showed up in our Corrective Action Program: an engineer who had been using some form of AI agent in their development environment (allowed) connected to our controlled compute (production HPC) environment. A second engineer noticed the agent running in the controlled environment (not allowed), and the first engineer killed the process.
The controlled environment is where we run qualified safety-related reactor design and analysis codes (e.g. RELAP5, MELCOR).
So yes, this is becoming an issue and it's being handled with the same care and attention as every other unexpected and questionable condition we run into in the course of designing and licensing a power reactor.
The primary issue here is not safety (though that is always evaluated); the main concern is leakage of Export Controlled Information (ECI) to unauthorized parties. Most of our technical work is considered ECI; the definition of ECI is vague and overbroad, but the law on improper disclosure is draconian (possibility of Federal jail time regardless of intent).
To be absolutely clear, this does not involve a literal operating power plant or any plant instrumentation or control systems; this affected a design and analysis environment.
Because these sorts of issues occur in a context, we have to look beyond the individuals involved for systemic organizational issues that allowed or encouraged(?) this incident. Our internal AI use policy is short and IMO sensible, focusing mainly on avoiding IP/ECI leakage and preventing bias-laundering in HR use. We know these systems are technically unreliable and have serious race/gender/etc. biases built in - the policy is crystal clear that individuals are responsible for ensuring their work is technically accurate and bias-free regardless of the tools they use.
But beyond policy, there's a pragmatic angle that (IMO) is not getting enough attention. Assume good faith among local staff and sensible policy. If one is well-informed and guards against the inadvertent use of chatbots and agentic tools, given their known problems and the sensitive nature of safety-related work, how is one supported by an IT organization that deploys more and more AI-ridden applications with no easy or obvious way to disable AI capabilities?
If IT provides or approves tools with AI capabilities enabled by default or with no means for users to disable AI if it's not appropriate to their work, how is IT not culpable for degrading our ability to work responsibly and within policy?
My principal role is safety-related software QA and V&V. I spend most of my time identifying use cases and safety functions, developing critical characteristics and requirements for engineering analysis software, and ensuring those requirements are met through tests & inspections. I'm responsible for ensuring that the software used by our engineers does what they need, that it operates as advertised, and that, through training and system configuration, engineers are not being set up to fail.
Does IT not perform a similar and appropriately rigorous software selection and evaluation process for the tools they deploy? Is AI controllability not a product requirement? If it is, why are these systems being deployed in a vulnerable/dangerous state with no user guidance on making them safe (i.e. disabling AI)? Are these requirements being passed to vendors, and what are the vendor responses?
Pragmatically, I think we all know the answers to these questions. I'm confident that what I do serves our organization. I'm not sure that (as an industry) IT can say the same or that this even concerns them.
but.. but.. but.. that will mean that we have to completely revamp our feature request process...
we do it all backwards. it's not even the tail wagging the dog. it's someone grabbing the tail and using it to pound the poor dog against the wall.
we don't talk to users, don't understand our users, ignore actual feedback that we do get from our users, then sit there wondering why users don't seem to love the shiny widget we just shipped.

Has anyone tried the improved Magic 9 ball?
@catsalad Me: Magic 8 ball - am I stupid?
Coconut: ...
It uses a small language model.