Speaking of training AI...
... wait till you use Office 365 or Google Mail 😑
@bert_hubert I think Mastodon's LLM policy is a good one, iiuc: when you submit a PR you are responsible for it.
If you produce good code with an LLM and review it, that's fine. If you submit LLM-generated slop, you get ignored.
A good LLM-generated PR should be indistinguishable from a good human one.
@bert_hubert perhaps I did :)
My response comes down to: how do you evaluate "this PR is LLM-generated, so I will reject it a priori"?
What I'm saying is that if it appears LLM-generated then it does not "look good", and so it's fine to reject it.
Otherwise, I think if a dev took the time to edit it, it's just like any other PR.
@bert_hubert If I were still working in software development I think I’d be right there with you on that one.
But I did find the Register’s recent interview with Greg Kroah-Hartman interesting:
https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/
Obviously, blatant AI/LLM slop deserves to go straight in the bin, but I'm curious what has caused a seeming step change in the quality of AI bug reports, etc. Have the LLMs suddenly improved, or are people finally actually checking what they produce?