"So if AI detection becomes impossible, we will have to assume humanity just to operate normally. As I mentioned, this is serving me relatively well in editing and marking: I will assume that if something has someone's name or signature, they wrote it, and they should assume all of the consequences of that text.
For the same reason, I don't think that any sort of legislative solution will work. The technology is too far ahead to expect any sort of ban. We could probably try to enact legislation that obliges LLM developers to clearly identify when an AI has been used to generate text, but this would only open the door for models trained in countries without such restrictions to become popular. And then there would probably be AI humanisers that strip out such identifiers.
A solution that appears to be emerging in many writing circles is to loudly attack anyone who is using AI text, and to try to build consensus in the writing professions to loudly oppose any sort of AI use. Writers are now at the stage artists were at back in 2022: AI is just about to get good enough to threaten people's jobs. So there is a bit of a siege mentality emerging, where the first instinct will be to punish and ostracise anyone who breaks this code. I'm highly sceptical of this approach, as it is likely to lead to witch-hunts, false accusations, purity spirals, and other nasty online behaviour that is unlikely to fix the problem.
Eventually, I think that we will find some balance."
https://www.technollama.co.uk/why-are-people-adopting-ai-to-write