If anyone is in the awkward position of being anti-AI while also needing to write a policy that doesn't disallow its use, my framework for such a policy is:
1. Discuss the company values - there should be something in there about innovation, responsibility, teamwork, or the like. Don't be afraid to carefully highlight the parts of the values statement that let the policy fit within it.
2. Discuss how AI is a tool and how its usage can fit into that values framework: specifically, point out that AI use can be innovative, but one must take responsibility for ensuring it produces good work, like any other tool.
3. And finally, the big dot points for the policy:
* Usage of AI can be innovative, and exploring it is encouraged
* Usage of AI must be in accordance with company values, i.e. people must be responsible, output must match accepted norms, etc.
* Usage of AI must be evaluated regularly: is this use actually benefiting the company as a whole?
IMHO this strikes the best balance between encouraging people who are AI-curious, giving you a framework to rein in those who are leaning on it too much, and allowing those who refuse to use it the freedom to keep doing so.
My biggest frustration with this whole LLM nonsense is that there are enough "true believers" using it and encouraging others to use it that you cannot legitimately ban it outright, even if you have very good reasons to. So the only path forward is to make sure that any rules around its usage cover all three streams of people: those already using it, those who are curious, and those who refuse to; and make sure they're all nudged into being responsible for what they produce.
#ai #it #policy #tech #technology