31% of employees are ‘sabotaging’ your corporate AI strategy!

https://awful.systems/post/5467923


not enough AI? right to jail. too much AI? believe it or not, also jail
https://www.youtube.com/watch?v=Abx6iekuN18&list=UU9rJrMVgcXTfa8xuMnbhAEA - video
https://pivottoai.libsyn.com/20250831-31-of-employees-sabotaging-corporate-ai-strategy - podcast

Refusing to use AI tools or output. Sabotage!

Definitely guilty of this. I refused to use AI-generated output when it was clearly hallucinated BS from start to finish (repeatedly!).

I work in the field of law/accounting/compliance, btw.

Even better to keep the correspondence in writing.
That way you can show others (and hope that someone cares) what you rejected and what they were trying to push.
This may only be a problem if the people in charge don’t understand why it’s wrong. “But it sounds correct!” etc.

Not a problem.

If it manages to stay in history, hopefully someone after the next dark ages will read it and give you vindication.

You can definitely ask the AI for more jargon and add information about irrelevant details to make it practically unreadable. Pass this through the LLM to add more vocabulary, deep-fry it, and send it to management.

Maybe it’s also considered sabotage if people (like me) try prompting the AI with about 5 different questions they are knowledgeable about, get wrong answers every time (despite clearly worded prompts), and then refuse to continue trying. I guess the expectation is to try and try again with different questions until one correct answer comes out, and then use that one to “evangelize” about the virtues of AI.

This is how I tested too. It failed. Why would I believe it on anything else?