With this specific example I have to agree.
I was once on a college team doing product testing, with the goal of producing novel results. The two most domineering people in the group talked the rest into running a test that, according to our own literature review, had already been performed.
But I'd gotten in trouble for being too blunt earlier, so I didn't say so outright. I waffled and hedged instead, and was overruled.
We ended up doing 12 weeks of useless work.
Sometimes it's better to soften your language so that your message will be better received; other times it's better to be blunt, say "this will never work," and then explain in detail why.
An AI/LLM will never be able to tell which of the two is appropriate in a given situation.
Using an AI to soften your language to the level allistics (sometimes) prefer may reduce how often you upset people and get yelled at. It will not, however, reliably help you get your message across.