I fully agree that using automated tools for tasks you don't know how to do manually is a bad idea: if you don't understand the task, you can't judge the output.
The area where I see the most potential for AI-generated code is unit tests. But before going there, we need better tools for evaluating some sanity aspects of unit tests.
Code coverage is one measurable metric, but I would take it a step further and not just require each line to be covered by tests. Instead I want each conditional in the code to be tested with both a true and a false value (i.e. branch coverage).
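A minimal sketch of the difference, with a hypothetical function (`apply_discount` and the test values are my invention, not from any real suite). The first assertion alone exercises only the true branch of the conditional; branch coverage additionally demands a test that forces the false branch:

```python
def apply_discount(price, is_member):
    # One conditional: branch coverage requires is_member to be
    # exercised as both True and False across the test suite.
    if is_member:
        return price * 0.9
    return price

# Exercises only the True branch of the conditional.
assert apply_discount(100, True) == 90.0

# This second case forces the False branch; only with both
# does the conditional count as fully tested.
assert apply_discount(100, False) == 100
```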
Moreover, I want it to be such that if you negate a condition in the code itself, some unit test must fail — essentially mutation testing. And if a particular test case passes regardless of what modification is made to the code under test, then that test case was not particularly useful in the first place.
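The negation check can be sketched like this (all names here are hypothetical, chosen just to illustrate the idea): build the function under test either as-is or with its condition negated, run the same tiny suite against both, and require that the suite passes on the original but fails on the mutant:

```python
def make_is_adult(mutated=False):
    # Build the original function, or a mutant with the
    # conditional negated (the modification described above).
    def is_adult(age):
        cond = age >= 18
        if mutated:
            cond = not cond  # the "negated condition" mutant
        return cond
    return is_adult

def suite_passes(fn):
    # A tiny test suite; True only if every case passes.
    cases = [(17, False), (18, True), (30, True)]
    return all(fn(age) == expected for age, expected in cases)

# The suite must pass on the original code...
assert suite_passes(make_is_adult(mutated=False))
# ...and fail on the mutant, i.e. the mutant is "killed".
# A suite that also passed here would be telling us nothing.
assert not suite_passes(make_is_adult(mutated=True))
```

Tools in this space exist (e.g. mutation-testing frameworks that generate such mutants automatically), but the point is the acceptance criterion itself, not any particular tool.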
If generated test cases satisfy all of that, then there is a chance that reviewing them could be less work than writing them from scratch yourself. All of this is of course hypothetical, as I have not yet seen an AI as capable as what I describe (and I haven't been looking for one either).
But never send generated tests to somebody else for review without reviewing them yourself first.