Thinking about this again: when these tools encounter ambiguity, where their logic has no way to verify correctness against the training data, they keep cycling or hallucinate an "answer" instead of coming back to the human and asking them to make a judgment.
The traditional way of resolving this would be for the human to make a choice they take responsibility for, but since these tools partly function as accountability sinks, they're not designed for that.
@sue
I believe you’re spot-on, this whole thread.
A VC firm provided a company (larger than us) the funds to buy us, and now has a controlling interest in the new whole. We're being forced to use AI to code. I suspect the VC also has AI investments, given the way they ignore arguments about protecting our IP.
It’s gut-wrenching at times, the short-sightedness.
But your story jibes with ours 100%.