The problem is that the tool fails the fundamental test of "should I use AI to solve this problem?"
AI is safe to use *only* if at least one of the following criteria is met:
1) The output doesn't matter (e.g. writing a story for your kids is safe)
2) Someone qualified assesses the veracity of the output before acting on it in any way that could cause harm.
The power of AI is its ability to *enhance* humans: helping them make decisions, process information, and automate tasks that are time-consuming or mentally taxing but generally low-stakes (like drafting an email).
But when you step past that into delegating life-changing decisions to an AI, whether explicitly (the loan company says no) or implicitly (the magic box says 12% yes, and the human operator develops a rule of thumb to press "go" if the number > 10), you run into a world of Kafkaesque garbage very quickly.
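That implicit failure mode can be sketched in a few lines. This is a purely hypothetical illustration (the score function, field names, and threshold are all invented): a model's numeric output is nominally "advisory", but once the operator adopts a fixed cutoff, the human-in-the-loop has effectively been replaced by a comparison operator.

```python
# Hypothetical sketch of implicit delegation: an "advisory" score
# plus an operator's rule of thumb quietly becomes automation.

def model_score(application: dict) -> float:
    """Stand-in for an opaque model; returns a score in [0, 100].
    Toy heuristic purely for illustration -- not a real model."""
    return 12.0 if application.get("flagged") else 8.0

def operator_decision(application: dict, threshold: float = 10.0) -> str:
    """The operator's rule of thumb: press "go" iff score > threshold.

    All of the model's uncertainty and the applicant's context collapse
    into a single yes/no. A human is nominally in the loop, but no one
    is actually assessing the output anymore (violating criterion 2).
    """
    return "go" if model_score(application) > threshold else "no"

print(operator_decision({"flagged": True}))   # "go": 12 > 10
print(operator_decision({"flagged": False}))  # "no": 8 <= 10
```

The point of the sketch is that nothing in the code records *why* the score was 12 rather than 8, so the person on the receiving end of "no" has no one and nothing to appeal to.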