Just a heads up for anyone who might use this in an argument: I just tested on several models and the generated responses accounted for the logical fallacy. Unfortunately, it isn’t real.

(Funny nonetheless.)

Tested on GPT-5 mini and it’s real tho?

Man, I really hate how much they waffle. The only valid response is “You have to drive, because you need your car at the car wash in order to wash it”.

I don’t need an explanation of what kind of problem it is, nor a breakdown of the options. I don’t need a bullet-point list of arguments. I don’t need pros and cons. And I definitely don’t need a verdict.

Yeaaah, they waffle a lot, I hate that.
You can actually fix this in the settings: there’s an option for permanent prompt tunings, and you can add things like “focus on concise answers” or my favorite: “I don’t need to be glazed. I don’t need to be told that it’s an insightful question or that it reaches the heart of the matter. Just focus on answering the question.”
I’ve found some success in using system prompts or similar to tell it to skip the explanations lol
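
If you’re calling it over an API rather than the chat UI, the same trick is just a system message. A minimal sketch, assuming the OpenAI Python SDK; the model name and the instruction wording are placeholders, not anything official:

```python
# Minimal sketch: use a system message to suppress preamble and verdict-style padding.
# Assumes the "openai" Python SDK and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # substitute whatever chat model you actually use
    messages=[
        {
            "role": "system",
            "content": (
                "Answer concisely. No restating the problem, no pros-and-cons list, "
                "no compliments about the question, no verdict section. Just the answer."
            ),
        },
        {"role": "user", "content": "Should I drive or walk to the car wash?"},
    ],
)

print(response.choices[0].message.content)
```

Your mileage will vary by model; the instruction wording above is just one phrasing that has worked for me.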
It’s the illusion of reason

I’ll also accept sarcasm.

“Unless you’ve successfully trained your car to follow you like a loyal golden retriever, you’re probably going to have to drive.”

They are trained to yap because it gives them a higher likelihood of reaching the correct answer. If they don’t go on and on in the text presented to the user, they at least do it in hidden text.