LLM/AI do not understand me…
This is totally unscientific and purely speculative…
In my personal experience, I struggle a lot to make LLMs/AI understand what I want. I believe I explain myself clearly, but I don't get the desired outcome.
Thus I have started to believe these models are calibrated to understand linear thinking specifically. I have an artistic education and a degree in Fine Arts; I spent all my years of education developing the right hemisphere of my brain (which is why I struggle a lot with coding). So when I write a prompt to these LLMs, they don't grasp my non-linear thoughts.
And this is why they struggle to do the tasks I assign them, and I struggle to get my job done properly...
Perhaps it might be true… 🤔