“ChatGPT experiences artificial qualia.” No. This doesn’t make sense. It doesn’t even qualify as an oxymoron, because the two terms aren’t in tension — they’re in different categories entirely.
This is like attributing depth to a 2D object because a 3D object, when projected onto a flat surface, also looks 2D. No amount of modification within two dimensions will turn the 2D object into a true 3D one. The 2D object can’t be described as shallow or deep, because it fundamentally lacks the dimension of depth altogether. Contrast a genuine oxymoron like “shallow depth” with the sheer nonsense of “depth without the dimension of depth.”
Being wrong isn’t even the issue at this point. It’s the arrogance. This is literally called the “hard problem” of consciousness for a reason. There’s no singularity coming through LLMs.