Computational functionalism is an unreasonable axiom for a property that is inherently tied to internal experience and thus to intentionality. And if I see one more paper that opens with "we assume computational functionalism" when writing about the purported potential consciousness of AI systems, I am going to lose it, because the massive leap of faith lies in declaring computational functionalism remotely reasonable, let alone true.









