I don't really want to focus too much on this anymore, but I'm still in a research web, rereading this slightly older critique of IIT

https://www.sciencedirect.com/science/article/pii/S105381001830521X

"The unfolding argument: Why IIT and other causal structure theories cannot explain consciousness", A. Doerig et al., 2019

@axoaxonic Although interesting, I find the main theme of this paper slightly off and not definitive. "The system with phi=0 is unconscious yet empirically indistinguishable from the system with phi>0, therefore IIT is empirically unfalsifiable" would be correct reasoning only if the two systems were empirically equivalent for every i/o function, and i/o functions were the only functions they performed and the only empirical observables we could access. 1/2
@axoaxonic But it's not hard to imagine, e.g., two robots, one performing the i/o function with phi=0 and one performing it with phi>0, while we retain the possibility of cracking open both robots and structurally seeing how one instantiates in its hardware the causal structure allowing phi>0.
Am I missing something big? Or small?
I reckon this is also why I don't think IIT is *really* idealist or panpsychist. You may have any material substrate, but you can't have just any material structure, for phi>0 2/2
@fab13 Oh also, section 2.7 of the "Unfolding Argument" paper pretty much addresses your point directly
@axoaxonic I'll get back to the whole paper and this section specifically!
I think I'm getting misled by a few things: on the one hand, I don't know of any theory that could test for the presence of subjective experience based on i/o functions without circular reasoning like "we measured theta>threshold, which in our theory corresponds to conscious experience"; on the other hand, I'm biased toward thinking at least vertebrates are conscious, thanks to the neural structures enabling their neural functions
@axoaxonic These considerations should make me agree with the "unfolding" paper, though without an adversarial attitude towards IIT in particular, and without overlooking the details of the unfolding argument.
So for any i/o function, at least two structures can perform it: one implying positive phi and one with zero phi. There's no way to prove the one with high phi is conscious, but this unprovability would be there regardless, right?
If there were only one structure and it had high phi, the reasoning would still be circular
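To make this concrete, here's a toy sketch (mine, not from the paper) of the unfolding idea: a recurrent system, whose feedback loop gives it the kind of causal structure IIT cares about, and its feedforward "unfolding", which implements the exact same i/o function with no feedback at all. The difference between them is precisely the kind you could only detect by cracking open the robot:

```python
# Toy sketch of the unfolding construction (my own illustration, not
# code from the paper): a recurrent system and its feedforward
# unfolding compute the same i/o function on fixed-length inputs,
# even though their internal causal structures differ.

def recurrent(xs):
    """Recurrent structure: a state h is fed back at every step,
    so h causally depends on its own past (the kind of loop that
    can support phi > 0 in IIT)."""
    h = 0.0
    out = []
    for x in xs:
        h = 0.5 * h + x   # feedback connection
        out.append(h)
    return out

def feedforward(xs):
    """Unfolded structure: no feedback loop; each output at time t
    is computed directly from the inputs seen up to t (phi = 0 for
    a purely feedforward system, on IIT's account)."""
    return [sum(0.5 ** (t - i) * xs[i] for i in range(t + 1))
            for t in range(len(xs))]

xs = [1.0, 0.0, 2.0]
assert recurrent(xs) == feedforward(xs)  # identical i/o behaviour
```

From the outside (i/o only) the two are indistinguishable; structurally, one has a feedback loop and the other doesn't, which is exactly what opening up the hardware would reveal.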

@fab13 I don't feel too adversarial towards IIT; there are useful aspects to it, like how conscious experience has to be somehow related to someone's experience and the structures of their brain, etc., and information integration has interesting mathematical/information-theoretic aspects, although I find information geometry more interesting. It just doesn't really seem to say much about consciousness: I can't relate my level of integrated information to what it's like to be me, other than maybe how much experience I can have.

It also requires a concept space, which implies consciousness is semantically structured, which might relate more to thought than to phenomenal awareness. I believe, following ideas from Thomas Metzinger, Alva Noë, Daniel Hutto and others, that someone can be conscious of something through direct perceptual experience without having to have a concept for it. For example, an experiment could give people 3D-printed surreal objects very unlike anything they've encountered before: would they have to generate concepts about it, out of generalizations of past experienced objects, in order to be conscious of it? Or is simply letting them perceive and interact with it enough?