What is a non-functional account of consciousness supposed to be?

I’m a functionalist. I think the mind and consciousness are about what the brain does, rather than its particular composition, or some other attribute. Which means that if another system did the same or similar things, it would make sense to say it was conscious. Consciousness is as consciousness does.

Functionalism has some advantages over other meta-theories of consciousness. One is that since we’re talking about functionality, about capabilities, establishing consciousness in other species and systems is a matter of establishing what they can do. But it does require accepting that consciousness can come in gradations, and that “consciousness” is not a precise designation of which collection of functionality is required. So it means giving up primitivism about consciousness, accepting that rather than a single natural kind, it’s a hazy collection of many different kinds.

It’s worth pausing to be clear on what functionalism is. It’s about cause-effect relationships. These relationships can, in principle, be modeled by Ramsey sentences, a technique David Lewis adapted from Frank Ramsey, capable of capturing a causal sequence, or entire structures of such sequences. (Suzi Travis has an excellent post that includes an introduction to them.) At the heart of the entire enterprise are these cause-effect relations.
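To make that a bit more concrete, here’s a toy example of my own (greatly simplified, and not one taken from Lewis’s papers): take a mini-theory that defines pain by its causal role, then replace the mental term with a bound variable, so that only the causal structure remains.

```latex
% Toy theory T, defining "pain" by its causal role
% (C(a, b) abbreviates "a causes b"):
%   T(\mathrm{pain}) \equiv
%     C(\mathrm{tissue\ damage}, \mathrm{pain}) \wedge C(\mathrm{pain}, \mathrm{wincing})
% The Ramsey sentence existentially quantifies out the mental term,
% leaving only the network of cause-effect relations:
\exists x \, \big( C(\mathrm{tissue\ damage}, x) \wedge C(x, \mathrm{wincing}) \big)
```

Whatever state occupies that causal role counts as pain; nothing beyond the role is referenced.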

Of course, cause-effect relations are themselves emergent from the symmetrical (reversible) structural relations of more fundamental physics. Causes and effects attain their asymmetry due to the Second Law of Thermodynamics, the one that says entropy tends to increase over time. So another way to talk about functionalism is in terms of structural realism. Ultimately functionalism is about structural relations. (Something it took me a while to appreciate after discovering structural realism.)

Over the years, I’ve received a lot of different reactions to this position. Not a few aren’t sure what functionalism is. Some are outraged by the idea. Others equate it with behaviorism. (Unlike behaviorism, functionalism accepts the existence of intermediate states between stimuli and response.)

But occasionally someone responds that the idea is obvious and trivial. I think this response is interesting, because I basically agree. It is trivial, or it should be. I only started calling myself a functionalist because so many people insist that the real problem of consciousness isn’t about functionality.

Philosophers have long argued for a version of consciousness that is beyond functionality. Ned Block, when making his distinction between phenomenal and access consciousness, while admitting there were functional notions of phenomenal consciousness, argued for a version that was something other than functionality (or intentionality, which is also relational). And David Chalmers argues that solving the hard problem of consciousness isn’t about solving the structure and relations that science can usually get a handle on.

Anyone who’s known me for a while will be aware that I think these views are mistaken. But I have to admit something. Part of the reason I’m not enthusiastic about them is I don’t even know what a non-functional view of consciousness is supposed to be.

I understand old school interactionist dualism well enough. But in that case there are still causes and effects. It’s just that most of them are hidden from us in some kind of non-physical substrate. But the interaction in interactionist dualism should be detectable by science, and hasn’t been, which I think is why many contemporary non-physicalists gravitate to other options.

It’s when we get to views like property dualism and panpsychism that I start to lose understanding. We’re supposed to be talking about something beyond the functionality, beyond structure and relations, something that could be absent without making any difference in functionality (philosophical zombies), that could change without change in functionality (inverted qualia), or is in principle impossible to observe from any perspective other than the subject’s (Mary’s room). It’s not clear to me what exactly it is we’re talking about here.

This view has epiphenomenal implications: that consciousness is causally impotent, making no difference in the world. It’s interesting that the arguments to avoid this implication inevitably sneak functionality back into the picture. One option, explored by David Chalmers in his book The Conscious Mind, is that consciousness is causality, which strikes me as a very minimal form of functionalism. Another, one Chalmers favors, is the Russellian monist notion that consciousness, or proto-consciousness, sits in the intrinsic properties of matter, and is basically the causes behind the causes, which again seems to amount to a form of hidden functionalism.

But these arguments aside, it’s still unclear what exactly it is we’re talking about. It’s frequently admitted that no one can really say what it is. However, it’s typically argued that we can point to various examples to make it clear, such as the redness of an apple, the painfulness of a toothache, seeing black letters on a white page, the taste of a fruit juice, imagining the Eiffel Tower, etc.

The thing is, all of these examples strike me as examples of functionality. Redness is a distinction our visual system makes, marking something as distinct and of high salience, among other likely functions. A toothache obviously is a signal of a problem that needs to be dealt with. Seeing black letters on a white page involves pattern recognition to parse symbolic communication. The taste of a drink conveys information about that drink (good = keep drinking, bad = stop and maybe spit it out). And remembering past experiences or simulating possible new ones, like imagining the Eiffel Tower, has obvious adaptive benefits.

I’ve read enough philosophy to know the usual response: that I’m identifying the functional aspects of these experiences, but that the functional description leaves out something crucial. My question is, what? Of course, I know the typical response here too. It’s ineffable. It can’t be described or analyzed. OK, how do we know it’s there? Each of us supposedly has first person access to it. But I just indicated that my own first person access seems to indicate only functionality. Impasse.

So I’m a functionalist, not just because I think it’s a promising approach, but because I really don’t understand the alternatives. Could I be missing something? If so, what?

#consciousness #functionalism #phenomenalConsciousness #Philosophy #PhilosophyOfMind

Functionalism (Stanford Encyclopedia of Philosophy)

Is studying conscious experience different from studying behavior?

In a number of recent conversations I’ve had, the distinction between experience and behavior has come up. There’s a strong sentiment that we can study behavior scientifically, including all the intermediate mental states that enable it. But experience is seen as something distinct from that, something that is much more difficult to study.

This behavior / experience divide matches the distinction David Chalmers makes between the “easy” problems and the “hard” problem. The easy problems aren’t really easy, but they are scientifically tractable, mainly because they’re all about functionality, the functionality that enables behaviors such as self report, navigation, object recognition, etc. But the hard problem is the one of experience, which is hard mainly because it’s not supposed to be about functionality and behavior.

Of course, a lot here depends on what we mean by “experience”. There’s a grounded sense of the term, which is what we mean when someone has been through an activity or series of activities that allowed them to learn things. It’s what job recruiters mean when they advertise that a particular position needs X years of “experience”. In that sense, experience is about learning through behavior that enables and explains future behavior.

This doesn’t have to be something that’s stretched across years. If I walk to the mailbox, I have the sensorimotor experience of doing that, which will result in at least temporary memories of how sunny or cloudy a day it is, the temperature outside, whether it’s raining, whether the mail is running late today, etc., along with reviewing my preferences about these conditions. If I have the experience of a headache, or a tasty meal, we can talk about it in the same sense.

So this grounded sense of “experience”, which is structural, relational, and functional, seems to cover a lot of territory. The question is whether it covers all the territory, or if there’s a remaining aspect we’re leaving out.

Often philosophers will talk about “what it’s like”, the “raw feels”, the subjective character, the phenomenal properties, the qualities (qualia) of the experience. Of course, as with “experience”, there are grounded versions of what these phrases could be referring to. But that’s generally not what’s meant. Instead the sentiment is that this is something primal, indescribable, unanalyzable, and scientifically inaccessible, a brute fact of existence. It can’t be described, only referenced, with each of us accessing only our own private versions.

The view is that all the behaviors described above could, in principle, happen without this additional form of experience. The putative mystery is why we have these types of experiences at all. Thomas Nagel, who pioneered discussion of this sense of experience by asking what it is like to be a bat, agrees with many of the critics that evolution can only work with behavior and what enables it, not this private ineffable essence. His conclusion, then, is that evolution can’t explain experience in this sense. It’s a line of reasoning that makes the idea of latent experience existing everywhere appealing.

But there is a logical consequence of this view. It means experience is completely acausal, epiphenomenal, something that makes no difference in the world. Note that this would include the behavior of talking about it. Some advocates of the view, such as early Frank Jackson, embrace this implication. There is sometimes talk in this camp of concepts like psychophysical harmony, the idea that the experiential and physical exist in separate but parallel causal frameworks. But outside of a theistic type framework, it doesn’t seem like a parsimonious view. Which is probably why most seem to resist this implication, although the arguments for avoiding it aren’t clear to me.

The question then is whether this type of experience exists at all. It seems like once we’ve reasoned ourselves into seeing something making no difference in the world, we’ve essentially concluded it doesn’t exist, but aren’t quite willing to let it go.

The illusionists, as we discussed in the last post, say it doesn’t exist, but concede that it’s natural, even unavoidable, for us to think it does. In this view, we go wrong by trusting too much in our introspective judgments. The right move is to doubt those judgments. I’m sympathetic to this stance, but increasingly reluctant to concede that we all have an innate disposition to believe in this kind of experience.

For me, it seems more about optional assumptions we make, rather than any unavoidable species-wide instinct. For sure, we’re all born intuitive dualists, but this is usually of the old-fashioned Cartesian sort, the type that relegates memory, imagination, and all thought to the non-physical, not the more limited property dualism under discussion. It seems more likely that belief in this kind of experience results from remnants of those Cartesian intuitions. But maybe I’m just splitting hairs here.

I should note that denying this type of experience is not old school behaviorism. There’s no reason to deny internal mental states as the logical behaviorists did. Maybe in principle we could talk about everything in terms of behavioral dispositions, but it requires a lot of convoluted language. It’s much easier to just admit those internal states exist, as long as they’re causal ones. Which I think is the main reason analytic and empirical functionalism emerged as successors to behaviorism.

Someone could continue to believe in non-behavioral experience and just take the stance that the science is valid for the behavioral portions, but not addressing the aspects they’re interested in. This is the sense I get from someone like David Chalmers. Chalmers basically seems like a functionalist, but with an extra metaphysical assumption of something else that “coheres” with the functionality. It allows him to accept the possibility of conscious AI and simulated realities without going full physicalist. I see similar stances from some panpsychists.

Of course, that’s not universal. And holding on to the non-behavioral version seems to affect the types of scientific theories someone finds plausible. It’s why theories like integrated information theory are more popular among panpsychists than straight functional ones like global workspace, higher order thought, etc.

Overall, studying behavior, along with everything that enables it, seems like a productive enterprise. If there is an aspect of experience unrelated to behavior, then it seems like an unsolvable metaphysical problem, something we’ll only ever be able to speculate about. To me, it seems exceedingly vulnerable to Occam’s razor.

But maybe I’m missing something. Are there reasons for accepting non-behavioral experience that I’m overlooking? Are there solid arguments that allow experience to be scientifically inaccessible yet not epiphenomenal? If it is epiphenomenal, is there any way for science to ever get at it? Or even philosophy, in any conclusive manner?

Featured image credit: File:Desmo-boden (cropped).jpg (Wikipedia)

https://selfawarepatterns.com/2024/09/15/experience-and-behavior/

#consciousness #phenomenalConsciousness #Philosophy #PhilosophyOfMind
