🧠✹ Unconscious experiences – How the #brain processes information

This short film from our #Zoomposium with #GodehardBrĂŒntrup highlights how #unconscious experiences, #attention, and expectation structures interact – and what this reveals about how our brain works and our self-image.

📎 https://philosophies.de/index.php/2022/09/27/metaphysik-des-bewusstseins/

đŸŽ„ https://youtu.be/hoqiI_TElv4

#Consciousness #Cognition #PhilosophyOfMind #PhenomenalConsciousness #CognitiveNeuroscience #Perception #Thinking

The secret role of #pain – More than just a #feeling đŸ€’ ✹

What if pain were more than just a signal from the #body – what if it were a fundamental condition of #consciousness itself?

In our #Zoomposium with #ThomasFuchs, we discuss the fascinating question of why pain not only “hurts” but is also functionally necessary.

#Embodiment #Phenomenology #PhenomenalConsciousness #EmbodiedConsciousness #PhilosophyOfMind #CognitiveNeuroscience

🧠 #Cognition is embodied đŸ«ƒ

Perception, thought, and feeling are inextricably linked to the #body.

In the #Zoomposium with Thomas Fuchs, we talk about #embodiment, #consciousness, and #AI—and ask: Does the #body play a role in decision-making?

đŸ“œ Interview: https://youtu.be/1ouxs6P3Enc

📎 Information: https://philosophies.de/index.php/2022/11/20/das-verkoerperte-bewusstsein/

#ThomasFuchs #Phenomenology #PhenomenalConsciousness #EmbodiedConsciousness #PhilosophyOfMind #CognitiveNeuroscience #CognitiveScience #Enactivism #Neuroconstructivism

What is it like to be you?

In a landmark 1974 paper, Thomas Nagel asked what it’s like to be a bat, and argued that we can never know. I’ve expressed my skepticism about the phrase “what it’s like” or “something it is like” before, and that skepticism still stands. I think a lot of people nod at it, seeing it as self-explanatory, while holding disparate views about what it actually means.

As a functionalist and physicalist, I don’t think there are any barriers in principle to us learning about the experience of bats. So in that sense, I think Nagel was wrong. But he was right in a different sense. We can never have the experience of being a bat.

We might imagine hooking up our brain to a bat’s and doing some kind of mind meld, but the best we could ever hope for would be to have the experience of a combined person and bat. Even if we somehow transformed ourselves into a bat, we would then just be a bat, with no memory of our human desire to have a bat’s experience. We can’t take on a bat’s experience, with all its unique capabilities and limitations, while remaining us.

But the situation is even more difficult than that. The engineers hooking up our brain to a bat’s would have to make a lot of implementation decisions. What parts of the bat’s brain are connected to what parts of ours? Is any translation in the signaling necessary? What if several approaches are possible to give us the impression of accessing the bat’s brain? Is there any fact of the matter on which would be “the right one”?

Ultimately the connection between our brain and the bat’s would be a communication mechanism. We could never bypass that mechanism to get to the “real experience” of the bat, just as we can never bypass the communication we receive from each other when we discuss our mental states.

Getting back to possible meanings of WIL (what it’s like), Nagel makes an interesting clarification in his 1974 paper (emphasis added):

But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism.

This seems like a crucial stipulation. It is like something to be a rock. It’s like other rocks, particularly of the same type. But it’s not like anything for the rock. (At least for those of us who aren’t panpsychists.) This implies an assumption of some degree of metacognition, of introspection, of self-reflection. The rock has overall-WIL, but no reflective-WIL.

Are we sure bats have reflective-WIL? Maybe it isn’t like anything to be a bat for the bat itself.

There is evidence for metacognition in mammals and birds, including rats. The evidence is limited and subject to alternate interpretations. Do these animals display uncertainty because they understand how limited their knowledge is? Or because they’re just uncertain? The evidence seems more conclusive in primates, mainly because the tests can be sophisticated enough to more thoroughly isolate metacognitive abilities.

It seems reasonable to conclude that if bats (flying rats) do have metacognition, it’s much more limited than what exists in primates, much less humans. Still, that would give them reflective-WIL. It seems like their reflective-WIL would be a tiny subset of their overall-WIL, perhaps a very fragmented one.

Strangely enough, in the scenario where we connected our brain to a bat’s, it might actually allow us to experience more of their overall-WIL than what they themselves are capable of. Yes, it would be subject to the limitations I discussed above. But then a bat’s access to its overall-WIL would be subject to similar implementation limitations, just with the “decisions” made by evolution rather than engineers.

These mechanisms would have evolved, not to provide the bat with the most complete picture of its overall-WIL, but with whatever enhances its survival and genetic legacy. Maybe it needs to be able to judge how good its echolocation image is for particular terrain before deciding to fly in that direction. That assessment needs to be accurate enough to keep it from flying into a wall or other hazards, but not necessarily accurate enough to give it a faithful model of its own mental operations.

Just like in the case of the brain link, bats have no way to bypass the mechanisms that provide their limited reflective-WIL. The parts of their brain that process reflective-WIL would be all they know of their overall-WIL. At least unless we imagine that bats have some special non-physical acquaintance with their overall-WIL. But on what grounds should we assume that?

We could try taking the brain interface discussed above and looping it back to the bat. Maybe we could use it to expand their self-reflection, by reflecting the brain interface signal back to them. Of course, their brain wouldn’t have evolved to handle the extra information, so it likely wouldn’t be effective unless we gave them additional enhancements. But now we’re talking about upgrading the bat’s intelligence, “uplifting” them to use David Brin’s term.

What about us? Our introspective abilities are much more developed than anything a bat might have. They’re much more comprehensive and recursive, in the sense that we not only can think about our thinking, but think about the thinking about our thinking. And if you understood the previous sentence, then you can think about your thinking of your thinking of
 well, hopefully you get the picture.

Still, if our ability to reflect is also composed of mechanisms, then we’re subject to the same “implementation decisions” evolution had to make as our introspection evolved, some of which were likely inherited from our rat-like ancestors. In other words, we have good reason to view it as something that evolved to be effective rather than necessarily accurate, mechanisms we are no more able to bypass than the bat can for theirs.

Put another way, our reflective-WIL is also a small subset of our overall-WIL. Aside from what third person observation can tell us, all we know about overall-WIL is what gets revealed in reflective-WIL.

Of course, many people assume that now we’re definitely talking about something non-physical, something that allows us to have more direct access to our overall-WIL, that our reflective-WIL accurately reflects at least some portion of our overall-WIL. But again, on what basis would we make that assumption? Because reflective-WIL seems like the whole show? How would we expect it to be different if it weren’t the whole show?

Put yet another way, the limitation Nagel identifies in our ability to access a bat’s experience seems similar to the limitation we have accessing our own. Any difference seems like just a matter of degree.

What do you think? Are there reasons to think our access to our own states is more reliable than I’m seeing here? Aside from third party observation, how can we test that reliability?

#Consciousness #introspection #metacognition #phenomenalConsciousness #Philosophy #PhilosophyOfMind

Dr Peter Sjöstedt-Hughes

Philosopher of Mind and Metaphysics


What is a non-functional account of consciousness supposed to be?

I’m a functionalist. I think the mind and consciousness are about what the brain does, rather than its particular composition, or some other attribute. Which means that if another system did the same or similar things, it would make sense to say it was conscious. Consciousness is as consciousness does.

Functionalism has some advantages over other meta-theories of consciousness. One is that since we’re talking about functionality, about capabilities, establishing consciousness in other species and systems is a matter of establishing what they can do. But it does require accepting that consciousness can come in gradations, and that “consciousness” is not a precise designation of which collection of functionality is required. So it means giving up primitivism about consciousness, accepting that rather than a single natural kind, it’s a hazy collection of many different kinds.

It’s worth pausing to be clear on what functionalism is. It’s about cause-effect relationships. These relationships can, in principle, be modeled by Ramsey sentences, a technique David Lewis adapted from Frank Ramsey, which models a causal sequence, or entire structures of those sequences. (Suzi Travis has an excellent post which includes an introduction to them.) At the heart of the entire enterprise are these cause-effect relations.
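To make the causal-role idea concrete, here’s a toy sketch of how a Ramsey sentence works (my own illustration, not Lewis’s actual formulation). Take a simple folk theory of pain: tissue damage causes pain, pain causes wincing, and pain causes a desire for relief. Replace the mental term “pain” with a bound variable, leaving only the cause-effect structure:

    ∃x [ (tissue damage causes x) ∧ (x causes wincing) ∧ (x causes a desire for relief) ]

Pain is then whatever state occupies the x role. Any system with a state standing in those causal relations, whatever its composition, counts as being in pain on this view.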

Of course, cause-effect relations are themselves emergent from the symmetrical (reversible) structural relations of more fundamental physics. Causes and effects attain their asymmetry due to the Second Law of Thermodynamics, the one that says entropy always increases. So another way to talk about functionalism is in terms of structural realism. Ultimately functionalism is about structural relations. (Something it took me a while to appreciate after discovering structural realism.)

Over the years, I’ve received a lot of different reactions to this position. More than a few people aren’t sure what functionalism is. Some are outraged by the idea. Others equate it with behaviorism. (Unlike behaviorism, functionalism accepts the existence of intermediate states between stimuli and response.)

But occasionally someone responds that the idea is obvious and trivial. I think this response is interesting, because I basically agree. It is trivial, or it should be. I only started calling myself a functionalist because so many people insist that the real problem of consciousness isn’t about functionality.

Philosophers have long argued for a version of consciousness that is beyond functionality. Ned Block, when making his distinction between phenomenal and access consciousness, while admitting there were functional notions of phenomenal consciousness, argued for a version that was something other than functionality (or intentionality, which is also relational). And David Chalmers argues that solving the hard problem of consciousness isn’t about solving the structure and relations that science can usually get a handle on.

Anyone who’s known me for a while will be aware that I think these views are mistaken. But I have to admit something. Part of the reason I’m not enthusiastic about them is I don’t even know what a non-functional view of consciousness is supposed to be.

I understand old school interactionist dualism well enough. But in that case there are still causes and effects. It’s just that most of them are hidden from us in some kind of non-physical substrate. But the interaction in interactionist dualism should be detectable by science, and hasn’t been, which I think is why many contemporary non-physicalists gravitate to other options.

It’s when we get to views like property dualism and panpsychism that I start to lose understanding. We’re supposed to be talking about something beyond the functionality, beyond structure and relations, something that could be absent without making any difference in functionality (philosophical zombies), that could change without change in functionality (inverted qualia), or is in principle impossible to observe from any perspective other than the subject’s (Mary’s room). It’s not clear to me what exactly it is we’re talking about here.

This view has epiphenomenal implications, that consciousness is causally impotent, making no difference in the world. It’s interesting that the arguments to avoid this implication inevitably sneak functionality back into the picture. One option, explored by David Chalmers in his book The Conscious Mind, is that consciousness is causality, which strikes me as a very minimal form of functionalism. Another, one Chalmers favors, is the Russellian monist notion that consciousness, or proto-consciousness, sits in the intrinsic properties of matter, and is basically the causes behind the causes, which again, seems to amount to a form of hidden functionalism.

But these arguments aside, it’s still unclear what exactly it is we’re talking about. It’s frequently admitted that no one can really say what it is. However, it’s typically argued that we can point to various examples to make it clear, such as the redness of an apple, the painfulness of a toothache, seeing black letters on a white page, the taste of a fruit juice, imagining the Eiffel tower, etc.

The thing is, all of these examples strike me as examples of functionality. Redness is a distinction our visual system makes, making something distinct and of high salience, among other likely functions. A toothache obviously is a signal of a problem that needs to be dealt with. Seeing black letters on a white page involves pattern recognition to parse symbolic communication. The taste of a drink conveys information about that drink (good = keep drinking, bad = stop and maybe spit out). And remembering past experiences or simulating possible new ones, like imagining the Eiffel tower, has obvious adaptive benefits.

I’ve read enough philosophy to know the usual response. That I’m identifying the functional aspects of these experiences, but that the functional description leaves out something crucial. My question is, what? Of course, I know the typical response here too. It’s ineffable. It can’t be described or analyzed. Ok, how do we know it’s there? Each of us supposedly has first person access to it. But I just indicated that my own first person access seems to indicate only functionality. Impasse.

So I’m a functionalist, not just because I think it’s a promising approach, but because I really don’t understand the alternatives. Could I be missing something? If so, what?

#consciousness #functionalism #phenomenalConsciousness #Philosophy #PhilosophyOfMind


Manifest and fundamental consciousness

I think the problem of consciousness is primarily one of definition. The word “consciousness” can refer to a range of concepts. Some of the concepts are scientifically tractable, while others, once we clarify them, are metaphysical assumptions that we can either choose to hold or dismiss. This is one of the reasons I find exploring and delineating these different concepts productive.

One distinction that’s been around for a few decades is Ned Block’s between phenomenal consciousness and access consciousness. Access consciousness is the use of information for cognitive purposes, such as memory, attention, discrimination, self-report, etc. Phenomenal consciousness is described as “raw experience”, the “what it’s like” aspect of consciousness, the character of the experience.

Access consciousness is the scientifically tractable version. But what about phenomenal consciousness? One of my concerns with the concept is it is itself ambiguous. In my view, “phenomenal consciousness” can refer to one of at least two concepts.

One is what I would call “manifest consciousness”, consciousness as it seems to us from the inside. Manifest consciousness seems irreducible, ineffable, and private. Indeed, strictly from a subjective perspective, it is irreducible. I can’t, from the inside, break down my experience of redness into any components. It’s just there. Describing it seems difficult. And it seems private to me. Yet I myself seem to have unfettered access to it.

Manifest consciousness is the seeming before any theoretical commitments. I think manifest consciousness is what Eric Schwitzgebel was aiming for when he developed his “innocent” definition of phenomenal consciousness. I do know it’s what I meant by the term in older posts, prior to deciding that, without clarification, it’s a misleading use of it.

The problem is that most philosophers, both illusionists and phenomenal realists, seem to have a stronger meaning in mind. There are many theories about consciousness. One of the most straightforward is that the reality implied by the appearance is true, that manifest consciousness is a fundamental reality. Let’s call this “fundamental consciousness”.

Fundamental consciousness is the theory that consciousness not only seems irreducible, but is. That it’s not only difficult to describe, but impossible. That it’s not only difficult to observe from the outside, but fundamentally impossible. Which means that our first person access to it is privileged in some metaphysical manner.

I think manifest consciousness is what illusionists say is the illusion of fundamental consciousness. When they deny phenomenal consciousness, they aren’t denying manifest consciousness, but fundamental consciousness. But for weak phenomenal realists, phenomenal consciousness just is manifest consciousness.

On the other hand, strong phenomenal realists deny that there is any distinction between manifest and fundamental consciousness. For them, they are one and the same. So any denial of fundamental consciousness they take to be a denial of manifest consciousness, which seems incoherent.

This distinction can also be applied to synonymous concepts like qualia. When I argued for the existence of qualia some years ago, I was arguing for the manifest version, not the fundamental one. When I largely stopped using terms like “qualia” and “phenomenal” (except in replying to others using them), it was to avoid the confusion between these different versions.

Of course, as a reductionist, I think there are better theories than the fundamental one. In particular, we can see the concept of access consciousness itself as a meta-theory to explain manifest consciousness.

In any case, it seems like a lot of arguing past each other could be avoided if we acknowledged these distinct concepts. Most of the debate is about different theories of consciousness, not whether the manifest version exists.

But maybe I’m missing something? Are manifest and fundamental consciousness more difficult to separate than I’m thinking? Or are there additional distinctions we could use to further delineate the concept of phenomenal consciousness?


#Consciousness #FundamentalConsciousness #ManifestConsciousness #phenomenalConsciousness #Philosophy #PhilosophyOfMind #Qualia


Is studying conscious experience different from studying behavior?

In a number of recent conversations I’ve had, the distinction between experience and behavior has come up. There’s a strong sentiment that we can study behavior scientifically, including all the intermediate mental states that enable it. But experience is seen as something distinct from that, something that is much more difficult to study.

This behavior / experience divide matches the distinction David Chalmers makes between the “easy” problems and the “hard” problem. The easy problems aren’t really easy, but they are scientifically tractable, mainly because they’re all about functionality, the functionality that enables behaviors such as self report, navigation, object recognition, etc. But the hard problem is the one of experience, which is hard mainly because it’s not supposed to be about functionality and behavior.

Of course, a lot here depends on what we mean by “experience”. There’s a grounded sense of the term, which is what we mean when someone has been through an activity or series of activities that allowed them to learn things. It’s what job recruiters mean when they advertise that a particular position needs X years of “experience”. In that sense, experience is about learning through behavior that enables and explains future behavior.

This doesn’t have to be something that’s stretched across years. If I walk to the mailbox, I have the sensorimotor experience of doing that, which will result in at least temporary memories of how sunny or cloudy a day it is, the temperature outside, whether it’s raining, whether the mail is running late today, etc, along with reviewing my preferences about these conditions. If I have the experience of a headache, or a tasty meal, we can talk about it in the same sense.

So this grounded sense of “experience”, which is structural, relational, and functional, seems to cover a lot of territory. The question is whether it covers all the territory, or if there’s a remaining aspect we’re leaving out.

Often philosophers will talk about “what it’s like”, the “raw feels”, the subjective character, the phenomenal properties, the qualities (qualia) of the experience. Of course, as with “experience”, there are grounded versions of what these phrases could be referring to. But that’s generally not what’s meant. Instead the sentiment is that this is something primal, indescribable, unanalyzable, and scientifically inaccessible, a brute fact of existence. It can’t be described, only referenced, with each of us accessing only our own private versions.

The view is that all the behaviors described above could, in principle, happen without this additional form of experience. The putative mystery is why we have these types of experiences at all. Thomas Nagel, a pioneer in discussing this sense of experience, beginning with asking what it’s like to be a bat, agrees with many of the critics that evolution can only work with behavior and what enables it, not this private ineffable essence. His conclusion then is that evolution can’t explain experience in this sense. It’s a line of reasoning that makes the idea of latent experience existing everywhere appealing.

But there is a logical consequence of this view. It means experience is completely acausal, epiphenomenal, something that makes no difference in the world. Note that this would include the behavior of talking about it. Some advocates of the view, such as early Frank Jackson, embrace this implication. There is sometimes talk in this camp of concepts like psychophysical harmony, the idea that the experiential and physical exist in separate but parallel causal frameworks. But outside of a theistic type framework, it doesn’t seem like a parsimonious view. Which is probably why most seem to resist this implication, although the arguments for avoiding it aren’t clear to me.

The question then is whether this type of experience exists at all. It seems like once we’ve reasoned ourselves into seeing something making no difference in the world, we’ve essentially concluded it doesn’t exist, but aren’t quite willing to let it go.

The illusionists, as we discussed in the last post, say it doesn’t exist, but concede that it’s natural, even unavoidable, for us to think it does. In this view, we go wrong by trusting too much in our introspective judgments. The right move is to doubt those judgments. I’m sympathetic to this stance, but increasingly reluctant to concede that we all have an innate disposition to believe in this kind of experience.

For me, it seems more about optional assumptions we make, rather than any unavoidable species wide instinct. For sure, we’re all born intuitive dualists, but this is usually of the old fashioned Cartesian sort, the type that relegates memory, imagination, and all thought to the non-physical, not the more limited property dualism under discussion. It seems more likely it results from remnants of those Cartesian intuitions. But maybe I’m just splitting hairs here.

I should note that denying this type of experience is not old school behaviorism. There’s no reason to deny internal mental states as the logical behaviorists did. Maybe in principle we could talk about everything in terms of behavioral dispositions, but it requires a lot of convoluted language. It’s much easier to just admit those internal states exist, as long as they’re causal ones. Which I think is the main reason analytic and empirical functionalism emerged as successors to behaviorism.

Someone could continue to believe in non-behavioral experience and just take the stance that the science is valid for the behavioral portions, but not addressing the aspects they’re interested in. This is the sense I get from someone like David Chalmers. Chalmers basically seems like a functionalist, but with an extra metaphysical assumption of something else that “coheres” with the functionality. It allows him to accept the possibility of conscious AI and simulated realities without going full physicalist. I see similar stances from some panpsychists.

Of course, that’s not universal. And holding on to the non-behavioral version seems to affect the types of scientific theories someone finds plausible. It’s why theories like integrated information theory are more popular among panpsychists than straight functional ones like global workspace, higher order thought, etc.

Overall, studying behavior, along with everything that enables it, seems like a productive enterprise. If there is an aspect of experience unrelated to behavior, then it seems like an unsolvable metaphysical problem, something we’ll only ever be able to speculate about. To me, it seems exceedingly vulnerable to Occam’s razor.

But maybe I’m missing something. Are there necessities to accepting non-behavioral experience I’m overlooking? Are there solid arguments that allow experience to be scientifically inaccessible yet not epiphenomenal? If it is epiphenomenal, is there any way for science to ever get at it? Or even philosophy in any conclusive manner?


https://selfawarepatterns.com/2024/09/15/experience-and-behavior/

#consciousness #phenomenalConsciousness #Philosophy #PhilosophyOfMind


In the last thread, someone asked what exactly is it about consciousness that illusionists say is illusory?

One quick answer is that what’s illusory, for illusionists, are the properties people find in experience that incline us to think consciousness poses a metaphysically hard problem. In weak illusionism, the properties aren’t what they seem. In the strong version, which is usually what “illusionism” refers to, they don’t exist at all. But what exactly are these properties?

I’m a functionalist, someone who sees conscious experiences, and mental states overall, as more about what they do, the causal roles they play, than about any particular substance or constitution. It’s a view that I think provides a necessary explanatory layer between the mental and the physical, and so sees no barrier in principle to a full understanding of the relationship between them.

The usual argument against functionalism is that it doesn’t seem to account for qualia, the properties of phenomenal consciousness, the “what it’s like” nature of subjective experience, such as the redness of a red apple or the painfulness of a toothache. Most functionalists, if they use the term, argue that qualia can be described functionally, such as pain being an automatic evaluation of a problem with a part of the body.

However philosophers have a number of thought experiments which claim to show that qualia and physics, including functionality, can be separated. This is where the illusionists come in. They argue qualia don’t exist, that the illusion is our impression that they do.

But that raises the question. What exactly are qualia? I gave the standard definition above, but it seems inadequate to settle this debate. The SEP article on qualia discusses four different versions, the simplest of which might be compatible with functionalism, but others that aren’t.

Daniel Dennett, in his 1988 Quining Qualia paper, a famous attack on the concept of qualia, provides the illusionist understanding, by noting four attributes commonly assigned to them. Summed up in the qualia Wikipedia article, they are:

  • ineffable – they cannot be communicated, or apprehended by any means other than direct experience.
  • intrinsic – they are non-relational properties, which do not change depending on the experience’s relation to other things.
  • private – all interpersonal comparisons of qualia are systematically impossible.
  • directly or immediately apprehensible by consciousness – to experience a quale is to know one experiences a quale, and to know all there is to know about that quale.
Dennett’s description of qualia is often decried as a strawman, something he constructs to easily knock down. However, we only have to look at the most popular qualia thought experiments to see these attributes confirmed. For example, consider Frank Jackson’s knowledge argument as described through the Mary’s Room thought experiment.

    Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black-and-white room via a black-and-white television monitor. She specializes in the neurophysiology of vision and acquires all the physical information there is to obtain about what goes on when we see ripe tomatoes or the sky and use terms like “red”, “blue”, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence “The sky is blue.” What happens when Mary is released from her black-and-white room or is given a color television monitor? Does she learn anything new or not? Jackson claims that she does.

Within the assumptions of this scenario, why is Mary unable to acquire the information she can only learn by having the experience? She supposedly can’t read descriptions of it, because no one can provide that, matching Dennett’s ineffable attribute. She can’t conduct experiments to detect it, because it’s scientifically inaccessible, meeting the private attribute. And Jackson argues that qualia are epiphenomenal, which seems to meet Dennett’s intrinsic attribute.

The same attributes are implied with the inverted spectrum concept, the idea that there’s no way to know if my experience of red looks like yours of green and vice versa. The fact that we seem unable to describe our experiences of color to each other, that they’re ineffable, private, and intrinsic, is what gives this scenario life. Likewise, the absent qualia / zombie argument, the idea of a being physically or behaviorally equivalent to a conscious one, but not itself conscious, only works if there’s no way to observe or deduce whether qualia are present.

And yet, in all these scenarios, the subject themselves still has first person access to these phenomenal properties. For that access not to be prevented by the other attributes, it has to be special in some way (according to David Chalmers, in some non-causal manner), which gives us Dennett’s directly apprehensible property.

    So the advocates of these thought experiments and the illusionists seem to agree on what qualia are. They just disagree on whether they’re real. Functionalists and other physicalists, if they use terms like “qualia” or “phenomenal properties,” are referring to a concept with fewer theoretical commitments. Do Dennett’s attributes show up in these more reserved versions? As Dennett himself covers in the last section of his Quining Qualia paper, it becomes a matter of “in principle” vs “in practice”.

    For ineffability, no one thinks describing experiences like the redness of red in a functional manner is obvious or easy, although it can be done to at least some extent, starting with the distinctiveness and high saliency of redness. For many experiences, the phrase, “a picture is worth a thousand words,” comes to mind. It might involve so much effort that it’s ineffable in practice, even if not in principle.

    And there are limitations on our access to how these experiences are constructed. They are cognitively impenetrable, for the simple reason that it was never adaptive for our ancestors to be able to access the early processing, such as all the underlying associations and affects that red stimuli trigger, but which figure in what the experience of red feels like. That makes a full description of the experience impossible with introspective information alone.

    Mental content, until recently, was private due to technological limitations and our lack of knowledge about the brain. It still effectively is in virtually all cases. But with the progress in brain scanning technologies, we’re seeing the first cracks in this attribute. We still have a long way to go, but even though it’s early days, the idea that mental content is in a separate realm and utterly inaccessible seems less defensible with each passing year.

    Without absolute ineffability or privacy, it’s not necessary to bring in direct apprehension. Which isn’t to say that we don’t have privileged internal access in practice. But it’s similar to the access the processors in the device you’re using right now have to read and write internal state that isn’t easily observable from the outside.
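    The analogy can be sketched with a toy Python class (the names here are purely illustrative, not anything from the post): the object’s own methods read its internal state directly, while outside code normally sees only the public interface and the downstream behavior, much as external observers see a processor’s outputs but not its registers.

```python
class Device:
    """Toy analogy for privileged internal access."""

    def __init__(self):
        # Name-mangled attribute: the Python stand-in for internal
        # state that outside code isn't meant to read directly.
        self.__register = 42

    def introspect(self):
        # "Privileged access": the object's own methods can read
        # the internal state directly.
        return self.__register

    def visible_behavior(self):
        # Outside observers only see the downstream effects.
        return "high" if self.__register > 10 else "low"


d = Device()
print(d.introspect())        # the inside view
print(d.visible_behavior())  # the outside view
# Reading d.__register from outside the class raises AttributeError,
# since Python stores it under a mangled name.
```

    The point of the analogy is only that privileged access can be an ordinary engineering fact about a system’s architecture, with nothing metaphysically special about it.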

    And then there’s intrinsicality. Achieving functional descriptions of conscious experience typically requires looking at the upstream causes and downstream effects of what we think of as the experience. Intrinsicality assumes that there’s still something in between, something that remains with intrinsic properties, something distinct from the causal chain: a place where the prior causes culminate in the presentation, and from which the downstream effects flow, with some aspects still conceivably epiphenomenal. The functional shift here is to regard the experience as the whole causal chain, a more plausible stance in a massively parallel system with no central control point.

    Clarifying these attributes as difficulties in practice, rather than absolute limitations in principle, both explains our impressions of them, and transforms conscious experience from an intractable metaphysical problem to a series of scientific ones.

    This is one of the reasons I used to resist the illusionist label, and still prefer the functionalist one. The difference doesn’t seem that vast (a point David Lewis made in 1995), and mostly seems to amount to a lack of nuance in our initial understanding, rather than some deep unavoidable species-wide misperception.

    And yet for a significant portion of the population, the strong intuition is that a functional description, while explaining behavior, still leaves out something important for experience. And here we run into an intuition clash. For someone convinced that an ineffable, metaphysically private aspect remains, it doesn’t seem like something science can demonstrate is or isn’t there. It becomes an extra assumption some people hold and others don’t.

    Which seems to leave us in the strange place where the two views are empirically identical, and the debate a purely philosophical one.

    Unless of course I’m missing something. What do you think? Are functionalists overreaching for a non-gap explanation? Are there fact-of-the-matter differences between illusionism and functionalism I’m overlooking? And are there ways to demonstrate the reality or non-reality of ineffable private qualities?

    https://selfawarepatterns.com/2024/08/31/illusionism-and-functionalism/

    #Consciousness #functionalism #illusionism #phenomenalConsciousness #Philosophy #PhilosophyOfMind #Qualia