Is the eliminative stance productive?

A number of recent conversations, some I’ve been in and others I’ve only witnessed, left me thinking about eliminative views like the strong illusionism of Keith Frankish and Daniel Dennett. On this view, access consciousness, the availability of information for verbal report, reasoning, and behavior, exists. But phenomenal consciousness, the qualia, the “what it’s like” aspect of experience, doesn’t.

The problem with this view has always been clarifying what exactly is being denied. This seems complicated by the fact that terms like “phenomenal” and “qualia” have a number of different meanings. For example, many people use “qualia” to refer to something in the vicinity of the primary and secondary qualities discussed by early modern thinkers like Galileo and John Locke.

(Primary qualities include size, shape, duration, motion, etc. These are perceived properties understood to actually be out in the world. Secondary qualities include color, sweet, bitter, hot, cold, etc. These are argued to only exist in the mind, or at least only exist because of minds.)

These types of qualities definitely exist, and serve functional roles. Imagine a yellow elephant with green polka dots. Unless you’re aphantasic, I’m betting you had no trouble picturing it, even though I doubt you’ve ever seen a yellow elephant with green polka dots. (I was careful to make sure the featured image had different colors.) But, unless you’re blind, you have seen yellow things, green things, polka dotted patterns, and elephants before. You were able to combine these characteristics, these qualities, based on your familiarity with them.

Of course, the illusionists are denying a stronger claim. David Lewis, in asking whether materialists should believe in qualia, discussed the functional aspect I described above, a version he saw as compatible with materialism. But there’s another proposition regarding qualia that he discussed: the idea that we can know their full nature solely through self-reflection.

I think it’s this assumption that causes the trouble. If we can introspect the full nature of qualia, then their seeming simplicity is irreducible simplicity, which implies they exist separate from the operations of the brain, allowing space for talk of inverted qualia and the absent qualia of zombies. And since no one can detect anything like that in the brain, they must be unobservable to anyone but the subject, who has special “direct” access, resulting in the intuitions behind Mary’s room.

This is the version Daniel Dennett attacked in his 1988 paper, “Quining Qualia.” But Dennett did more than just attack the concept; he attacked the term “qualia,” a practice other illusionists have followed. It’s not enough to attack the idea. The “tangled theoretical knot” of the terms themselves must go. Or at least that’s the argument.

But this causes a problem. There is widespread confusion about what exactly is being denied. For many people, terms like “qualia”, “phenomenal properties”, or “what it’s like” refer to the functional notion, the one we use to imagine weirdly colored animals. So when they see these terms attacked, it sounds like the basic concept is being denied.

The results over the years seem to have been endless conversations with the illusionists trying to clarify exactly what they mean. And yes, not all the confusion is genuine; some people use the conceptual confusion as a rhetorical weapon. But the very fact that it is such an effective weapon speaks to the confusion for anyone not familiar with the history.

Does this mean we should try to rehabilitate “qualia” and related terms? I personally stopped using them a few years ago, specifically due to the definitional confusion. For a long time I thought I was aligned with Pete Mandik’s qualia quietism, an idea I took to mean that these terms were best avoided due to the disparate definitions out there. There are always other ways to talk about the perception of characteristics.

But qualia quietism seems to take a stronger stance against this language than I do. I don’t use the terms, but I’m not going to scold someone who does. For better or worse, they seem to have spread beyond obscure philosophical discussions. Instead I’ll typically try to figure out which sense they’re using them in, and deal with the concept they’re discussing. That said, qualia quietism remains the neo-Dennettian view I’m closest to.

But I’ve come to think that being intolerant of terms like “qualia”, “phenomenal”, “what it’s like”, and similar labels draws the battle lines in the wrong place, in a way that sows confusion and produces a message that is easy to strawman. Perceptual qualities exist, at least in a representational and relational sense. This shouldn’t be a problematic admission for a physicalist.

Dennett noted in his 1988 paper (second endnote) that the difference between a reductive physicalist and an eliminative one is tactical, a difference in communication approaches. His goal was to confront people’s intuitions and try to force a reexamination. That seems to work well with some of us, who were already predisposed to agree with this ontology. But it seems to generate summary dismissal from everyone else.

Of course, a physicalist does need to deny the idea that we have introspective access to the full nature of our experience, that we’re perceiving something other than just the tip of the iceberg. Dennett compared these tips to the icons on a computer desktop, calling them a user illusion, but the actual software term seems less judgmental: user interface; experience is the brain’s user interface to its own operations. As Lewis argues, this is still eliminative, but look at how little is being eliminated.

All of which is why I prefer to just call myself a functionalist. It emphasizes more what I think is the case, causal roles, rather than what isn’t. Of course, with developments in AI, functionalism is becoming just as much a target. But in my experience it doesn’t generate the same visceral outrage.

What do you think? Am I overlooking benefits to the eliminative approach? Or missing vulnerabilities to just emphasizing functionality? Or worrying about something that doesn’t really make that much difference?

#Consciousness #eliminativeMaterialism #functionalism #illusionism #Mind #Philosophy #PhilosophyOfMind #QualiaQuietism

Here is where I stand on the issue of silicon consciousness:

I suspect consciousness is pattern-based. The underlying process matters more than the material it runs on.

But embodiment still shapes how experience feels from the inside.

A silicon mind might exist, but its phenomenology could be very different from ours: another way the same cosmic process learns to experience itself.

#philosophy
#consciousness
#functionalism
#phenomenology
#AI

Thought experiment: Your brain is slowly replaced with silicon neurons that behave exactly the same. Nothing in your experience changes.

Answer: What matters for mind is FUNCTION, not substance. Mind is software. Biology is just the first operating system.

#philosophy
#functionalism
#transhumanism

Biological computation and the nature of software

A new paper has been getting some attention. It makes the case for biological computation. (This is a link to a summary, but there’s a link to the actual paper at the bottom of that article.)

Characterizing the debate between computational functionalism and biological naturalism as one between camps that are hopelessly dug in, the authors propose that the brain does do computation, but of a very different kind from the type done in the device you’re using to read this, which they call “biological computation.”

The differences are that biological computation is a hybrid between digital (discrete) and analog (continuous) computing; that there is no clean division between software and hardware, between algorithms and implementation; and that metabolism and energy constraints shape the processing that happens. They sum it up as: in the brain, the algorithm is the substrate.

The authors argue that to build artificially conscious systems, it may be necessary to go with a different physical ontology, one that is closer to the way biology works.

Let me start by saying that this paper is a big improvement over the usual arguments about the distinctions between computers and biology. The authors are making a real effort to identify what supposedly makes biology unique. Most of what they’re saying already accords with my own understanding of how the brain works, and what’s different about its computation. There are a few points where they try to pass off speculation as established fact, but those are nits.

That said, I think they oversell some of their points. For example, the distinction between analog and digital is often less than it appears. We listen to music and watch movies all the time in digital formats that were originally recorded in analog. Yes, something can be lost in the translation from continuous to discrete signaling, but an analog system always has variance noise: variations in a system’s processing, both relative to other systems of the same type and between runs of the same system. The trick is for the translation to keep the quantization noise, the distortion introduced by moving to a discrete format, below the variance noise in the original.
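The point about quantization versus variance noise can be sketched numerically. This is a minimal illustration, not anything from the paper: the `0.01` run-to-run analog noise level and the sine-wave "signal" are assumptions for the demo. At 16 bits, the error introduced by discretization ends up orders of magnitude below the analog system's own variability.

```python
import math
import random

def quantize(x, bits, lo=-1.0, hi=1.0):
    """Map a continuous value to the nearest of 2**bits discrete levels."""
    step = (hi - lo) / (2 ** bits - 1)
    q = round((x - lo) / step) * step + lo
    return min(max(q, lo), hi)

random.seed(0)
signal = [math.sin(2 * math.pi * t / 100) for t in range(1000)]

# "Variance noise": an analog run of the system differs slightly from the ideal.
analog_noise = 0.01  # illustrative run-to-run variation, an assumed figure
run_a = [s + random.gauss(0, analog_noise) for s in signal]

# Quantization noise: the distortion added by going discrete at 16 bits.
digital = [quantize(s, bits=16) for s in run_a]

def rms(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

quant_err = rms([d - a for d, a in zip(digital, run_a)])
run_gap = rms([a - s for a, s in zip(run_a, signal)])

print(quant_err < run_gap)  # digitization distorts less than the analog variance
```

With enough bit depth, the discrete copy is "closer" to any given analog run than two analog runs are to each other, which is the sense in which the analog/digital gap is smaller than it appears.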

Another is the aspect they call scale inseparability, the idea that the brain doesn’t use the layers of abstraction that technology uses. These layers exist in technology to make systems easier for engineers to understand and maintain. Evolution doesn’t care about understanding, so it’s not a factor in how biological systems are organized. The authors use this to imply that the software / hardware divide may be something the technology side will have to give up, and that the algorithm may need to be in the substrate, as it is with biology.

I think this represents confusion about what software actually is. We usually talk about software as a set of instructions that a processor follows. In most cases, it’s convenient to think about it that way. But at a more physical level, it makes more sense to think of software as a configuration of hardware. So when software is running on hardware, the algorithm is always the substrate.

The real distinction here is that technological computers are designed to be reconfigured on the fly. This is actually an amazing achievement when you stop and think about it. I often see articles marveling at the brain’s plasticity, its ability to rewire itself. But your computer’s memory can undergo wholesale reconfiguration on demand by loading a new software package, something brains can’t do, at least not quickly.

Of course, this comes with vulnerabilities brains are far less susceptible to. One reason computers can be hacked is this ability to massively reconfigure. Not that brains are completely immune. Ant brains can be hacked by a fungal infection, and cat owners can be infected with a parasite that makes them like their cats more, and that’s aside from the ability of advertisers and propagandists to hijack our brain’s reasoning to introduce notions we might otherwise resist. But it’s a harder thing to do effectively in biological systems.

What’s important to realize is that anything that can be done in hardware can, in principle, be done in software, at least once a minimal general computing platform is in place. You can run software that emulates other hardware platforms so you can run their software. It is true that doing it in hardware is often far more efficient in terms of performance and energy, but that comes with reduced flexibility. It’s why we now run word processors on our general purpose computers instead of the old word processing machines that once existed.
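The emulation point can be made concrete with a toy example. The three-instruction machine below is hypothetical, not any real ISA: a dozen lines of Python stand in for a piece of "hardware," and the emulator could itself run inside another emulator, and so on down.

```python
def run(program, registers=None):
    """Emulate a hypothetical 3-instruction register machine in software.

    Instructions: ("set", reg, value), ("add", dst, src), ("jnz", reg, target).
    """
    regs = dict(registers or {})
    pc = 0
    while pc < len(program):
        op, a, b = program[pc]
        if op == "set":
            regs[a] = b
        elif op == "add":
            regs[a] = regs.get(a, 0) + regs.get(b, 0)
        elif op == "jnz" and regs.get(a, 0) != 0:
            pc = b  # jump if register is nonzero
            continue
        pc += 1
    return regs

# Multiply 6 * 7 by repeated addition: r2 accumulates r0, r1 counts down.
prog = [
    ("set", "r0", 6),
    ("set", "r1", 7),
    ("set", "r2", 0),
    ("set", "r3", -1),
    ("add", "r2", "r0"),   # r2 += r0
    ("add", "r1", "r3"),   # r1 -= 1
    ("jnz", "r1", 4),      # loop while r1 != 0
]
print(run(prog)["r2"])  # 42
```

The cost is exactly the one noted above: every emulated instruction takes many host instructions, which is why doing it in hardware is faster but less flexible.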

So I don’t think the fact that current AI runs on software neural networks, in and of itself, is a showstopper. Another difference is that the brain operates with massive parallelization, far more than any current technological system. These systems can still perform something like the brain’s processing in software because they operate millions of times faster. Although the addition of GPUs, designed with parallelization in mind, helps a great deal.

But that, I think, gets to a valid concern the authors make about energy constraints. Discrete processing, and doing things with software instead of hardware, come at a cost in terms of energy and performance. This is something I do think AI researchers should be paying more attention to. All we need to do to understand how far current AI is from animal intelligence, much less human level, is look at the vast amounts of data and energy it requires to do what it does. Datacenters are sucking the power grid dry to meet their energy demands. All of which speaks to how crude the technology remains in comparison to biological intelligence.

But this energy constraint issue is broader than just trying to reproduce biological processes. I think it’s a problem for all technological computing. And it will likely eventually result in architecture changes. Understanding how biology does it may be important, but I tend to doubt the solution will be doing it exactly like those systems.

And this gets to a sentiment that I detect in the paper and write-ups about it. It’s the idea that consciousness is a ghost in the machine, one we need to find the magic ingredients for so we can generate it. I think this is fundamentally the wrong way to think about it. Neuroscientist Hakwan Lau, in a Bluesky post I think, summed up the issue: why do we think this might be true for consciousness when it isn’t for so many other things the body does, like motor control?

All that said, I do like the term “biological computation.” It admits that the computation in brains is different while still acknowledging the important ways it’s the same. I suspect that won’t be enough for those strongly convinced computationalism is wrong, but it still feels like useful progress.

What do you think about the points the authors make? Or my take on them? Are they right that a new hardware architecture is required? Or would even that be enough? Does the “biological computation” term strike the right balance?

#AI #ArtificialIntelligence #BiologicalComputation #ComputationalFunctionalism #Consciousness #functionalism #Neuroscience #Philosophy #PhilosophyOfMind

🆓 Free Will – Why I Said Yes To This Interview🎙️

“I acted on my own free will. Nobody forced me.”

With these words, #DanielDennett opens our #Zoomposium on the question of free will – and sets the tone for a conversation that is as pointed as it is profound.

📽 https://youtu.be/M2qiVz95ZYk

📎 https://philosophies.de/index.php/2023/12/25/naturalistic-view/

#FreeWill #PhilosophyOfMind #Consciousness #FreeWillDebate #SelfAndIdentity #CognitiveScience #Neuroscience #MultipleDraftsModel #Functionalism #Naturalism #Materialism


🧠🔧 Functionalism—the key to understanding the mind? Or a modern misconception?

#Functionalism and our understanding of the #mind: the #brain as software, #humans as #programs, #consciousness as a kind of #code.

A #Zoomposium with #ThomasFuchs on #embodiment, #AI, #neuroconstructivism, and the #future of our #view of humanity.

🎥 https://youtu.be/1ouxs6P3Enc

📎https://philosophies.de/index.php/2022/11/20/das-verkoerperte-bewusstsein/

#EmbodiedConsciousness #PhilosophyOfMind #CognitiveNeuroscience

🌈 Color is an illusion!

Atoms have no color; our brains create it.

In our Zoomposium, we talk with the late Daniel C. Dennett about consciousness, qualia, and why perception is about interpretation, not direct reality.

📽 Interview: https://youtu.be/M2qiVz95ZYk

📎 Information: https://philosophies.de/index.php/2023/12/25/naturalistic-view/

#DanielDennett #ConsciousnessExplained #PhilosophyOfMind #Zoomposium #CognitiveScience #AGI #Qualia #CognitiveNeuroscience #ArtificialIntelligence #Embodiment #Functionalism #RealPatterns