After a few months of working with Dr. Haikonen on my thesis, I’ve come to realize that a previous post I made about iCub’s phenomenal experiences is incorrect and needs an update. Before diving in, though, I want to say that we ought to treat philosophy like programming: bugs will arise as people continue to work with new ideas. I love debugging, though, so the thought of constantly having to go back and correct myself isn’t all that daunting. It’s about the journey, not the destination, as my partner likes to say.
In that post, I stated that “technically, iCub already has phenomenal consciousness and its own type of qualia,” but given what Haikonen argues in the latest edition of his book, this is not correct. Qualia consist of sensory information generated by physical neurons interacting with elements of the environment, and because iCub relies on sensors that create digital representations of physical properties, its “experiences” aren’t truly phenomenal. In biological creatures, sensory information is self-explanatory in that it requires no further interpretation (Haikonen 7): heat that generates sensations of pain signals the presence of a stimulus to be avoided, as demonstrated by unconscious reflexes. Because ‘heat’ requires no further interpretation, an organism can mitigate its effects on living cells very quickly, perhaps avoiding serious damage like a burn altogether. While it might look like iCub feels pain, what we see is a simulation generated by computer code that happens to mimic the behaviour of animals and humans. Without a human stipulating how heat → flinching, iCub would not respond that way, because its brain controls its body rather than the other way around.
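To make the point concrete, here is a minimal sketch of what a stipulated heat → flinching rule looks like in code. This is not actual iCub software; every name and threshold below is a hypothetical illustration of how a designer, not the robot, decides what a sensor value “means”:

```python
# Hypothetical sketch, not real iCub/YARP code: the "reflex" exists only
# because a programmer wrote this mapping from number to action.
PAIN_THRESHOLD_C = 50.0  # cutoff chosen by a human designer, not by the robot

def read_skin_sensor() -> float:
    """Stand-in for a tactile sensor returning a digital temperature reading."""
    return 72.3  # a binary representation of heat, not heat itself

def reflex_controller(temperature_c: float) -> str:
    # The robot "flinches" only because this rule was stipulated by a person;
    # the value 72.3 carries no intrinsic, self-explanatory meaning for iCub.
    if temperature_c > PAIN_THRESHOLD_C:
        return "withdraw_arm"
    return "no_action"

print(reflex_controller(read_skin_sensor()))  # -> withdraw_arm
```

Delete the `if` statement and the “pain response” vanishes entirely, which is the asymmetry with biology: no one has to program a living cell to treat damaging heat as something to avoid.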
As I stated in the previous post, Sartre outlines how a being-for-itself arises from a being-in-itself through recursive analysis, provided the neural hardware can support this cognitive action. Because iCub does not originate as a being-in-itself the way living organisms do, but as a fancy computer, the ontological foundation for phenomenal experiences or qualia is absent. iCub doesn’t care about anything, even itself, as it has been designed to produce behaviours for some end goal, like stacking boxes or replying to human speech. In biology, the end goal is continued survival and reproduction, and behaviours further this outcome through reflexes and sophisticated cognitive abilities. The brain–body relationship in iCub is backwards: the brain is designed by humans to govern the robot body, rather than the body generating signals that the nervous system uses to protect itself as an autonomous agent. In this way, organisms “care about” what happens to them, unlike iCub, for whom ripping off an arm doesn’t generate a reaction unless it were programmed to.
In sum, the signals passed around iCub’s “nervous system” exist as binary representations of real-world properties as conceptualized by human programmers. This degree of abstraction disqualifies these “experiences” from being labelled ‘qualia’, since they do not adhere to the principles identified in biology. The only way an AI could be phenomenally conscious is if it had the means to generate its own internal representations through a transduction process analogous to the one seen in biological agents (Haikonen 10–11).
Haikonen, Pentti O. Consciousness and Robot Sentience. 2nd ed., vol. 4, World Scientific, 2019, https://doi.org/10.1142/11404.