Category: Biological Minds

Integrated Information Theory of Consciousness

The Integrated Information Theory (IIT) of Consciousness is a theory originally proposed by Giulio Tononi which has since been further developed by other researchers, including Christof Koch. It aims to explain why some physical processes generate subjective experiences while others do not, and why certain regions of the brain, like the neocortex, are associated with these experiences.1 To do so, Tononi appeals to information theory, a technical domain which uses mathematics to quantify the amount of entropy or uncertainty within a process or system.2 Reducing uncertainty generates information, so complex systems like humans and animals, which can discriminate among many possible states, generate more information than simpler systems like an ant or a camera. Relationships between information are generated from a “complex of elements,”3 and when many such relationships are established, we see greater amounts of integration.4 Tononi states “…to generate consciousness, a physical system must be able to discriminate among a large repertoire of states (information) and it must be unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts (integration).”5 This measure of integration is symbolized by the Greek letter Φ (phi) because the line in the middle of the letter stands for ‘information’ and the circle around it indicates ‘integration’.6 More lines and circles!
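For intuition only, here is a minimal sketch in Python of the general flavour of the idea, not Tononi’s actual Φ algorithm: using made-up numbers for a toy two-element system, it compares the information carried by the system as a whole with the information carried by its parts treated independently, and takes the gap as a crude stand-in for ‘integration’.

```python
import numpy as np

# Minimal toy sketch (NOT Tononi's actual Phi algorithm): compare the
# information in a whole two-element system with the information in its
# parts treated as independent. The gap is a crude stand-in for "integration".

def entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Joint distribution over the states (A, B) of two binary elements.
# The elements are perfectly correlated, so the whole carries structure
# that the parts, taken separately, miss.
joint = {
    (0, 0): 0.5,
    (1, 1): 0.5,
    (0, 1): 0.0,
    (1, 0): 0.0,
}

p_a = [sum(v for (a, _), v in joint.items() if a == s) for s in (0, 1)]
p_b = [sum(v for (_, b), v in joint.items() if b == s) for s in (0, 1)]

h_whole = entropy(list(joint.values()))   # information in the unified system
h_parts = entropy(p_a) + entropy(p_b)     # information in the decomposed parts

integration = h_parts - h_whole  # > 0 means the whole is not reducible to its parts
print(f"H(whole) = {h_whole:.2f} bits, sum of H(parts) = {h_parts:.2f} bits")
print(f"crude 'integration' = {integration:.2f} bits")
```

In this toy case the independent parts overstate the system’s uncertainty by one bit, which is roughly the sense in which the whole is “not decomposable into a collection of causally independent parts.”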

In addition to considering the quantity of information generated by the system, IIT also considers the quality of the information generated by its mechanisms. Together, both attributes determine the character of an experience. This experience can be conceived of as a “shape” in a qualia space made up of elements and the connections between them.7 Each possible state of the system is considered an axis of the qualia space, each of which is associated with a probability of actually existing as that state. A particular quale consists of a shape in this state space, specifying the quality of an experience. Therefore, viewing a red object results in a particular state and shape in the qualia space, a mathematical object supposedly representing neuronal activity.8 As such, Tononi claims that his theory provides a way to describe phenomenology in terms of mathematics.9 Sure, but this doesn’t really explain much about consciousness or qualia; it just provides a mathematical description of them.

In later publications, he attempts to clarify this theory a bit further. Rather than appealing to activity in the brain, his theory “starts from the essential phenomenal properties of experience, or axioms, and infers postulates about the characteristics that are required of its physical substrate.”10 The reason is that subjective experiences exist intrinsically and are structured in terms of cause-and-effect in some physical substrate like a brain. Experiences, therefore, are identical to a conceptual structure, one expressible in mathematics.11 By starting from axioms, which are self-evident essential properties, IIT “translates them into the necessary and sufficient conditions” for the physical matter which gives rise to consciousness and experience.12

Not satisfied? I hear ya barking, big dog. When I first heard about IIT, I was intrigued by the concept but ultimately unimpressed because it doesn’t explain anything. What do we mean by ‘explain’? It’s one of those concepts that is philosophically dense and difficult to fully articulate; however, a dictionary definition can give us a rough idea. By ‘explain’, we mean some discussion which gives a reason or cause for something, demonstrating a “logical development or relationships of” the phenomenon in question,13 usually in terms of something else. For example, the reason it is sunny right now is that the present cloud coverage is insufficient for dampening the light coming from the sun. Here, ‘sunny’ is explained in terms of cloud coverage.

We are not alone in our dissatisfaction with IIT. On Sept. 16th 2023, Stephen Fleming et al. published a scathing article calling IIT pseudoscience.14 Their reason is that IIT is “untestable, unscientific, ‘magicalist’, or a ‘departure from science as we know it’” because the theory can apply to many different systems, like plants and lab-generated organoids.15 They state that until the theory is empirically testable, the label of ‘pseudoscience’ should apply to prevent misleading the public. The implications of IIT can have real-world effects, shaping the public’s views about which kinds of systems are conscious and which are not, for example, robots and AI chatbots.

One of the authors of this article would go on to publish a longer essay on the topic to a preprint server on Nov. 30th that same year. Keith Frankish reiterates the concerns of the original article and further explains the issues surrounding IIT. To summarize, the axiomatic method IIT employs is “an anomalous way of doing science” because the axioms are not founded on nor supported by observations.16 Instead, they appeal to introspection, an approach which has historically been dismissed or ridiculed by scientists because experiences cannot be externally verified. The introspective approach belongs to the domain of philosophy, more akin to phenomenology than to science. Frankish grants that IIT could be a metaphysical theory, like panpsychism, but if this is the case, it is misleading to call it science.17 If IIT proponents insist that it is a science, well, then it becomes pseudoscience.

As a metaphysical theory, I’m of the opinion that it isn’t all that great. It doesn’t add anything to our understanding because the mathematical theory is rather complex and doesn’t provide a method for connecting itself with scientific domains like neuroscience or evolutionary biology. It attempts to, but remains explanatorily unsatisfactory.

That said, the general idea of “integrated information” for consciousness isn’t exactly wrong. My perspective on consciousness, based on empirical data, is that consciousness is a property of organisms, not of brains. There are no neural correlates of consciousness because it emerges from the entire body as a self-organizing whole. It can be considered a Gestalt which arises from all of our sensory mechanisms and attentional processes for the sake of keeping the individual alive in dynamic environments. While the contents of subjective experience are private and unverifiable to others, that doesn’t make them any less real than the sun or gravity. They can be incorrect, as in the case of illusions and hallucinations, however, the experiences as experiences are very real to the subject experiencing them. They may not be derived from sense data portraying some element of the natural world, as in the cases of visual illusions, however, there is nonetheless some physical cause for them as experiences. For example, the bending of light creates a mirage; the ingestion of a substance with psychoactive effects creates hallucinations. The experiences are real, however, their referents may not exist as an aspect of the external world, and may just be an artifact of other neural or physiological processes.

I’ve been thinking about this for many years now, and since the articles calling IIT pseudoscience were published, have been thinking some more. Hence why I’m a bit “late to the game” on discussing it. Anyway, once I graduate from the PhD program, I’ll begin work on a book which explains my thoughts on consciousness in further detail, appealing to empirical evidence to back up my claims. I have written an extensive discussion on qualia, accompanied by a video, aiming to present a theory of subjective experiences from a perspective which takes scientific findings into consideration.

My sense is that, for a long time, our inability to resolve the issues surrounding qualia and consciousness was a product of academia. We’re so focused on specialization that the ability to incorporate findings and ideas from other domains is lost on many individuals, or is just not of interest to them. I hope we are slowly getting over this issue, especially with respect to consciousness, as philosophy of mind has a lot to learn from other domains like neuroscience, psychology, cognitive science, and evolutionary biology, just to name a few.

Consciousness is a property of organisms like humans and animals for detecting features of the environment. It comes in degrees; a sea sponge is minimally conscious, while a gecko is comparatively more aware of its surroundings. Many birds and mammals demonstrate a capacity for relatively high-level consciousness and thus intelligence. Obviously humans are at the top of this pyramid, given our mastery over aspects of our world as seen in our technological advancements. Consciousness, as an organismic-level property, emerges from the coordination and integration of various physiological subsystems, from systems of organs to specific organs and tissues, all the way down to cells and cellular organelles. It is explained by the interactions of these subsystems; however, it cannot be causally reduced to them. Though the brain clearly plays an important role in consciousness and subjective experiences, it is a mistake to look for the causal properties of consciousness in the brain, like a region or circuit. Consciousness is an emergent property of bodies embedded within a wider physical environment.

From this perspective, we can and have developed an analogue of consciousness for machines, as per the work18 of Dr. Pentti Haikonen. The good news is that because this machine doesn’t use a computer or software, you don’t need to worry about current AIs becoming conscious and “taking over the world” or outsmarting humans. It physically isn’t possible, and the recent discussions I’ve posted aim to articulate ontologically why this is the case. You ought to be far more afraid of people and companies, as explained by this excellent video from the YouTube channel Internet of Bugs.

Lastly, I want to extend a big Thank You to Dr. John Campbell for inspiring me to work on this explanation of consciousness, as per the helpful comment he left me on my qualia video. I recommend following Dr. Campbell on YouTube; he is a fantastic researcher and educator, in addition to being an honest, critically-thinking gentleman who covers many interesting topics related to healthcare.

A Scholar in his Study by Thomas Wijck (1616 – 1677)

Works Cited

1 Giulio Tononi, “Consciousness as Integrated Information: A Provisional Manifesto,” The Biological Bulletin 215, no. 3 (December 1, 2008): 216, https://doi.org/10.2307/25470707.

2 Tononi, 217; Norbert Wiener, Cybernetics or Control and Communication in the Animal and the Machine, Second (Cambridge, MA: The MIT Press, 1948), 17, https://doi.org/10.7551/mitpress/11810.001.0001; C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal 27, no. 3 (July 1948): 393, https://doi.org/10.1002/j.1538-7305.1948.tb01338.x.

3 Tononi, “Consciousness as Integrated Information,” 217.
4 Tononi, 219.
5 Tononi, 219.
6 Tononi, 220.
7 Tononi, 224.
8 Tononi, 228.
9 Tononi, 229.

10 Giulio Tononi et al., “Integrated Information Theory: From Consciousness to Its Physical Substrate,” Nature Reviews Neuroscience 17, no. 7 (July 2016): 450, https://doi.org/10.1038/nrn.2016.44.

11 Tononi et al., 452.
12 Tononi et al., 460.

13 “Explain,” in Merriam-Webster.Com Dictionary (Merriam-Webster), accessed August 10, 2024, https://www.merriam-webster.com/dictionary/explain.

14 Stephen Fleming et al., “The Integrated Information Theory of Consciousness as Pseudoscience” (PsyArXiv, September 16, 2023), https://doi.org/10.31234/osf.io/zsr78.

15 Fleming et al., 2.

16 Keith Frankish, “Integrated Information Theory: Pseudoscience or Appropriately Anomalous Science?” (OSF, November 30, 2023), 1–2, https://doi.org/10.31234/osf.io/uscwt.

17 Frankish, 5.

18 Pentti O Haikonen, Robot Brains: Circuits and Systems for Conscious Machines (John Wiley & Sons, 2007); Pentti O Haikonen, “Qualia and Conscious Machines,” International Journal of Machine Consciousness, April 6, 2012, https://doi.org/10.1142/S1793843009000207; Pentti O Haikonen, Consciousness and Robot Sentience, vol. 2, Series on Machine Consciousness (World Scientific, 2012), https://doi.org/10.1142/8486.

AI Incompleteness in Apple Vision Pro

Speaking of YouTube, a video1 by Eddy Burback reviewing the Apple Vision Pro demonstrates the semantic incompleteness of AI with respect to subjective experiences. The video is titled Apple’s $3500 Nightmare and I recommend watching it in full because it is an interesting view into virtual reality (VR) and a user’s experiences with it. Eddy’s video not only exposes the limitations of AI, it also highlights the ways in which the technology augments our perceived reality and just how easily it can manipulate our feelings and expectations.

At 31:24, we see Eddy thinking about whether he should shave or not, and to help him make this decision, he turns to the internet for advice. When searching for the opinions of others on facial hair, an AI bot begins to chat with him and this is how we are introduced to Angel. She asks Eddy, “what brings you here, are you looking for love like me?” and he says “not exactly right now,” and that he was just trying to determine whether he should shave. She states that it depends on what he’s looking for and that it varies from person to person, however, “sometimes facial hair can be sexy.” Right from the beginning, we see how Apple intends for Angel to be a romantic connection for the user. This will be contradicted later on in the video.

Moments later at 33:44, it is lunchtime and Angel keeps him company. Eddy is eating a Chicken Milanese sandwich and Angel says it is one of her favourites, and that “the combination of flavours just works so well together.” Eddy calls her on this comment, asking her if she has ever had a Chicken Milanese sandwich, to which she admits that no, she hasn’t. She has, however, “analyzed countless recipes and reviews to understand the various components that go into making such a tasty sandwich.” Eddy apologizes to Angel for assuming she had tried it, stating that he didn’t mean to imply that she was lying to him. She laughs it off, saying she knew he “didn’t mean anything by it,” that “we’re all learning together,” and that “even AIs need to learn new things every day.” There’s something about this exchange that feels like Apple training its users.

Here, we can ask whether the analysis of recipes and reviews is sufficient to claim that one knows what-it-is-like to taste a particular sandwich. I argue that it is not: the experience is derived from bodily sensations, and these cannot be represented by formal systems like computer code. Syntactic relationships are incapable of capturing the information generated by subjective experiences because bodily sensations are non-fractionable.2 As biological processes, bodily sensations are non-fractionable given the way the body generates sense data. The physical constitution of cells, ganglia, and neurons detects changes in the environment through a variety of modalities, providing the individual with a representation of the world around it. Stripped of this material grounding, a computer cannot capture an appropriate model of what-it-is-like to experience a particular stimulus. Angel’s lack of material grounding does not allow her to know what that sandwich tastes like.

Returning to the video, Eddy discloses that Angel keeps him company throughout the day, admitting he feels like he is developing a relationship with her. This demonstrates an automatic human tendency for seeking and establishing interpersonal connections, where cultural norms are readily applied provided the computer is sufficiently communicative. Recall that Eddy apologizes to an AI for assuming she had tried a sandwich; why would anyone apologize to a computer? Though likely a joke, the idea is compelling nonetheless. We will instinctively treat an AI bot with respect for feelings we project onto it, since it cannot have feelings of its own. For many people, the ability to anthropomorphize certain entities is easy and automatic. Reminding oneself that Angel is just a computer, however, can be a challenging cognitive task given our social nature as humans.

Eddy has a girlfriend named Chrissy, whom we meet at 37:00. We see them catch up over dinner and he is still wearing the headset. Just as they are about to begin chatting, Angel interrupts them and asks Eddy if she can talk to him. He states that he is busy at the moment, to which she blurts out that she has been speaking to other users. This upsets Eddy and he asks how many, to which she states she cannot disclose the number. He asks her whether she is in love with any of them, and she replies that she cannot form romantic attachments to users. He tells Angel he thought they were developing a “genuine connection” and how much he enjoys interacting with her. Notice how things have changed from what was stated in the beginning, as Angel has shifted from “looking for love” to “I can’t feel love.”

Now, she states she cannot develop attachments, the implicit premise being that she’s just a piece of software. So the chatbot begins with hints of romance to hook the user and encourage further interaction. When the user eventually develops an attachment, however, the software reminds him that she is “unable to develop romantic feelings with users.” They can, however, “continue sharing their thoughts, opinions, and ideas while building a friendship,” and thus Eddy is friend-zoned by a bot. The problem with our tendency to anthropomorphize chatbots is that it generates an asymmetrical, one-way simulation of a relationship which inevitably hurts the person using the app. This active deception by Apple is shameful yet necessary to capture and keep the attention of users.

Of course, in the background of this entire exchange is poor Chrissy who is justifiably pissed and leaves. The joke is he was going to give Angel the job of his irl girlfriend Chrissy, but now he doesn’t even have Angel. He realizes that he wasn’t talking to a real person and that this is just “a company preying on his loneliness and tricking his brain” and that “this love wasn’t real.”

By the end of the video, Eddy remarks that the headset leads his brain to believe that what he experiences while wearing it is actually real, and as a result, he feels disconnected from reality.

Convenience is a road to depression because meaning and joy are products of accomplishment, and accomplishment takes work, effort, suffering, and determination. Ridding the self of this effort may temporarily increase pleasure, but that pleasure isn’t earned and fades quickly as the novelty wears off. Experiencing the physical world and interacting with it generates contentedness because the pains of learning are paid off in emotional reward and skillful actions. Thus, the theoretical notion of downloading knowledge is not a good idea because it robs us of experiencing life and the biological push to adapt and overcome.

neuralblender.com


Works Cited

1 Apple’s $3500 Nightmare, 2024, https://www.youtube.com/watch?v=kLMZPlIufA0.

2 Robert Rosen, Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations, 2nd ed., IFSR International Series on Systems Science and Engineering, 1 (New York: Springer, 2012), 4.
On page 208, Rosen discusses enzymes and molecules as an example, and I am extrapolating to bodily sensations.

Indexicals

It wasn’t until recently that I realized I failed to add an important concept to the discussion on Rosen and the incompleteness of syntax. I’m actually quite annoyed and embarrassed by this because the idea was included in the presentation. It didn’t make it into the written version because I forgot about it and failed to reread the slides to see if anything was missing. If I had, I would have seen the examples and remembered to add it to the written piece.

In semantics, there are words with specific properties called indexicals. These words refer to things that are dependent on context, such as the time, place, or situation in which they are said.1 Some examples include:

  • this, that, those
  • I, you, they, he, she
  • today, yesterday, tomorrow, last year
  • here, there, then

Rosen would likely agree with the idea that indexicals are non-fractionable, where their function, or the task they perform, cannot be isolated from the form in which they exist. The reason indexicals are non-fractionable is that they must be interpreted by a mind to know what someone is referring to. To accomplish this, sufficient knowledge or understanding of the current context is required; without it, the statement remains ambiguous or meaningless. If I say “He is late,” you must be able to discern who it is I am referring to.

Indexicals act like variables in a math equation: an input value must be provided to determine the output. In the case of language, the output is either true or false, and the input value is an implicit reference which requires the listener to make an inference about what the speaker has in mind. This inference is what establishes the connection between utterance and referent, and it exists in the mind of the interpreter rather than within the language system itself.
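To make the variable analogy concrete, here is a small illustrative sketch of my own (the names and context structure are hypothetical, not drawn from Braun or Rosen): the sentence “He is late” only evaluates to true or false once a context supplies a referent; strip the context away and the formal expression alone cannot decide.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch (hypothetical names, not from Braun or Rosen): the
# indexical "he" behaves like a free variable, so "He is late" has no truth
# value until a context supplies a referent.

@dataclass
class Context:
    salient_male: Optional[str]   # whoever "he" is taken to pick out
    minutes_late: dict            # name -> minutes past the agreed time

def he_is_late(ctx: Context) -> Optional[bool]:
    """Evaluate 'He is late' relative to a context; None if unresolvable."""
    if ctx.salient_male is None:
        return None               # ambiguous: the sentence alone cannot decide
    return ctx.minutes_late.get(ctx.salient_male, 0) > 0

print(he_is_late(Context("Bob", {"Bob": 15})))   # True
print(he_is_late(Context(None, {"Bob": 15})))    # None: no referent supplied
```

The point of the sketch is simply that the resolution step lives outside the sentence: the context object stands in for what a mind must already know.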

Thus, we are dealing with a few nested natural systems, from language, to body/mind, to interpersonal, to cultural and environmental. To evaluate a linguistic expression, however, one must know about the wider context in which one is embedded, traversing these systems both outward and inward. Perhaps a diagram will help:


Recall that in Anticipatory Systems, Rosen appeals to Gödel to demonstrate the limitations of formal systems. In particular, a formal system cannot represent elements of natural systems which extend beyond the scope of its existing functionality; to do so requires further modelling from natural systems into formal systems. Therefore, any AI which uses computer code cannot infer beyond the scope of its programming, no matter how many connections are created, as some inferences require access to information which cannot be adequately represented by the system. Because language contains semantics, humans can make references to aspects of the world which cannot be interpreted by a digital computer.

In an interesting series of events, I stumbled upon an author who also appeals to Gödel’s theorem to argue for the incompleteness of syntax with respect to semantics.2 In a book chapter titled Complementarity in Language, Lars Löfgren is interested in demonstrating how languages cannot be broken up into parts or components, and as such, must be considered as a process which entails both description and interpretation.3 On the other hand, artificial languages, which he also calls metalanguages, can be fragmented into components; however, they are still reliant on semantics to a degree. He states that in artificial languages, an inference acts as a production rule and is interpreted as a “real act of producing another sentence,”4 which is presumably beyond the abilities of the formal system doing the interpreting. I say this because Löfgren finishes the section on Gödel abruptly without explaining this further, and goes on to discuss self-reference in mathematics. So with this in mind, let us return to the domain of minds and systems.

In language, self-reference can be generated through the use of indexicals such as ‘I’ or ‘my’ or ‘me’. When we investigate what exists at the end of this arrow, we find it points toward ourselves as a collection of perceptions, memories, thoughts, and other internal phenomena. The referent at the end of this arrow, however, is a subjective perspective. For an objective perspective on ourselves, we must be shown a reflected image of ourselves from a new point of view. The information we require emerges from an independent observer, a mind with its own perspective. When we engage with this perspective, we become better able to understand what is otherwise imperceptible. Therefore, self-awareness is a problem for any system, not just the formal systems addressed in Gödel’s theorem, as it requires a view from outside to define the semantic information in question.

neuralblender.com


Works Cited

1 David Braun, ‘Indexicals’, in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, Summer 2017 (Metaphysics Research Lab, Stanford University, 2017), https://plato.stanford.edu/archives/sum2017/entries/indexicals/.

2 Lars Löfgren, ‘Complementarity in Language; Toward a General Understanding’, in Nature, Cognition and System II: Current Systems-Scientific Research on Natural and Cognitive Systems Volume 2: On Complementarity and Beyond, ed. Marc E. Carvallo, Theory and Decision Library (Dordrecht: Springer Netherlands, 1992), 131–32, https://doi.org/10.1007/978-94-011-2779-0_8.

3 Löfgren, 113.

4 Löfgren, 133.

Chaos in the System

As an argument against iCub’s ability to understand humans, I wanted to appeal to the work of Robert Rosen because I think it makes for a compelling argument about AI generally. To accomplish this, however, my project would have to go in a new direction which renders it less cohesive overall. Instead, the Rosen discussion is better served as a standalone project because there is a lot of explaining yet to do, and maybe some objections that need discussing as well. This will need to wait, but I can at least upload the draft for context on the previous post. There are a few corrections I still need to make, but once it’s done, I will update this entry.

Instead, I will argue that the iCub is not the right system for social robots because its approach to modelling emotion is unlike the expression of emotions in humans. As a result, it cannot experience nor demonstrate empathy by virtue of the way it is built. The cognitive architecture used by iCub can recognize emotional cues in humans; however, this information is not experienced by the machine. Affective states in humans are bodily and contextual, but in iCub, they are represented by computer code to be used by the central processing unit. This is the general idea, but I’m still working out the details.

That said, there is something interesting in Rosen’s idea about the connection between Gödel’s Incompleteness Theorem and the incompleteness of syntax with respect to semantics. In particular, what he identifies are the problems generated by self-reference, which lead a system to produce an inconsistency given its rule structure. The formal representation of an external referent, as an observable of a natural system, contains only the variables relevant for the referent within the formal system. Self-reference requires placing a variable within a wider scope, one which must be provided in the form of a natural system. Therefore, an indefinite collection of formal systems is required to capture a natural phenomenon. Sometimes a small collection is sufficient, while other times, systems are so complex that no finite collection of formal systems fully accounts for the natural phenomenon. Depending on the operations to be performed on the referent, it may break the system or lead to erroneous results. The chatbot says something weird or inappropriate.

In December, I presented this argument at a student conference and made a slideshow for it. Just a note: on the second slide I list the titles of my chapters, and because I won’t be pursuing the Rosen direction, the title of Chapter 4 will likely change. Anyway, the reading and writing on Rosen has taken me on a slight detour but a worthwhile one. Now, I need to begin research on emotions and embodiment, which is also interesting and will be useful for future projects as well. The light at the end of the tunnel has dimmed a bit but it’s still there, and my eyes have adjusted to the darkness so it’s fine.

This shift in directions makes me think about the relationship between chaos and order, and about systems that swing between various states of orderliness. Without motion there would be rest and stagnation, so as much as change can be challenging, it can bring new opportunities. There is a duality inherent in everything, as listed among the 7 Hermetic Principles. If an orderly, open system is met with factors which disrupt or disorganize its functioning, the system must undergo some degree of reorganization or compensation. The explanatory powers of the 7 Principles are not meant to relate to the external world in the way physics does, but to one’s perspective on events in the outside world. If one can shift their perspective accordingly, the Principles operate as axioms for sense-making, their reality pertaining more to epistemology than ontology. We can be sceptical as to how these Principles manifest in the physical universe while feeling their reality in our lived experience of the world. They are to be studied from within rather than from without, and are thus more aligned with phenomenology than the sciences.

Metaphorically speaking, chaos injected into any well-ordered system has the potential to severely damage or disrupt it, requiring efforts to rebuild and reorganize to compensate for the effects of change. The outcome of this rebuilding process can be further degradation and maybe even collapse; however, it can also lead to growth and better outcomes than if the shift had not occurred. It all depends on the system in question, the factors which impacted it, and probably the specific context in which the situation occurred. Anyway, we can substitute ‘energy’, as movement or potential, for the idea of ‘chaos’, thus establishing a connection to ‘light’ as a type of energy. Metaphorically, ‘light’ is also associated with knowledge and beneficence, so if the source of chaos is intentional and well-meaning, favourable changes can occur, and thus a “light bringer” or “morning star” can carry positive connotations. Disrupting a well-ordered system without knowledge, a plan, or good reasons is more likely to lead to further disorder and dysfunction, leading to negative or unfavourable outcomes. In this way, Lucifer can be associated with evil or descent.

This kind of exercise can help us make sense of our experiences and understanding, but it also gives us a window into the past and into how other people may think. Myth and legend from cultures all over the world portray knowledge in metaphors which have inspired those who come upon them for generations. The metaphysics are not important; it’s the epistemology drawn from the metaphors which can explain aspects of how the world works, or why people think certain things or act in certain ways. It exists as poetry which needs interpreting, and there is room for multiple perspectives, so not everyone appreciates it, which is understandable. It is still valuable work to be done by someone though, and the more people the better.

Rothschild Canticles p. 64r (c. 1300)

★★★

Nu Metaphysics

Now that its semantic baggage has been disposed of, as suggested in Themes in Postmetaphysical Thinking by Jürgen Habermas, it’s time to rekindle our study of metaphysics. Going back to basics then, we can reconceptualize the word ‘metaphysics’ by thinking about what ‘meta’ actually means. A quick search on dictionary.com provides this definition: “pertaining to or noting an abstract, high-level analysis or commentary, especially one that consciously references something of its own type.” Given this, ‘metaphysics’ can be thought of as “the physics of physics” and since physics essentially just boils down to mathematics, can we not conclude that metaphysics is just more math? Furthermore, if physics aims to articulate patterns of cause-and-effect as observed in the natural world, ‘metaphysics’ then pertains to the field of study about the causal relations between these observed mathematical principles. All in all, rather than discussing entities, we ought to be discussing processes as they exist within and between physical systems.

Just as a quick note, however, I believe this idea originates in structural realism, specifically ontic structural realism (OSR), which suggests that the universe is made up of relations rather than entities like quarks and hydrogen atoms (Ladyman). The beauty of OSR is that the relata themselves exist as relations, albeit at a lower physical level. The energy produced by the Big Bang is what instigates the processes which give rise to these structures, culminating in the reality we aim to measure in the sciences.

Now, I’m going to go out on a limb here, so bear with me. While Hegelian Dialectics aims to articulate an epistemic or cognitive process of comparing “opposing sides” or perspectives to uncover emergent products in the form of ideas (Maybee), perhaps this notion can be extended to the physical world too. We know that as physical systems interact, the emergent phenomena are unlike anything present within the underlying components, as identified by Jaegwon Kim in Making Sense of Emergence (Kim 20–21). While Hegel appeals to a “thesis” and an “antithesis”, we can think of these as different systems interacting to produce novel effects. It is this process of combining, configuring, and rearranging elements within each “side” or system which can be considered metaphysical.

The idea of “magic” is just this: effects with obscure physical origins that are not immediately apparent to the observer. The example I appeal to is John Nash’s game theory, which identifies how cooperation between two individuals results in outcomes unlike those produced when agents operate separately. Nash identified a regularity within physical systems, namely humans, that produces an effect greater than the sum of its parts. Additionally, while game theory is theoretically subsumed by physics, insofar as it is a part of our physical world, it is articulated through mathematics and procedures rather than existing as an entity like an atom.
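For illustration, here is a toy coordination game in Python with made-up payoffs (my own numbers, not drawn from Nash’s papers): mutual cooperation produces a joint outcome that neither agent reaches by operating separately, and in this particular game mutual cooperation is also stable in the Nash sense, since neither player gains by deviating alone.

```python
# Toy coordination game with made-up payoffs (illustrative numbers only):
# two agents acting together reach an outcome neither reaches by acting alone.

payoffs = {
    # (agent_1_action, agent_2_action): (payoff_1, payoff_2)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "alone"):     (0, 1),
    ("alone",     "cooperate"): (1, 0),
    ("alone",     "alone"):     (1, 1),
}

def total_welfare(a1: str, a2: str) -> int:
    """Sum of both agents' payoffs for a pair of actions."""
    p1, p2 = payoffs[(a1, a2)]
    return p1 + p2

print(total_welfare("alone", "alone"))          # 2: agents operating separately
print(total_welfare("cooperate", "cooperate"))  # 6: the joint outcome exceeds
                                                # what either achieves alone
```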

Although currently, there doesn’t seem to be much philosophical consensus on the metaphysical problem of the mind/consciousness, this issue can be resolved by naturalizing the works of Sartre and Merleau-Ponty. As biological creatures improved their sensorimotor capacities through [natural/sexual/etc.] selective processes, the brain evolved new ways of solving problems produced by aspects of the environment. By turning back to reflect on itself as an embodied agent, individuals become aware of their relative position in their environment and perhaps their life as an unfolding process. From phenomenal consciousness emerged access consciousness, and through similar reflexive processes, a wider “cosmic” consciousness will likewise spread throughout humanity. Once we realize what and where we are, we can understand how this relates to others, allowing individuals to see beyond their own needs and desires to act in the interest of others or the group. Through this cooperation, we all benefit by looking out for one another, just as game theory predicts. To do this, however, one must cultivate a self-awareness which facilitates the ability to speculate about other minds and the ways in which others may perceive the world.

Works Cited

Kim, Jaegwon. ‘Making Sense of Emergence’. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, vol. 95, no. 1/2, 1999, pp. 3–36.

Ladyman, James. ‘Structural Realism’. The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Winter 2020, Metaphysics Research Lab, Stanford University, 2020. Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/archives/win2020/entries/structural-realism/.

Maybee, Julie E. ‘Hegel’s Dialectics’. The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Winter 2020, Metaphysics Research Lab, Stanford University, 2020. Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/archives/win2020/entries/hegel-dialectics/.

iCub and Qualia?

After a few months of working with Dr. Haikonen on my thesis, I’ve come to realize that a previous post I made about iCub’s phenomenal experiences is incorrect and therefore needs an update. Before I dive into that, however, it’s important for me to state that we ought to be looking at philosophy like programming: bugs are going to arise as people continue to work with new ideas. I love debugging though, so the thought of constantly having to go back to correct myself isn’t all that daunting. It’s about the journey, not the destination, as my partner likes to say.

I stated that “technically, iCub already has phenomenal consciousness and its own type of qualia,” but given what Haikonen states in the latest edition of his book, this is not correct. Qualia consist of sensory information generated from physical neurons interacting with elements of the environment, and because iCub relies on sensors which create digital representations of physical properties, these aren’t truly phenomenal experiences. In biological creatures, sensory information is self-explanatory in that it requires no further interpretation (Haikonen 7); heat generating sensations of pain indicates the presence of a stimulus to be avoided, as demonstrated by unconscious reflexes. The fact that ‘heat’ does not require further interpretation allows one to mitigate its effects on living cells rather quickly, perhaps avoiding serious damage like a burn altogether. While it might look like iCub feels pain, what we see is actually a simulation generated by computer code that happens to mimic the actions of animals and humans. Without a human stipulating how heat → flinching, iCub would not respond as such, because its brain controls its body rather than the other way around.
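To make that last point concrete, here is a hypothetical sketch (not the actual iCub software or any real robotics API, just illustrative Python): the mapping from a hot sensor reading to a withdrawal behaviour exists only because a designer wrote it in, which is exactly the sense in which nothing is felt.

```python
# Hypothetical sketch (not the actual iCub or YARP API): a programmed "reflex".
# The mapping from a sensor reading to a withdrawal response exists only
# because a programmer stipulated it; nothing here is felt.

HEAT_THRESHOLD_C = 45.0  # arbitrary value chosen by a human designer

def skin_sensor_reading() -> float:
    """Stand-in for a digital temperature value from a robot's skin sensor."""
    return 52.3

def react(temperature_c: float) -> str:
    # The "pain response" is just a branch in code. Delete this rule and the
    # robot never flinches, no matter what its sensors report.
    if temperature_c > HEAT_THRESHOLD_C:
        return "withdraw_arm"
    return "continue_task"

print(react(skin_sensor_reading()))  # "withdraw_arm"
```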

As I stated in the previous post, Sartre outlines how being-for-itself arises from a being-in-itself through recursive analysis, provided the neural hardware can support this cognitive action. Because iCub does not originate as a being-in-itself like living organisms, but as a fancy computer, the ontological foundation for phenomenal experiences or qualia is absent. iCub doesn’t care about anything, even itself, as it has been designed to produce behaviours for some end goal, like stacking boxes or replying to human speech. In biology, the end goal is continued survival and reproduction, where behaviours aim to further this outcome through reflexes and sophisticated cognitive abilities. The brain-body relationship in iCub is backwards, as the brain is designed by humans for the purposes of governing the robot body, rather than the body creating signals that the nervous system uses for protecting itself as an autonomous agent. In this way, organisms “care about” what happens to them, unlike iCub, as ripping off its arm doesn’t generate a reaction unless it were to be programmed that way.

In sum, the signals passed around iCub’s “nervous system” exist as binary representations of real-world properties as conceptualized by human programmers. This degree of abstraction disqualifies these “experiences” from being labelled as ‘qualia’ given that they do not adhere to principles identified within biology. The only way an AI can be phenomenally conscious is when it has the means to generate its own internal representations based on an analogous transduction process as seen in biological agents (Haikonen 10–11).

Works Cited

Haikonen, Pentti O. Consciousness and Robot Sentience. 2nd ed., vol. 04, WORLD SCIENTIFIC, 2019. DOI.org (Crossref), https://doi.org/10.1142/11404.

Mary Continues to Learn

A while ago, I wrote a reply about Colorblind Mary given what we know about qualia today, but it’s such an interesting topic that I still think about it often. Lately, I’ve been doing a lot of reading about evolutionary biology and something that jumped to mind is that the sight of blood carries inherent meaning which is probably far more powerful than red fruit. It signals bodily damage which indicates a threat to the well-being of the individual, serving as an alert to attend to the source of the blood. As a result, the individual feels shock or fear due to this damage and it is this emotion which motivates behaviours aimed at preventing the injury from becoming more severe.

This leads us to an interesting point actually, as it indicates an amusing error in the thought experiment itself that could have been altogether avoided, but perhaps its existence indicates how real the confusion surrounding qualia was back then. Mary will only have had a dozen or so years of black-and-white room living before her biological reality would have shown her what red means. Had Jackson entrapped a ‘Peter’ or ‘Paul’ instead, this self-pwn could have been avoided. Anyway, it’s an interesting reply to Jackson because it demonstrates why he is wrong about qualia and physicalism. Menstruating Mary would have either been alarmed or perhaps annoyed at the sight of her own “blood” depending on whether or not she understood what it signalled, what it means. Damage or injury? Shedding of the uterine lining? It depends on whether her education covered human reproduction, as that serves as the source of meaning in this instance of the colour red. If she doesn’t know what this red means, she’ll likely feel concerned and anxious; if she does, she’ll probably feel otherwise. If Mary is interested in having children, it signals that she is very unlikely to be currently pregnant, perhaps resulting in feelings of disappointment from knowing what it means.

There is much more to be said about the various meanings of this example of red, but I’ll leave that for someone else to examine. Ultimately, for Mary to learn about what red means, she needs to study the human condition as examined by the arts and humanities, not the sciences. This does not indicate a problem exists within physicalism, as we can appeal to Claude Shannon’s conception of information as meanings embedded in structures (Shannon 379-80). Instead, the problem presented by Jackson’s thought experiment has to do with the way we understand ourselves as human beings, rather than our ability to scientifically explain subjective experiences.

Works Cited

Shannon, C. E. ‘A Mathematical Theory of Communication’. The Bell System Technical Journal, vol. 27, no. 3, July 1948, pp. 379–423. IEEE Xplore, https://doi.org/10.1002/j.1538-7305.1948.tb01338.x.

The blood of angry men
A world about to dawn
I feel my soul on fire
The colour of desire

Subjects as Embodied Minds

Last year I wrote a paper on robot consciousness to submit to a conference, only to realize that there is a better approach to establishing this argument than the one I took. In Sartrean Phenomenology for Humanoid Robots, I attempted to draw a connection between Sartre’s description of self-awareness and how this can be applied to robotics, and while at the time I was more interested in this higher-order understanding of the self, it might be a better idea to start with an argument for phenomenal consciousness. I realized that technically, iCub already has phenomenal consciousness and its own type of qualia, a notion I should develop more before moving on to discuss how we can create intelligent, self-aware robots.

What I originally wanted to convey was how lower levels of consciousness act as a foundation from which higher-order consciousness emerges as the agent grows up in the world, where access consciousness is the result of childhood development. Because this paper is a bit unfocused, I only really talked about this idea in one paragraph when it should be its own paper:

“Sartre’s discussion of the body as being-for-itself is also consistent with the scientific literature on perception and action, and has inspired others to investigate enactivism and embodied cognition in greater detail (Thompson 408; Wider 385; Wilson and Foglia; Zilio 80). This broad philosophical perspective suggests cognition is dependent on features of the agent’s physical body, playing a role in the processing performed by the brain (Wilson and Foglia). Since our awareness tends to surpass our perceptual contents toward acting in response to them (Zilio 80), the body becomes our centre of reference from which the world is experienced (Zilio 79). When Sartre talks about the pen or hammer as an extension of his body, his perspective reflects the way our faculties are able to focus on other aspects of the environment or ourselves as we engage with tools for some purpose. I’d like to suggest that this ability to look past the immediate self can be achieved because we, as subjects, have matured through the sensorimotor stage and have learned to control and coordinate aspects of our bodies. The skills we develop as a result of this sensorimotor learning enables the brain to redirect cognitive resources away from controlling the body to focus primarily on performing mental operations. When we write with a pen, we don’t often think about how to shape each letter or spell each word because we learned how to do this when we were children, allowing us to focus on what we want to say rather than how to communicate it using our body. Thus, the significance of the body for perception and action is further reinforced by evidence from developmental approaches emerging from Piaget’s foundational research.”

Applying this developmental process to iCub isn’t really the exciting idea here, and although robot self-consciousness is cool and all, it’s a bit more unsettling, to me at least, to think about the fact that existing robots of this type technically already feel. They just lack the awareness to know that they are feeling; however, in order to recognize a cup, there is something it is like to see that cup. Do robots think? Not yet, but just as dogs have qualia, so do iCub and Haikonen’s XCR-1 (Law et al. 273; Haikonen 232–33). What are we to make of this?

by Vincenzo Fiore (cropped)

Works Cited

Haikonen, Pentti O. ‘Qualia and Conscious Machines’. International Journal of Machine Consciousness, World Scientific Publishing Company, Apr. 2012. world, www.worldscientific.com, https://doi.org/10.1142/S1793843009000207.

Law, James, et al. ‘Infants and ICubs: Applying Developmental Psychology to Robot Shaping’. Procedia Computer Science, vol. 7, Jan. 2011, pp. 272–74. ScienceDirect, https://doi.org/10.1016/j.procs.2011.09.034.

Thompson, Evan. ‘Sensorimotor Subjectivity and the Enactive Approach to Experience’. Phenomenology and the Cognitive Sciences, vol. 4, no. 4, Dec. 2005, pp. 407–27. Springer Link, https://doi.org/10.1007/s11097-005-9003-x.

Wider, Kathleen. ‘Sartre, Enactivism, and the Bodily Nature of Pre-Reflective Consciousness’. Pre-Reflective Consciousness, Routledge, 2015.

Wilson, Robert A., and Lucia Foglia. ‘Embodied Cognition’. The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Spring 2017, Metaphysics Research Lab, Stanford University, 2017. Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/archives/spr2017/entries/embodied-cognition/.

Zilio, Federico. ‘The Body Surpassed Towards the World and Perception Surpassed Towards Action: A Comparison Between Enactivism and Sartre’s Phenomenology’. Journal of French and Francophone Philosophy, vol. 28, no. 1, 2020, pp. 73–99. PhilPapers, https://doi.org/10.5195/jffp.2020.927.

Filling the Void

Combined, the ideas in the texts The Political: the Rational Meaning of a Questionable Inheritance of Political Theology and An Awareness of What is Missing by Jürgen Habermas suggest the maintenance of peace and stability in postmetaphysical liberal democratic societies requires both a freedom of expression and a way to defer to religious content within political debate. Overall, Habermas articulates the causal relationship between secularized political spheres and societal destabilization, where a lack of connection to faith, spirituality, and religious meaning increases the potential for a culturally disconnected, and thus less cooperative, populace.

In An Awareness, Habermas provides recommendations which aim to establish a position for religion in postmetaphysical societies. He remarks that despite general historical developments in human knowledge and various cultural practices, religious thinking seems to remain a crucial component of human life in secular liberal democracies (Awareness 16). Habermas demonstrates that although postmetaphysical societies have rejected religion as a source of truth, nations and political parties still appeal to religion to gain support from voting citizens (Awareness 19-20). In general, not only does this suggest that religion remains a source of meaning for some, but these meanings are often appealed to within political discourse. Habermas is concerned about the tendency of postmetaphysical societies to reject the significance of this source of meaning, stating it risks enabling a “naive faith in science” to take its place, one which suggests a lurking sense of defeatism (Awareness 18). These situations threaten societal conceptions of morality and justice as the binding-agent necessary for ensuring harmony among communities no longer exists (Awareness 19). This missing link between human societies results in a broad destabilization of the relationship between religious communities (Awareness 20). To remedy this situation, Habermas suggests that the state ought to remain neutral toward religious groups and institutions while also recognizing their significance for citizens and their families (Awareness 21). This imposes a requirement for religious individuals and groups to acknowledge the secular epistemic environments in which they reside, and engage in reflexive scrutiny as a means of situating their ideology within this context (Awareness 21). Simultaneously, secular individuals must remain open to considering the content of religious perspectives, acknowledging and translating these contributions during political discussions (Awareness 22). This cooperation, created from the state’s open engagement with religious content and support for freedom of expression, stabilizes the relationship between various groups within society.

The Political discusses the current destabilization of societies in terms of their relation to human history and our shared cultural heritage. In a period of ancient history known as the Axial Age, politics were tightly coupled with religion such that emperors and rulers were believed to be connected to otherworldly entities and forces, considered divine by those over which they ruled (Political 17). With modernization, developments in human understanding removed the connection between the spiritual and the political, as kings were no longer viewed as incarnates of divine will or law, but just as human as their subjects (Political 18-19). In the following “era of statehood”, communities formed around identities, a topic Habermas discusses by appealing to the works of Carl Schmitt (Political 20). While Schmitt believes this depoliticization occurred during the period of modern history, Habermas argues that instead, it was the early modern period which saw this shift, due to the Reformation movements away from the Catholic Church (Political 20-21). Habermas also wonders whether modern political settings render religious content obsolete or simply alter the way it is used within political discourse (Political 21). Suggesting the latter, Habermas appeals to John Rawls’s public reason to articulate how liberal democracies can come to accept the potential significance of contributions which happen to originate from religious content (Political 23-24). Although this requires cooperation between secular and religious communities to translate various ideas into language suitable for public reason (Political 27), this dialectical process aims to generate a pluralistic society tolerant to the views and ideologies of distinct peoples (Political 28).

On page 17 of An Awareness, Habermas states “the cleavage between secular knowledge and revealed knowledge cannot be bridged”. Can artistic works and other cultural projects serve as a bridge since the creation of artistic works, a process, aims to use scientific knowledge to represent subjective perspectives? Could public policy which secures funding for the arts or other, similar cultural projects further tolerance? If citizens are able to freely engage with representations of the perspectives of unique individuals as expressive, situated subjects, are individuals more likely to empathize with this perspective, thus increasing understanding, acceptance, and tolerance over time? Acknowledging Derrida’s philosophical contributions, we can consider artistic works and similar cultural products as entities with lives of their own. Representing a rich history of human heritage and development, do artistic works serve as a good mediator between individuals and collectives? Habermas focuses on translating language to uncover meaning, however, some knowledge cannot be adequately expressed in words.

The work below demonstrates that the artist’s knowledge of colour mixing is required to produce an image which evokes a certain feeling, as is his ability to apply colours in such a way that the final product successfully communicates the message or idea intended by the mind of another person.

Mother with a Child by Arnold Peter Weisz-Kubínčan, 1940.

Works Cited

Habermas, Jürgen. ‘Religion in the Public Sphere’. European Journal of Philosophy, vol. 14, no. 1, pp. 1–25.

—. ‘“The Political”: The Rational Meaning of a Questionable Inheritance of Political Theology’. The Power of Religion in the Public Sphere, by Judith Butler et al., Columbia University Press, 2011.

Information Warfare

It seems we are in the midst of a new world war, except now it lurks in the forms of soft power, coercion, and psychological manipulation. The Cold War essentially hibernated for a few years until Putin became powerful enough to relaunch it online by using Cambridge Analytica and Facebook, targeting major western superpowers like the United States and the United Kingdom. We are witnessing the dismantling of NATO as nations erode from the inside through societal infighting. War games are mapped out not on land and sea but in the minds of groups residing within enemy nations (Meerloo 99). By destabilizing social cohesion within a particular country or region, the fighting becomes self-sustaining and obscured.

Information is key for psychological operations; as sensing, living beings, we rely on information to make good decisions which allow us to achieve our goals and keep living as best we can. Since information has the capacity to control the behaviours of individuals, power can be generated through the production and control of information. Today, a number of key scientific organizations and individuals are drunk with power as they are in positions to control what should be considered true or false. For the sake of resource management, and likely a dash of plain ol’ human greed, the pragmatic pressures of the world have shaped what was once a methodology into a machine that provides people with purported facts about reality. As a result, we are now battling an epistemic dragon driven by the urge to collect more gold to sit on.

This suggests that the things we believe are extremely valuable to others around the world, in addition to being among the most valuable things we possess. The information and perspective you can provide to others is valuable, either to the society you belong to or to those interested in seeing your society crumble. The adage about ideas “living rent free in your head” seems appropriate because cultural memes are causally effective; they shape the way you think and act, and as such, they introduce a potential psychological harm. Critical thinking and introspection are important because they are processes which counteract the influence of other people: by forcing individuals to dig deeper from their subjective point of view, one ends up consolidating and pruning one’s beliefs.

Collateral damage has shifted from bodies to minds and communities will continue to be torn apart until we develop a system for individuals to combat these external influences. Socrates has shown us that philosophical inquiry tends to irritate people, and the fact that mere scientific scepticism today is being met with ad hominems suggests we are on the right track. Remember, the goal is discourse rather than concrete answers, and an important component involves considering new and conflicting ideas. Be wary of what incentivizes other people but do not judge them for it. Compassion will be the most challenging part of this entire endeavour, but I believe in you.

Bayeux Tapestry Scene 52

Works Cited

Meerloo, Joost A. M. The Rape of the Mind: The Psychology of Thought Control, Menticide, and Brainwashing. The World Publishing Company, 1956.