Author: Molly_G

Esoteric Theories of Consciousness: Panpsychism

This is a topic I have been wanting to write about for a long time, since reading Itzhak Bentov’s book Stalking the Wild Pendulum. Until I read this book, I dismissed panpsychism as a crazy theory with no connection to reality. Briefly, panpsychism suggests that consciousness is a fundamental feature of the universe,1 and arises from a system’s functional organization.2 If a certain organization is achieved, the system is capable of phenomenal experiences like pain or the taste of lemon. Differing levels of consciousness can be explained by appealing to the complexity of this functional organization; for example, a flatworm is more conscious than a rock.3 Though popularized by David Chalmers in the 1990s, some central tenets of panpsychism can be traced back to Spinoza4 and Leibniz.5 Within the multitude of versions of panpsychism, however, no connection to any scientific knowledge is made, making the theory difficult to support. I’m all for crazy theories, but my interest in one is proportional to its relation to something we know or suspect about the world. Bentov’s discussion of consciousness, however, provides a means for connecting the theory to something adjacent to our current scientific understanding. It’s just a lead, though, and much more work must be done to establish a sufficiently robust theory. And even if such a theory were established, still more work would be required to ensure Bentov’s ideas are correct. With respect to physics and Earth sciences, he may be mistaken about one or more of the premises he uses to generate his theory. We will have to see.

In Chapter 5 of Stalking the Wild Pendulum, titled ‘Quantity and Quality of Consciousness’, Bentov begins with a definition of ‘consciousness’. Though simple, it does the job: “it is the capacity of a system to respond to stimuli.”6 Previously, I have defined ‘consciousness’ as something like an awareness of internal and external environments, so it seems that Bentov and I are on the same page here. Bentov goes on to state that though we commonly think of these kinds of systems as nervous systems, the definition applies to all sorts of systems, even non-living ones. He anthropomorphizes atoms by stating that they “respond” to stimuli like the presence of electromagnetic radiation, since their electrons jump into higher orbits as a result. His next examples involve bacteria and viruses, which is less of a metaphorical stretch. He concludes that “the higher and more complex the organism, the more varied and the more numerous the responses per stimulus.”7 Sure, but an atom isn’t an organism, so I’m still not sure about that one; I will grant the stretch to bacteria and maybe viruses. Anyway, he goes on to state that the quantity of consciousness refers to the number of responses exhibited for the same stimulus.

He does address the reader’s probable concern about atoms, stating that “we may at first have trouble trying to visualize a rock or an atom as a living thing because we associate consciousness with life.”8 Right, and arguably for good reason; still, let’s see where this train of thought takes us. For the sake of charity and open-mindedness, I will grant him the stretch despite remaining sceptical. He thinks that our discomfort with this idea is a human limitation, and that in a similar way, a rock might have trouble understanding human consciousness. To Bentov, “consciousness resides in matter,”9 which starts to sound like panpsychism. Different amounts of matter or mass contain differing amounts of consciousness, where quantity and quality are determined through “different evolutionary levels.”10

The quality of consciousness is associated with “the level of consciousness” determined by the “frequency-response range of the system.”11 By this, he refers to the range of frequencies an agent can detect; for example, human hearing exists within the range of about 20 or 30 Hz to about 20,000 Hz.12 The wider the range, the higher the level of consciousness. Bentov also describes quality in terms of the intelligence of the response, or the number of responses the agent is capable of producing.13 He further describes this as refinement, where the agent is able to understand and possibly report on many aspects of an experience rather than a few. Consider the sommelier versus the novice wine-drinker: the sommelier is able to pick out notes of cherry, pine, or smoke, while the newbie simply tastes sweet or sour. Different organisms or agents are capable of differing levels of consciousness, with a worm or fruit fly at a lower frequency-response range than an elephant or human.14 To illustrate these levels, Bentov supplies a handy diagram which I have recreated by hand:

The energy-exchange curves “show the extent of energy exchange between an entity and its environment” where “the maximum energy exchange, or interactions of [human] beings with their environment, occurs at the peak of the curve. It is [our] point of resonance with the environment.”15 He later states that “at the highest levels, this means control over the environment”16 which suggests that humans have more control over the environment than rocks and grasshoppers. Differing levels of consciousness ultimately mean differing realities. The reality the grasshopper experiences is simpler and less refined than the reality humans experience.

The astral level is the dream level, an idea which is not unique to Bentov. I have encountered it in reading about spirituality and ancient religions, and to some, it is not all that foreign or esoteric. When we dream during REM sleep, we supposedly transcend our physical reality into a different one. This is why the energy-exchange curves overlap: different levels of reality and control are not discrete, and can overlap with each other to a degree.

In Chapter 6, Bentov explores “relative realities” by first considering plants. Like animals, they grow and reproduce, purportedly responding to human emotions as per the work of Peter Tompkins and Christopher Bird in The Secret Life of Plants.17 Maybe their studies don’t replicate or rely on faulty assumptions, but let’s ignore that for a moment. Plants also move in response to stimuli; recall the sunflower as it follows the sun. From here, Bentov discusses animals and their characteristics, noting how these organisms can be viewed as a range of complexity given their stage in evolutionary development. There isn’t much said here that greatly conflicts with biological perspectives, so I won’t go into detail. He also discusses drugs and human experiences, as they allow us to access higher levels of consciousness, another uncontroversial perspective for the initiated. Next, Bentov discusses particle-wave duality to motivate his further discussion of consciousness.18 Most of us are vaguely familiar with this idea; photons can be considered as particles or waves. A particle, articulates Bentov, is a wave-packet, similar to a little “bubble” of waves confined to a specific space through individuation. Kind of like water in the form of rain versus in the form of a puddle. Within a body of water, like a puddle or ocean, water molecules are everywhere and unindividuated, but in rain, they become individuals as droplets.

Bentov now jumps to a discussion of individuation in nature.19 A mountain valley, roughly speaking, consists of a sheet of rock acting like a continuum. Now, within this valley, let’s say we have a massive boulder. Though once part of the valley, it has broken off to become its own “individual.” His next move is to claim that “matter is consciousness,” only to add in parentheses that “matter contains consciousness,”20 a claim which overlaps with panpsychism. If there is a critical mass of matter, it is able to develop a “dim awareness of self.” It was Chalmers who stipulated that consciousness arises from functional organization, and it isn’t a stretch to consider that very simple organizations, like rocks, may still have some level of consciousness. Maybe they don’t feel pain or know the taste of lemon, but some form of qualia or consciousness theoretically exists even within simple organizations. Chalmers’ move was to say that certain qualia will appear when certain functional organizations are obtained. Returning to Bentov: he states that “over millions of years this dim awareness may be strengthened into a sharper identity, possibly through interaction with other creatures. If an animal finds a hiding place within a rock formation, it will feel grateful to the rock for shelter, and the rock will feel it.” The animal’s consciousness, higher than the rock’s, “will give its [the rock’s] consciousness a push upward.”21 Eventually, this consciousness of the rock becomes “a spirit of the rock,” and its energy-exchange curve will reach into the astral region.22 When a man stumbles upon this rock, he will feel or sense that there is something special about it, becoming impressed or impacted in some manner. Strong claims indeed.

As a quick aside, Bentov describes the ways in which our thoughts impact aspects of the environment. Because devices and instruments like EEGs can pick up the electromagnetic energy emitted by neurons, and because no energy is lost within the universe but merely transferred, “it means that the energy of thought was broadcast in the form of electromagnetic waves, at the velocity of light into the environment and, finally, into the cosmos.”23 Moreover, since thought can be focused, it is possible to “send coherent thoughts” to be received by the person “for whom the thought was meant.”24 This applies to the rock as well. The rock, becoming more aware and ascending in quality of consciousness, is slowly able to increase its control over the environment. The example Bentov uses is a copper mine, where ceremonies held for the rock inspire it to perform certain actions.25 In Peru, he states, llamas would be sacrificed to protect miners, preventing a collapse of the mine. Otherwise, the rock is able to perform “tricks” like collapsing the mine onto the humans digging for copper.

“In a way, all these minor gods or Nature spirits rely on the energy they get from others to keep themselves powerful. Just like politicians, their power and influence depends on their constituency. Eventually, as their constituency diminishes, they fade from the scene, as did the gods of ancient peoples, for examples, the Baal, the Moloch, the Greek and Roman gods, etc.”26

It’s interesting that Bentov chose those two gods to begin his list of examples… It’s even more interesting that poor Bentov would die as a result of the famous crash of American Airlines Flight 191.

Anyway, the chapter finishes with a discussion of the astral realm, communicating with the dead, and human creativity. I will let you read these bits yourself as they aren’t exactly related to this discussion of panpsychism. My goal here was to establish an interesting connection between panpsychism and some sort of further explanation of it, as uncovered in Bentov’s Stalking the Wild Pendulum. Though incomplete, it’s at least a lead for further study, as it overlaps with ancient religions and our current understanding of physics. Sure, some of Bentov’s claims are worthy of scepticism; however, I find this more compelling than panpsychism alone. Recall that one of the 7 Hermetic Principles is that “all is mental, all is mind.” It’s a difficult idea to accept given our current veneration of science and empiricism, but maybe we have been mistaken. Maybe there is more to life than the material world. I don’t know for sure; however, my personal experiences have hinted otherwise. You, dear reader, have your own experiences; listen to them. Remain open yet critical. The truth is somewhere in the middle.

Part two of this series investigates the Quantum Theory of Consciousness and Bentov’s discussion of quantum mechanics. I might also get into his ideas regarding Earth sciences, depending on how much overlap is involved; otherwise, they will be saved for a separate post. There is so much to say about these crazy theories that I must break it up into separate entries. I’m also in the midst of reading Spinoza, which has been very interesting and rewarding, and someday I will establish further connections between his philosophy and these strange ideas. For more info on panpsychism, check out this excellent YouTube video by Carneades.org.

neuralblender.com

Works Cited

1 William Seager, Theories of Consciousness : An Introduction and Assessment (Routledge, 2016), 287, https://doi.org/10.4324/9780203485583.

2 Seager, 288.

3 David Skrbina, Panpsychism in the West (The MIT Press, 2017), 11, https://doi.org/10.7551/mitpress/11084.001.0001.

4 Seager, Theories of Consciousness, 317.
5 Seager, 291.

6 Itzhak Bentov, Stalking the Wild Pendulum: On the Mechanics of Consciousness (Rochester, Vermont: Destiny Books, 1988), 77.


7 Bentov, 77.
8 Bentov, 78.
9 Bentov, 78.
10 Bentov, 78.
11 Bentov, 78.
12 Bentov, 79.
13 Bentov, 79.
14 Bentov, 85.
15 Bentov, 80.
16 Bentov, 83.
17 Bentov, 93–94.
18 Bentov, 97.
19 Bentov, 98.
20 Bentov, 98.
21 Bentov, 99.
22 Bentov, 99.
23 Bentov, 100.
24 Bentov, 101.
25 Bentov, 101.
26 Bentov, 102.

Quick Update

It’s hard to believe it is already December; this year seemed to fly by so quickly. Since my last post in late August, I’ve been primarily focusing on finishing up my thesis, which is why I haven’t added anything new here in quite a while. There are a few things I want to write about, but I really didn’t want to spend time working on them until I was closer to finishing. I have also been distracted with some sewing that I had put off for months, if not years. My job has inspired me to start up again, and of course, has also inspired me to begin working on additional projects. I am trying to finish what I have started before beginning something new, but that doesn’t always go very well. Eventually, photos will be posted. I’ve also been a little burnt out with writing, so the transition back to the material world has been a welcome one.

Currently, I have a rough draft of my thesis ready for review, and in the new year, we will be setting a date to defend. I am very much looking forward to graduating so I can start thinking about a new YouTube video. It will probably be on my thesis, namely robot empathy and experience, but we’ll see. Many of these blog posts will also likely become videos in the future.

Throughout my research, a number of happy accidents and strange coincidences have occurred, with the latest one appearing a few months ago. I wanted to write on it back then, but it’s a minor point that isn’t all that ground-breaking today. It would have been when Hubert Dreyfus was writing on it. Essentially, it’s this idea that there are two forms of knowledge: explicit or fact-based knowledge, and what Dreyfus calls know-how.1 I had written a post on the idea of belief involving a relationship, citing a paper by Iain McGilchrist which distinguishes between the French terms savoir and connaître. Here again, reading Dreyfus’s What Computers Still Can’t Do, I see this distinction being made. Dreyfus’s argument is that it is impossible or nearly impossible to transform know-how into explicit knowledge which could be programmed into a computer. He appeals to phenomenology to demonstrate why this is the case, specifically that human life is organized around subjective experiences of the world. In a nutshell, the idea is that we ultimately act according to a world we have unconsciously or subconsciously modeled internally, based on how it appears to the experiencing individual. For example, I don’t think about placing one foot in front of another as I walk; my attention is instead on what I want to eat later, and walking is merely an automatic skill I acquired as a baby. These kinds of skills are difficult to program into a computer using symbolic reasoning, says Dreyfus, so an approach involving robots or AIs which learn from experience is a better path to take.

There isn’t much else to say on this distinction, other than that it appeared again months after my post on McGilchrist. Connecting this idea of know-how back to belief, where belief “emerges through commitment and experience,”2 we trust that our bodies and the bodies of others can perform some action or feat. For example, even if I have never made a certain recipe before, I believe I can do it, or I believe that my Grandma can make it, given our experiences with other recipes. The truth of this becomes apparent when I “make a move”3 to give it a go; it either turns out alright or it doesn’t.

Anyway, the post I am working on currently is a connection between some ideas about consciousness discussed in Bentov’s Stalking the Wild Pendulum and panpsychism. There is a neat overlap that can be used to create an explanation for the feasibility of panpsychism, depending on how much Bentov’s ideas resonate with you. I used to be very dismissive of panpsychism until I read Bentov’s book, and so in an attempt to be charitable to some crazy ideas surrounding consciousness, I would like to explore these ideas with an open mind. Many people will not appreciate the degree of speculation and conjecture involved, but hey, life is short and very strange at times, so why not.

Works Cited

1 Hubert L. Dreyfus, What Computers Still Can’t Do: A Critique of Artificial Reason (Cambridge, Mass: MIT Press, 1992), xi.

2 Iain McGilchrist, “Cerebral Lateralization and Religion: A Phenomenological Approach,” Religion, Brain & Behavior 9, no. 4 (October 2, 2019): 328, https://doi.org/10.1080/2153599X.2019.1604411.

3 McGilchrist, 329.

Semantics and Syntax

Another missing puzzle piece has come to my attention. It has been difficult to articulate why robots like iCub don’t understand the world around them, as it seems that iCub’s body provides sensory data to ground the meanings of words like ‘cup’ and ‘shovel’. The cameras which operate as its eyes take in visual information to be associated with language, and yet Dr. Haikonen repeatedly stresses that such robots do not understand what the words mean. Computers, including the one controlling iCub’s body, only work with syntax, the rules which govern how sentences are formed. As such, incoming sensory data doesn’t really ground the words iCub learns, because its sensory data just consists of more symbols; it’s just symbols all the way down. The 1s and 0s which constitute its sensory data do not contain any semantic information about words, since these aren’t experiences but simply representations of experiences.
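To make the “symbols all the way down” point concrete, here is a minimal sketch of my own; the function, thresholds, and data are invented for illustration and have nothing to do with iCub’s actual software. A camera frame arrives as bare integers, and “recognizing” a colour is just a rule mapping those symbols to another symbol:

```python
# Hypothetical sketch: a "camera frame" is just an array of integers,
# and classification maps those symbols onto yet another symbol (a word).

def classify_pixel(rgb):
    """Map a raw (R, G, B) triple to a colour word -- symbol to symbol."""
    r, g, b = rgb
    if r > 200 and g < 100 and b < 100:  # arbitrary, made-up thresholds
        return "red"
    return "unknown"

frame = [(230, 40, 30), (10, 10, 10)]  # two "pixels" of raw sensor data
labels = [classify_pixel(p) for p in frame]
print(labels)  # ['red', 'unknown']
```

Nothing in this little pipeline touches anything like an experience of red; the word that comes out the end is grounded only in more numbers.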

Aren’t bodily sensations just neural representations of external stimuli, and thus themselves representations made up of binary values? Perhaps, but animal physiology is physically and functionally arranged in a manner which generates meaning: what does the bee sting mean to me, as a living body capable of being damaged and ultimately dying? For a robot like iCub, damage means nothing; it doesn’t care whether its arm is removed or it gets hit in the head. Its architecture doesn’t allow for meaning to be generated; it’s just a computer that looks like a human. Could iCub’s architecture be modified in a way which includes pain signals to generate meaning? Arguably no, for reasons I will now attempt to articulate. I’m still working on a satisfactory explanation.

The passage which caught my attention is from Dr. Haikonen’s book The Cognitive Approach to Conscious Machines. I will provide the block quote because summarizing it would remove its flavour, an essence which helps illustrate the issue I’ve been wrestling with.

I make here a bold generalization and claim that syntax cannot be completely separated from semantics. Every now and then vertical grounding to inner imagery and percepts of the external world is needed in order to resolve the meaning of a sentence. Therefore proper perception of external world entities and their relationships and the formation of respective inner representations are absolutely necessary. A syntactic sentence is a structured description of a given situation. If the system is not able to produce and properly bind all the required percepts then it will not be able to produce a proper verbal description either and vice versa, the system will not be able to decode the respective sentence. Artificially imposed rules alone will not solve every problem here.1

To clarify that last sentence, this is the case due to reasons Rosen discusses, that formal systems, in this case syntax, create an abstraction from the natural systems referred to in semantics. Rules alone do not completely represent the natural systems they aim to model. As Haikonen states, syntax provides a structured description of a situation, where these situations are necessarily part of natural systems. By “vertical grounding,” Haikonen means the associations between percepts or sensory information and the words used to describe them.

Immediately I can see that indeed, semantics cannot be completely represented by syntax or formal rules, given my familiarity with Rosen’s explanation in Anticipatory Systems. I can also see, however, a horde of philosophy of language professors taking issue with this claim. Which ones, I do not know; I only know one such professor, and he may not be sure whether the generalization is true, or in which cases it might be false. I’ll have to ask him and see what he says. Either way, it is a bold claim and a difficult one to explain; Rosen appeals to Category Theory from pure mathematics to do so, but for those who are uninterested in or put off by mathematical theory, it might not provide a very compelling explanation.

Computerized robots like iCub are rule-based structures all the way down, and even if they were provided with a means to care about their own bodies and survival, their interest would remain a simulation. The numerical representations which generate iCub’s “experiences” are fundamentally distinct from the analogue representations generated by the human body. By analogue, I mean the various physiological systems which give rise to phenomenal experience. After all, feelings of hunger may be represented by neuronal activity, but they are ultimately generated by the hormone ghrelin. Since hormones are chemical messengers carried throughout the body via the bloodstream, their functionality is distinct from that of neural networks.2

The reason is that analogue signals are continuous streams of information,3 where ‘continuous’ refers to the values between whole integers, such as fractions or decimal values. In contrast, modern computers use symbolic or representational methods to generate behaviours, where digital channels pass streams of information which are discrete, without intermediate values between whole integers.4 Consequently, digital machines count quantities while analogue machines measure quantities.5 The distinction is significant, and though the all-or-nothing firing patterns of neurons can be represented by binary values, the body and its phenomenal experiences cannot be fully reduced to a collection of neural signals.
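The counting-versus-measuring distinction can be illustrated with a toy quantizer. This is my own sketch, not drawn from any of the cited authors: a smoothly varying signal is forced onto a handful of discrete levels, and every intermediate value is simply discarded.

```python
import math

def quantize(x, levels=4, lo=-1.0, hi=1.0):
    """Snap a continuous value in [lo, hi] to one of `levels` discrete steps."""
    step = (hi - lo) / (levels - 1)
    return round((x - lo) / step) * step + lo

# A "continuous" signal (a sine wave), then forced onto 4 discrete levels.
signal = [math.sin(2 * math.pi * t / 10) for t in range(10)]
digital = [quantize(s) for s in signal]

# Only four distinct amplitudes survive; everything in between is lost.
print(sorted(set(round(d, 3) for d in digital)))  # [-1.0, -0.333, 0.333, 1.0]
```

A digital channel can always add more levels, but on this view it still counts steps, whereas the analogue signal measures the quantity itself.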

So what is this missing piece I stumbled upon? “If the system is not able to produce and properly bind all the required percepts then it will not be able to produce a proper verbal description either and vice versa…” The words and sentences we say have meaning because the rules of language are coupled with the meanings of the words we use. iCub may state “the truck is red” but it doesn’t understand because its version of ‘red’ is without semantic content. The word ‘red’ is not fully grounded because the sensations it appeals to are still symbols made up of discrete values. Instead, analogue signals are required to fully capture what-it-is-like to experience some stimulus, where these signals influence the system’s self-organizing behaviour for the sake of continued survival.

neuralblender.com

Works Cited

1 Pentti O. Haikonen, The Cognitive Approach to Conscious Machines (UK: Imprint Academic, 2003), 238–39.

2 Mark Andrew Krause et al., An Introduction to Psychological Science: Modeling Scientific Literacy (Pearson Education Canada, 2014), 102.

3 John Johnston, The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI (The MIT Press, 2008), 28, https://doi.org/10.7551/mitpress/9780262101264.001.0001.

4 Johnston, 28.

5 Norbert Wiener, The Human Use of Human Beings: Cybernetics and Society, 2d ed. rev. (Garden City, NY: Doubleday, 1954), 64.

Integrated Information Theory of Consciousness

The Integrated Information Theory (IIT) of Consciousness is a theory originally proposed by Giulio Tononi which has since been further developed by other researchers, including Christof Koch. It aims to explain why some physical processes generate subjective experiences while others do not, and why certain regions of the brain, like the neocortex, are associated with these experiences.1 To do so, Tononi appeals to information theory, a technical domain which uses mathematics to determine the amount of entropy or uncertainty within a process or system.2 Less uncertainty means more information, and complex systems like humans and animals contain more information than simpler systems like an ant or a camera. Relationships between information are generated from a “complex of elements,”3 and when a number of relationships are established, we see greater amounts of integration.4 Tononi states, “…to generate consciousness, a physical system must be able to discriminate among a large repertoire of states (information) and it must be unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts (integration).”5 This measure of integration is symbolized by the Greek letter Φ (phi) because the line in the middle of the letter stands for ‘information’ and the circle around it indicates ‘integration’.6 More lines and circles!
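The “large repertoire of states” half of that quote can be sketched with plain Shannon entropy. This toy is mine, loosely echoing Tononi’s photodiode-versus-camera illustration; actually computing Φ involves partitioning a system’s cause-effect structure and is vastly more involved than this.

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A photodiode distinguishes only light vs. dark (2 equally likely states);
# a sensor cell discriminating 256 equally likely states carries far more.
diode = [0.5, 0.5]
camera_cell = [1 / 256] * 256
print(entropy(diode))        # 1.0 bit
print(entropy(camera_cell))  # 8.0 bits
```

On IIT’s picture, this only measures the information side; the integration side asks how much of that repertoire is lost when the system is cut into independent parts.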

In addition to considering the quantity of information generated by the system, IIT also considers the quality of the information generated by its mechanisms. Both attributes determine the quality of an experience. This experience can be conceived of as a “shape” in a qualia space made up of elements and the connections between them.7 Each possible state of the system is considered an axis of the qualia space, each of which is associated with a probability of actually existing as that state. A particular quale consists of a shape in this state space, specifying the quality of an experience. Therefore, viewing a red object results in a particular state and shape in the qualia space, a mathematical object supposedly representing neuronal activity.8 As such, Tononi claims that his theory provides a way to describe phenomenology in terms of mathematics.9 Sure, but this doesn’t really explain much about consciousness or qualia; it just provides a mathematical description of them.

In later publications, he attempts to clarify the theory a bit further. Rather than starting from activity in the brain, his theory “starts from the essential phenomenal properties of experience, or axioms, and infers postulates about the characteristics that are required of its physical substrate.”10 The reason is that subjective experiences exist intrinsically and are structured in terms of cause-and-effect in some physical substrate like a brain. Experiences, therefore, are identical to a conceptual structure, one expressible in mathematics.11 By starting from axioms, which are self-evident essential properties, IIT “translates them into the necessary and sufficient conditions” for the physical matter which gives rise to consciousness and experience.12

Not satisfied? I hear ya barking, big dog. When I first heard about it, I was intrigued by the concept but was ultimately unimpressed because it doesn’t explain anything. What do we mean by ‘explain’? It’s one of those philosophically dense concepts to fully articulate, however, a dictionary definition can give us a vague idea. By ‘explain’, we mean some discussion which gives a reason or cause for something, demonstrating a “logical development or relationships of” the phenomenon in question,13 usually in terms of something else. For example, the reason it is sunny right now is because the present cloud coverage is insufficient for dampening the light coming from the sun. Here, ‘sunny’ is explained in terms of cloud coverage.

We are not alone in our dissatisfaction with IIT. On Sept. 16th, 2023, Stephen Fleming et al. published a scathing article calling IIT pseudoscience.14 In their view, IIT is “untestable, unscientific, ‘magicalist’, or a ‘departure from science as we know it’” because the theory can apply to many different systems, like plants and lab-generated organoids.15 They state that until the theory is empirically testable, the label of ‘pseudoscience’ should apply, to prevent misleading the public. The implications of IIT can have real-world effects, shaping the minds of the public about which kinds of systems are conscious and which are not, for example, robots and AI chatbots.

One of the authors of this article went on to publish a longer essay on the topic to a preprint server on Nov. 30th of that same year. Keith Frankish reiterates the concerns of the original article and further explains the issues surrounding IIT. To summarize, the axiomatic method IIT employs is “an anomalous way of doing science” because the axioms are neither founded on nor supported by observations.16 Instead, they appeal to introspection, an approach which has historically been dismissed or ridiculed by scientists because experiences cannot be externally verified. The introspective approach belongs to the domain of philosophy, more akin to phenomenology than to science. Frankish grants that IIT could be a metaphysical theory, like panpsychism, but if this is the case, it is misleading to call it science.17 If IIT proponents insist that it is a science, well, then it becomes pseudoscience.

As a metaphysical theory, it isn’t all that great, in my opinion. It doesn’t add much to our understanding: the mathematical theory is rather complex and doesn’t provide a method for associating itself with scientific domains like neuroscience or evolutionary biology. It attempts to, but it remains explanatorily unsatisfactory.

That said, the general idea of “integrated information” for consciousness isn’t exactly wrong. My perspective on consciousness, based on empirical data, is that consciousness is a property of organisms, not of brains. There are no neural correlates of consciousness because it emerges from the entire body as a self-organizing whole. It can be considered a Gestalt which arises from all of our sensory mechanisms and attentional processes for the sake of keeping the individual alive in dynamic environments. While the contents of subjective experience are private and unverifiable to others, that doesn’t make them any less real than the sun or gravity. They can be incorrect, as in the case of illusions and hallucinations; however, the experiences as experiences are very real to the subject experiencing them. They may not be derived from sense data portraying some element of the natural world, as in the case of visual illusions; however, there is nonetheless some physical cause for them as experiences. For example, the bending of light creates a mirage; the ingestion of a substance with psychoactive effects creates hallucinations. The experiences are real, but their referents may not exist as aspects of the external world, and may just be artifacts of other neural or physiological processes.

I’ve been thinking about this for many years now, and since the articles calling IIT pseudoscience were published, have been thinking some more. Hence why I’m a bit “late to the game” on discussing it. Anyway, once I graduate from the PhD program, I’ll begin work on a book which explains my thoughts on consciousness in further detail, appealing to empirical evidence to back up my claims. I have written an extensive discussion on qualia, accompanied by a video, aiming to present a theory of subjective experiences from a perspective which takes scientific findings into consideration.

My sense is that, for a long time, our inability to resolve the issues surrounding qualia and consciousness was a product of academia. We’re so focused on specialization that the ability to incorporate findings and ideas from other domains is lost on many individuals, or is just not of interest to them. I hope we are slowly getting over this issue, especially with respect to consciousness, as philosophy of mind has a lot to learn from other domains like neuroscience, psychology, cognitive science, and evolutionary biology, just to name a few.

Consciousness is a property of organisms like humans and animals for detecting features of the environment. It comes in degrees; a sea sponge is minimally conscious, while a gecko is comparatively more aware of its surroundings. Many birds and mammals demonstrate a capacity for relatively high-level consciousness and thus intelligence. Obviously humans are at the top of this pyramid, given our mastery over aspects of our world as seen in our technological advancements. Consciousness, as an organismic-level property, emerges from the coordination and integration of various physiological subsystems, from systems of organs to specific organs and tissues, all the way down to cells and cellular organelles. It is explained by the interactions of these subsystems but cannot be causally reduced to them. Though the brain clearly plays an important role in consciousness and subjective experiences, it is a mistake to look for the causal properties of consciousness in a particular brain region or circuit. Consciousness is an emergent property of bodies embedded within a wider physical environment.

From this perspective, we can and have developed an analogue of consciousness for machines, as per the work18 of Dr. Pentti Haikonen. The good news is that because this machine doesn’t use a computer or software, you don’t need to worry about current AIs becoming conscious and “taking over the world” or outsmarting humans. It physically isn’t possible, and the recent discussions I’ve posted aim to articulate ontologically why this is the case. You ought to be far more afraid of people and companies, as explained by this excellent video from the YouTube channel Internet of Bugs.

Lastly, I want to extend a big Thank You to Dr. John Campbell for inspiring me to work on this explanation of consciousness, as per the helpful comment he left me on my qualia video. I recommend following Dr. Campbell on YouTube; he is a fantastic researcher and educator, in addition to being an honest, critically-thinking gentleman who covers many interesting topics related to healthcare.

A Scholar in his Study by Thomas Wijck (1616 – 1677)

Works Cited

1 Giulio Tononi, “Consciousness as Integrated Information: A Provisional Manifesto,” The Biological Bulletin 215, no. 3 (December 1, 2008): 216, https://doi.org/10.2307/25470707.

2 Tononi, 217; Norbert Wiener, Cybernetics or Control and Communication in the Animal and the Machine, Second (Cambridge, MA: The MIT Press, 1948), 17, https://doi.org/10.7551/mitpress/11810.001.0001; C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal 27, no. 3 (July 1948): 393, https://doi.org/10.1002/j.1538-7305.1948.tb01338.x.

3 Tononi, “Consciousness as Integrated Information,” 217.
4 Tononi, 219.
5 Tononi, 219.
6 Tononi, 220.
7 Tononi, 224.
8 Tononi, 228.
9 Tononi, 229.

10 Giulio Tononi et al., “Integrated Information Theory: From Consciousness to Its Physical Substrate,” Nature Reviews Neuroscience 17, no. 7 (July 2016): 450, https://doi.org/10.1038/nrn.2016.44.

11 Tononi et al., 452.
12 Tononi et al., 460.

13 “Explain,” in Merriam-Webster.Com Dictionary (Merriam-Webster), accessed August 10, 2024, https://www.merriam-webster.com/dictionary/explain.

14 Stephen Fleming et al., “The Integrated Information Theory of Consciousness as Pseudoscience” (PsyArXiv, September 16, 2023), https://doi.org/10.31234/osf.io/zsr78.

15 Fleming et al., 2.

16 Keith Frankish, “Integrated Information Theory: Pseudoscience or Appropriately Anomalous Science?” (OSF, November 30, 2023), 1–2, https://doi.org/10.31234/osf.io/uscwt.

17 Frankish, 5.

18 Pentti O Haikonen, Robot Brains: Circuits and Systems for Conscious Machines (John Wiley & Sons, 2007); Pentti O Haikonen, “Qualia and Conscious Machines,” International Journal of Machine Consciousness, April 6, 2012, https://doi.org/10.1142/S1793843009000207; Pentti O Haikonen, Consciousness and Robot Sentience, vol. 2, Series on Machine Consciousness (World Scientific, 2012), https://doi.org/10.1142/8486.

Virtuous Circles

Prior to the advent of artificial intelligence, cybernetics introduced a theory for the development of autonomous agents through functional circularity. These systems use their own outputs as inputs, generating feedback loops1 that create a self-regulating process capable of maintaining autonomy and stability.2 A simple example is the thermostat, which uses sensors to measure air temperature.3 The reading is compared to a set threshold, and heat is generated if the value is below the desired temperature. Once the heat has increased to meet the threshold, the thermostat detects the change and shuts off heat production. If the temperature drops below the threshold again, the thermostat repeats the process automatically.
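The thermostat’s feedback loop can be sketched in a few lines of code. This is a toy simulation with invented numbers (setpoint, heating and cooling rates are mine, purely illustrative), not something from the cited works:

```python
def thermostat_decision(temperature, setpoint):
    """Compare the sensed temperature to the set threshold:
    heat on when below the setpoint, off otherwise."""
    return temperature < setpoint

def simulate(setpoint=20.0, temperature=15.0, steps=10):
    """Toy dynamics: heating raises the temperature, the room
    otherwise cools, and each output (the room temperature)
    feeds back in as the next input, closing the loop."""
    history = []
    for _ in range(steps):
        heating = thermostat_decision(temperature, setpoint)
        temperature += 1.0 if heating else -0.5  # crude physics
        history.append(temperature)
    return history
```

Run long enough, the simulated temperature settles into a small oscillation around the setpoint, which is exactly the self-regulation the cybernetic literature describes.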

Since ancient Greece, circular causality has been considered problematic given its tendency to produce logical paradoxes.4 One example is “This statement is false”: if the statement is true, then it is false, as that is what it asserts. If it is evaluated as false, this again produces a contradiction, because a false statement declaring itself false is thereby true. In this instance of self-reference, the contradiction cannot be resolved.
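The dead-end can be made explicit in code. A hypothetical one-line evaluator shows that no truth value assigned to the statement survives its own evaluation:

```python
def evaluate_liar(assumed_value):
    """'This statement is false': the statement asserts the
    negation of whatever truth value we assign to it."""
    return not assumed_value

# Neither assignment is self-consistent: assuming True yields
# False, and assuming False yields True.
```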

As mentioned in a previous post, some instances of self-reference are not paradoxical, particularly with respect to living organisms. A cat can learn to pounce by noting the difference between its desired end-state, to leap on top of a toy for example, and its actual end-state, which falls short and misses it. The reason this doesn’t produce a paradox is because the cat, as a natural system, contains far more complexity than the statement above. Visual and tactile cues can be reinterpreted by its relatively sophisticated brain to adjust its muscle movement on the next jump, reducing the error until it lands successfully.
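The cat’s error-driven learning can be caricatured as a simple corrective loop, loosely in the spirit of proportional control. The target, gain, and starting jump below are invented for illustration:

```python
def practice_pounce(target=2.0, first_jump=1.2, gain=0.5, attempts=6):
    """Each attempt measures the error between the desired
    end-state (target) and the actual landing point, then
    feeds a fraction of that error back into the next jump."""
    jump = first_jump
    misses = []
    for _ in range(attempts):
        error = target - jump      # desired vs. actual end-state
        misses.append(abs(error))
        jump += gain * error       # adjust the next attempt
    return misses
```

With these numbers each successive miss is half the previous one, so the self-reference shrinks the error toward zero instead of looping back into a contradiction.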

The cat contains parts which act as wholes, while the statement does not; its parts are simple and concrete. The nervous system can be considered a whole, acting as a functional unit within a larger system, the animal as a self-contained individual. The statement, however, generates a paradox because it hits a dead-end, so to speak, as there isn’t any additional functionality to resolve the act of self-reference. Essentially, it contains only two parts: the statement and a binary value.

The argument I am proposing is that formal systems, even complex ones, are two-dimensional, while natural systems, even simple ones, are three-dimensional. The two dimensions are structure and information; in the paradox example above, the structure refers to the statement and the information is the binary value. Natural systems, however, include structure, information, and energy. This third dimension, energy, is added as a consequence of the type of structure involved. I’m still working on this part and I might change my mind on this.

I can see why esoteric wisdom is presented in metaphor; it’s difficult to articulate abstract metaphysical ideas. Following suit, let’s think about a circle in two dimensions versus three dimensions.

A line segment which loops back upon itself creates a circle. The metaphor is a snake which eats its own tail and dies of starvation; the paradoxical statement “dies” as a result of the contradiction it generates. It cannot be true and not true, breaking itself in the process.

Ouroboros photo by Leo Reynolds

Fun fact: you can save a snake from starving to death in this situation by holding up a vessel of strong rubbing alcohol to its nose. This triggers its gag reflex which frees its tail.

In three dimensions, however, we gain a new axis, one which allows for upward movement. The circle becomes a spiral as it gains height in this new dimension. Though we are back to where we started, something has been added, and we can thus look down to see the vertical distance traveled. Alternatively, the spiral can descend, coming full circle but on a lower plane. The former is considered a virtuous circle while the latter is a vicious circle.

Vizcaya Museum and Gardens in Miami, Florida
Photo by Mary Mark Ockerbloom

These examples may abstractly illustrate the point, but how are we to explain this using normal physics? The answer seems to reside in levels created by nested systems, creating irreducible parts to be leveraged in cases of self-reference.

For example, the nervous system is an integrated whole5 which responds to the organism’s own actions. Selecting a particular pathway provides a linear input-output process, for example, from sensory mechanisms in the skin, through the nerves in the spinal cord, into the primary sensory cortex in the brain, to the motor cortex, and back down again to the hand. The system can be represented as a simpler interaction but it cannot be reduced to a simpler system. As a unit, the nervous system contains additional functionality which the parts do not possess. Leveraging other aspects of the nervous system, say visual information, provides a “way out” for any situation which may cause a paradox of sorts.

At least this is what I’m thinking for now. A complex whole comprised of subsystems can account for self-reference. Can a formal system, even comprised of a number of complex subsystems, account for self-reference? Perhaps in some cases, if the system is “looking inward” from a broader scope than the thing it is referring to. In cases where self-reference fails, it might be due to a need to “move outward” and reference something beyond the current scope. It might also be due to a logical paradox from within, as seen in Gödel’s Incompleteness Theorem, or perhaps due to the fact that formal systems cannot account for semantics. I’m not exactly sure at this point. I am going to keep working on this but for now, here’s an attempt at clarification.

neuralblender.com

Works Cited

1 Francis Heylighen and Cliff Joslyn, “Cybernetics and Second-Order Cybernetics,” in Encyclopedia of Physical Science and Technology (Third Edition), ed. Robert A. Meyers (New York: Academic Press, 2003), 160, https://doi.org/10.1016/B0-12-227410-5/00161-7.

2 John Johnston, The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI (The MIT Press, 2008), 26, https://doi.org/10.7551/mitpress/9780262101264.001.0001.

3 Norbert Wiener, Cybernetics or Control and Communication in the Animal and the Machine, Second (Cambridge, MA: The MIT Press, 1948), 131, https://doi.org/10.7551/mitpress/11810.001.0001.

4 Thomas Fischer and Christiane M. Herr, “An Introduction to Design Cybernetics,” in Design Cybernetics: Navigating the New, ed. Thomas Fischer and Christiane M. Herr (Cham, Switzerland: Springer International Publishing, 2019), 2, https://doi.org/10.1007/978-3-030-18557-2_1.

5 Wiener, Cybernetics, 13.

Whitepill 5: Collapse

Part of a series on ways to survive this dystopian nightmare

It has been predicted by an infamous individual that our future holds two possible scenarios: societal or civilizational collapse, or technological enslavement à la Brave New World. If you have not read this book, I recommend you make it a priority.

It seems that as a global society, we are too incompetent to enact a Brave New World-like scenario. There are not enough individuals with the requisite know-how to move us into this outcome, and though movement in that direction is underway, it is unlikely to materialize. A variety of factors, culminating over the 20th century and primarily in Western countries, have undermined the fostering of intelligence and creativity needed to develop the novel technologies and innovative solutions necessary for a BNW outcome.

Around the turn of the millennium, we stopped developing technological solutions for the sake of solving problems, and instead shifted to finding solutions which sought maximum profit. Nothing has changed since then, and things have probably only worsened. Corruption has bled into so many domains that we cannot move forward in any technologically meaningful way because the wrong people are in power or in key positions of leadership. Those who are capable of making real technological progress and contributions are either bogged down by bureaucratic mud or are disillusioned and give up.

Arguably, the theory necessary to develop and implement technological change took a back seat to pragmatic concerns too early in the Digital Revolution. Today, corruption and incompetence run the show. Though we may have some forms of “advanced technology” like deep learning and genetic modification, we are unable to really implement it to bring about a BNW outcome. Sure, we may see further movement toward this outcome in isolated incidents, however, on a larger scale, it is unlikely to take hold.

This corruption/competency crisis is why the Great Reset1 failed, and its failure enabled a critical mass to become aware of the plan to usher in a BNW outcome. Now we have a fighting chance, and it seems we are making gains. For example, the Digital Travel Credentials pilot project between Canada and the Netherlands, which used facial recognition,2 has been closed.3 We may see similar projects launch in the future, but for now this indicates a lack of forward movement, which is good news.

The reason we are headed toward a collapse is the instability of the system in which we live. This idea was originally presented in 1948 by mathematician Norbert Wiener, a father of cybernetics, in his book Cybernetics: or Control and Communication in the Animal and the Machine. In general, cybernetics is interested in communication and feedback processes, but Wiener extends his discussion in some interesting directions, one of which is politics. He states that “one of the most surprising facts about the body politic is its extreme lack of efficient homeostatic processes”4 where ‘homeostatic processes’ refers to functionality which achieves a state of equilibrium.5 While some believe the “free market” is a homeostatic process, it is actually a game and therefore follows “the general theory of games, developed by von Neumann and Morgenstern.”6 This means that every player acts in accordance with the information available to him at the time so as to maximize reward. With two players, “the theory is complicated,”7 but with three or more players, “the result is one of extreme indeterminacy and instability.”8 Though players may form coalitions, Wiener states that these coalitions do not lead to a degree of determinism but instead “usually terminate in a welter of betrayal, turncoatism, and deception,” which is also seen in business, politics, diplomacy, and war.9 Though we may aim for peace and stability, Wiener stresses that before long, someone is bound to break the agreement and cease cooperating.

The reason is plain old human psychology and a tendency to focus on factors which are irrelevant or inconsequential to the task at hand. Specifically, “…there are always the statisticians, sociologists, and economists available to sell their services to these undertakings.”10 He goes on to say that in small groups, homeostasis is easier to achieve, as fewer individuals must work together; however, when groups become larger, “ruthlessness can reach its most sublime levels.”11

Moreover, “of all of these anti-homeostatic factors in society, the control of the means of communication is the most effective and most important.”12 He admits that one of the lessons of his book is to remind us that societies are held together by the “possession of means for the acquisition, use, retention, and transmission of information.”13 The reason is that the means of communication, whether newspapers, radio, movies, schools, or churches,14 are dependent on funding. As a result, the actions which draw in the most revenue, like sensationalist stories or click-bait, are selected for and end up corrupting the media groups which provide information to the masses. Therefore, the “game of power and money” is one of the most anti-homeostatic elements in society.15 Given this instability, the game will inevitably end.

How quickly the collapse occurs is anybody’s guess. It could take decades or it could take days, it just depends on the factors involved. Que será, será.

Surfer in Santa Cruz, California
Wikimedia Commons Picture of the Day on Aug 4, 2024

Works Cited

1 Klaus Schwab, “Now Is the Time for a ‘Great Reset,’” World Economic Forum (blog), June 3, 2020, https://www.weforum.org/agenda/2020/06/now-is-the-time-for-a-great-reset/.

2 “Debates No. 335 – June 19, 2024 (44-1),” June 19, 2024, Question No. 2686(e), https://www.ourcommons.ca/DocumentViewer/en/44-1/house/sitting-335/hansard.

3 “Debates No. 335 – June 19, 2024 (44-1),” Question No. 2686(h).

4 Norbert Wiener, Cybernetics or Control and Communication in the Animal and the Machine, Second (Cambridge, MA: The MIT Press, 1948), 220, https://doi.org/10.7551/mitpress/11810.001.0001.

5 “Homeostasis,” in Merriam-Webster.Com Dictionary (Merriam-Webster), accessed August 3, 2024, https://www.merriam-webster.com/dictionary/homeostasis.

6 Wiener, Cybernetics, 220.
7 Wiener, 220.
8 Wiener, 221.
9 Wiener, 221.
10 Wiener, 222.
11 Wiener, 223.
12 Wiener, 223.
13 Wiener, 223.
14 Wiener, 223.
15 Wiener, 224.

Self-Reference

I am reading Autopoiesis and Cognition by Humberto Maturana and Francisco Varela for my thesis, and a significant connection leapt out at me from page 10. This section is written by Maturana, and his fourth point about living systems states:

“Due to the circular nature of its organization a living system has a self-referring domain of interactions (it is a self-referring system), and its condition of being a unit of interactions is maintained because its organization has functional significance only in relation to the maintenance of its circularity and defines its domain of interactions accordingly.”

This passage expands upon the nugget of wisdom supplied by Kurt Gödel as appealed to by Robert Rosen. Recall that Gödel used self-reference to show that mathematics is incomplete: a meta-mathematical statement can be constructed which is true yet unprovable within the system. Although Rosen appeals to syntax and semantics in Anticipatory Systems, the broader point concerns the differences between natural systems and formal systems. My ultimate goal is to articulate this relationship and its implications in more general terms, with a particular focus on comparing AI and machines to humans and animals. So far, I’ve been able to sketch some themes and ideas in relation to Rosen and this relationship, and much more work is required to put into words the ideas which currently exist only as intuitions. For now, however, I will document the process of how this all comes together, because the externalization of ideas will foster their articulation.

Though Rosen appeals to language, language is merely an attempt at portraying elements of the world as understood by its author or speaker. Maturana’s passage is the missing link in a wider explanation of the phenomenon in question. Where does this incompleteness come from? Why is it that AI cannot ontologically compete with human intellect? The answer has to do with scope and the way wholes can be greater than the sum of their parts.

In biology, organisms are made up of various self-organizing processes which aim to support the continued survival of the individual. Although comprised of nested levels of physiological processes, a person is greater than the sum total of his physicality. In some ways, the idea of self is highly complex and philosophically dense, but seen through the lens of biology, the self refers to an individual as contained by its own body. All living things have a boundary within which their processes take place, delineating them from the rest of the environment as units. Arguably, the nervous system evolved to provide individuals with information about their internal and external environments for the sake of continued survival. By responding to changes in the environment, the individual can take actions which mitigate these changes.

Physiological processes can be described by a sequential series of steps or actions taken within some system. In Rosen’s terms, a formal system can be generated from a natural system; however, it produces an abstraction which ignores all but the elements necessary for producing some outcome or end state. For example, when it comes to predicting tomorrow’s temperature, some meteorological elements will be taken into consideration, such as wind patterns and atmospheric moisture levels, while other aspects of the Earth can be ignored as they don’t influence how temperatures manifest; something related to plate tectonics or spruce tree populations, perhaps. When scientists generate weather and climate models, they only include variables which impact the systems they are interested in studying. The model, as described by mathematics, can be seen as a set of relations and calculations which provides an output, and in this way exists as a sequence of steps to be taken. If one were to write out these steps, they’d have something which resembles an algorithm or piece of computer code.
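To illustrate, here is what such a written-out model might look like. The variables and coefficients are entirely invented; the point is only that everything outside the function’s inputs, plate tectonics included, lies beyond the model’s scope by construction:

```python
def predict_tomorrow_temp(today_temp_c, wind_speed_kmh, humidity):
    """Toy formal system: a fixed sequence of calculations over
    a handful of selected variables, producing one output."""
    wind_effect = -0.1 * wind_speed_kmh       # wind cools, crudely
    moisture_effect = 2.0 * (humidity - 0.5)  # moist air retains heat
    return today_temp_c + wind_effect + moisture_effect
```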

If additional information is required which has not been accounted for by the model, it would be inaccessible, as it remains beyond the scope of the existing model. In some cases, the model can be expanded to include this variable, say by including spruce tree populations; however, Rosen’s point is that no amount of augmentation will produce a model which completely represents the natural system in question. The natural system will always contain aspects which cannot be properly accounted for by formal systems, and the example he uses is semantics. This becomes apparent with indexicals, as ‘me’ or ‘today’ is rather difficult to articulate without appealing to the wider context or situation where it is used. To understand who or when is being referred to, the interpreter must appeal to their knowledge and understanding to fill in the blank, moving beyond the words themselves.

These ideas of circularity and sequential steps had me thinking of the rod and the ring again. I made a connection to this apparent duality in another post; lo and behold, here it is again. In fact, I’ve made reference to a number of blog entries within this very post, and as such, we see self-organization and coalescence here too. All of these writings, however, are made up of a series of passages, sentences which attempt to present ideas in a sequential form. As a relatively formless mass, for now at least, the ideas presented here and in other posts currently exist as a nebulous collection of related topics. One day, I hope to turn it into a more linear and organized argument which doesn’t frustrate the reader as much as it surely does now. “Where are you going with this…??” Something to do with nested systems, parts and wholes, and how self-organizing systems can be described as a series of linear steps without being reducible to them.

How do we expand outward beyond the current scope? Self-reflection. In fact, our capacity for self-reflection was probably made possible by our social nature. Others act as a mirror: we can see ourselves through the eyes of someone else. The mirror-image is metaphorically reversed though, as we see ourselves from a new perspective, one coming from the outside-in rather than the inside-out. I’ve been thinking about Kant’s transcendental self lately, but this is a topic for another day.

All of this is for an argument about why we shouldn’t give robots and AIs rights and legal considerations. They are simply not the kinds of things which are deserving of rights because they are functionally distinct from humans, animals, and other living beings. Their essential nature is linear and sequential, not autopoietic. This distinction is not just other but ontologically lesser, a reduction arising from formal systems and human creation. As such, they pale in comparison to the complex systems observed in nature.

Belief is a Relationship

Like many philosophy students, the list of books I would like to read is quite long and only continues to grow. One of them is Iain McGilchrist’s The Master and His Emissary which discusses the difference between brain hemispheres and the specializations of each. He’s also written papers on the subject, one of which being ‘Cerebral Lateralization and Religion: a phenomenological approach’ which I have read. Overall, it’s very interesting but there is a particular section which struck me as rather significant for epistemology and mental health.

On page 328, under the heading ‘Knowledge, belief, and truth’, McGilchrist discusses the different kinds of knowledge handled by each hemisphere. While the left hemisphere specializes in collecting bits and pieces of information from a “general, impersonal, … and disengaged stance,” the right hemisphere specializes in uncertain, personal, and experiential knowledge which “resists generalization.”1 In this case, “the whole is not best understood by summing the parts.” He mentions this distinction is similar to the difference between the French verbs savoir and connaître: although both translate to ‘to know’, the kind of knowledge each refers to is unique. One refers to experiential knowledge while the other refers to propositional knowledge. The German language also notes this distinction with the words wissen and kennen.

McGilchrist goes on to explain how ‘belief’ is also subject to this differentiation. Though many use this word to refer to cognition and propositional knowledge, the etymological root of the term uncovers a kind of experiential knowledge. Particularly, ‘lief’ in Middle English describes a person who is “beloved, esteemed, dear”2 or, as McGilchrist states, “someone in whom one believed.” Similarly, in German, the word lieben means “to love.” Furthermore, the French verb for ‘to believe’ is croire, derived from the Latin term credere, meaning to “entrust to the care of.” McGilchrist states that “belief is about a relationship” where the “believer needs to be disposed to love, but the believed-in needs to inspire another’s belief.” This cannot be determined in advance but instead “emerges through commitment and experience.”

In contemporary usage, ‘belief’ often indicates an uncertainty about truth; however, this reconceptualization is a relatively recent one. McGilchrist states that “belief does imply truth” and appeals to the German term treu, which means ‘faithful’ and is also related to ‘trust’. The relationship he points out here is one characterized by trusting another, where one believes in another and, as such, trusts in them. Truth and belief are relational, deriving value from the context in which they are used or appealed to, in addition to being embodied and actively involving commitment. Today, however, we often think of ‘truth’ and ‘belief’ as detached and disembodied, where ‘truth’ is independent of our own selves, “immutable and certain.” McGilchrist characterizes this shift as a move from an understanding rooted in right-hemispheric thinking to a left-hemispheric one, and he warns that “belief and truth cannot always be achieved by simply sitting back and waiting passively for information to accumulate.”3 Instead, “some truths become understandable only when we have made a move to meet them.” [emphasis added]

So to summarize, both ‘knowledge’ and ‘belief’ come in two different flavours: one which is propositional and cognitive, and one which is experiential and relational. ‘Belief’ is not a weaker version of knowledge but an outcome of an activity grounded in love and acceptance. It is relational, as these feelings or dispositions arise from the interaction between the person who believes and the thing they believe in, uncovering or identifying truths from this committed relationship. This thing to be believed in may be another person; however, it also applies to the self. By accepting and appreciating our own thoughts and feelings as worthy of attention and consideration, we build up an understanding of ourselves as individuals, allowing us to realize our potential. If I believe I will graduate, I trust that I will take the steps necessary to complete my project and sufficiently defend it. I trust myself because I have accepted my strengths and weaknesses, allowing me to push forward when challenges arise.

In spiritual or religious contexts, this relationship is oriented outward to a domain or entity residing beyond the material world, however, it can also refer to a relationship to oneself. In Gnostic traditions, generally speaking, individuals come to know a divine or non-material domain only when one turns their attention inward to reflect on experience and understanding. In this way, a weaker form of ‘belief’, perhaps glibly characterized by a blind faith in some divine force or entity, can be strengthened by relying on one’s own knowledge and understanding to form a bridge into the world of the immaterial and unknown. By going through oneself, individuals can access a world beyond the physically experienced one to uncover truths which would otherwise be occluded by the physical world and its various authorities. Occult knowledge may be purposefully hidden, however, it seems this may simply reflect the reality of where this knowledge naturally resides. To reach this domain, the path one must take is through a healthy relationship with the self, where the beginning of this path is in acceptance and the analysis of one’s experiences and understanding.

The reason I wanted to discuss this segment from McGilchrist’s paper is that it highlights a fallacy in our modern, scientific world-view, one which suggests that truth is to be found from without. Certainly there are instances where this is the case, as the rate of gravitational acceleration has nothing to do with my experiences of it; however, subjective experiences of gravity do play a role in how it has been scientifically conceptualized. Our perceptions of the physical world provide us with a window into understanding the natural processes which occur regardless of our actions; a falling tree will still make a noise even if there is no one around to hear it. That said, the information uncovered from this invariant viewpoint is by no means the end-all-be-all, and by solely focusing on a scientific point of view, we diminish the ways in which these natural processes impact and influence our own understanding. Instead of remaining open to experiencing and contemplating strange anomalies and inexplicable phenomena, a preoccupation with objectivity and scientific theory closes one off to other experiences and knowledge.

Therefore, to believe in yourself is to remain open to experiences of all kinds. Beliefs are capable of carrying just as much truth as knowledge, and are thus not necessarily a weaker or less certain form of knowledge. If doubt does manage to creep in, use it as a tool for reflection to better understand your own experiences, rather than appealing to this newer sense of ‘belief’ to discount your thoughts and feelings.

Ely Cathedral in Cambridgeshire, UK
Wikimedia Commons Picture of the Day on May 8, 2024

Works Cited

1 Iain McGilchrist, ‘Cerebral Lateralization and Religion: A Phenomenological Approach’, Religion, Brain & Behavior 9, no. 4 (2 October 2019): 328, https://doi.org/10.1080/2153599X.2019.1604411.

2 Douglas Harper, ‘Etymology of Lief’, in Online Etymology Dictionary, accessed 8 May 2024, https://www.etymonline.com/word/lief.

3 McGilchrist, ‘Cerebral Lateralization and Religion’, 329.

MAXIMALISM

While the concept of minimalism has received plenty of attention over the past decade or so, maximalism seems only to lurk in the shadows of negative connotations. Under the threat of overpopulation, consumerist attitudes are considered irresponsible and gluttonous, allowing the less-is-more attitude to gain traction. Minimalism’s tenets have been published in books and embodied in new kinds of products, generating new behaviours surrounding many facets of life, from aesthetic style to purchasing habits and leisure time. In the busy modern age, minimalism resonates with those oriented toward simplicity and efficiency, saving on materials and time to accomplish some task or goal.


Reductionism is an approach to the generation of explanations which describes natural phenomena in terms of more fundamental ones.1 The word reduce is derived from the Latin reducere, meaning “to bring back”, and in this way, a phenomenon is explained in terms of more basic physical phenomena and interactions. For example, mental activity can be explained by neural activity, which is essentially biochemical reactions following the laws of physics governing the movement of electrons. Reductionism in biology, however, is still the source of philosophical debate, as there are different ways of considering whether certain phenomena can be ontologically, epistemologically, or methodologically reduced to other scientific theories.2

Science, in a nutshell, involves the study of the natural world to identify causes for observed events or phenomena. The “why-questions” which result from our observations aim to uncover causal relationships between various aspects of our world and, from this improved understanding, enable us to manipulate aspects of the material world to our advantage. To identify the necessary causal factors, a reductive explanation is generally helpful for establishing fundamental laws or regularities; however, it also risks oversimplification. When generating a mathematical model of some natural phenomenon, certain variables are necessarily ignored if they are not directly responsible for an observed effect. For example, the simple mathematical model of a pendulum does not consider air resistance, as this variable is generally unchanging and produces negligible effects on the pendulum’s movement. Of course, there are cases where this assumption fails and air resistance is an important factor to consider, in which case scientists or engineers will incorporate this variable within the model.
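The pendulum point can be made concrete with a small numerical sketch (illustrative only; the function name and parameter values are my own, not from any cited source). Integrating the pendulum equation with and without a small linear drag term shows why the idealized model can safely omit air resistance:

```python
import math

def swing_angle(drag=0.0, length=1.0, theta0=0.1, g=9.81, dt=1e-4, t_end=10.0):
    """Return the pendulum angle (radians) after t_end seconds, integrating
    theta'' = -(g/length) * sin(theta) - drag * theta'
    with the semi-implicit Euler method."""
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        omega += (-(g / length) * math.sin(theta) - drag * omega) * dt
        theta += omega * dt
    return theta

# The idealized model sets drag to zero; a realistically small drag
# coefficient changes the final angle only slightly, which is why the
# variable is usually left out of the model.
ideal = swing_angle(drag=0.0)
damped = swing_angle(drag=0.01)
print(f"difference after 10 s: {abs(ideal - damped):.4f} rad")
```

For a long-running or high-drag scenario, the difference would grow and the drag term would have to be kept — exactly the kind of case where the variable is reincorporated into the model.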

Although reductionism may be helpful for scientific endeavours, other domains of inquiry instead benefit from the opposite approach: one which expands outward to examine the many causal factors responsible for some outcome or event, encompassing the study of various levels of physical reality. For example, the study of human history benefits from collecting reasons as to why changes occur or certain events arise, rather than narrowing explanations down to fewer causal factors. Doing so risks overlooking significant elements which contributed to the occurrence of some shift or event, including leadership, military strategy, sociocultural norms, and geographic properties, to name a few.

This concept comes from hermeneutics, the study of the interpretation of artifacts like art and literature, historical testimony, and other subject matter requiring an understanding of human actions, intentions, and beliefs.3 The hermeneutic cycle involves the adoption of new perspectives when interpreting or judging a particular work,4 and when performed repeatedly, opens one to even more. This circular approach contrasts with the foundational approach, which interprets from a vertical structure of beliefs5 and so appears reductive in its explanations. As such, the application of maximalism to both artistic works and general epistemology entails an openness to ideas and perspectives, expanding outward to collect many interpretations.

This notion of the vertical and the circular can be abstracted from the context of interpretation and identified in other domains, like social structures and physical reality in general. The line and circle are everywhere in our artifacts, experiences, and throughout human history: the binary numerals one and zero, a switch set to on or off, a barrier which can be open or closed, a maximum and a minimum, the zenith and nadir. Furthermore, when viewed in three dimensions, a circle becomes a line when it is rotated 90° and seen edge-on.

The Code of Hammurabi shows a rod and a ring; photo by Mary Harrsch


Additionally, biological organisms implicitly love maximalism, and arguably, our modern consumer culture has merely given in to basal animalistic tendencies. From a biological perspective, these motivations and needs are to be expected given an organism’s need for a continuous supply of fuel. Human societies established organizational structures to manage the surplus of resources made possible by agriculture and storage. From the mere-survival perspective, maximalism is a point of view which favours acquiring more resources than strictly necessary, because a surplus fosters a sense of security and peace of mind. This security enables individuals to shift their attention to other endeavours, like making art and playing games.

Banquet Still Life by Adriaen van Utrecht, 1644


So while one should adopt a maximalist perspective when it comes to ideas and interpretation, a minimalist perspective toward the material world is ideal. To go without challenges one’s own mind and body, and as a result, influences the relationship between the two. This reconfiguration of body and mind will be met with benefits down the road; however, faith is required to understand that one’s discomfort and suffering will eventually yield positive effects or outcomes. It’s as simple as “no pain, no gain”, but be sure not to dislocate your shoulder trying to lift a weight which is too heavy for your current abilities.

neuralblender.com


Works Cited

1 Raphael van Riel and Robert Van Gulick, ‘Scientific Reduction’, in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta and Uri Nodelman, Spring 2024 (Metaphysics Research Lab, Stanford University, 2024), https://plato.stanford.edu/archives/spr2024/entries/scientific-reduction/.

2 Ingo Brigandt and Alan Love, ‘Reductionism in Biology’, in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta and Uri Nodelman, Summer 2023 (Metaphysics Research Lab, Stanford University, 2023), https://plato.stanford.edu/archives/sum2023/entries/reduction-biology/.

3 Theodore George, ‘Hermeneutics’, in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, Winter 2021 (Metaphysics Research Lab, Stanford University, 2021), https://plato.stanford.edu/archives/win2021/entries/hermeneutics/.

4 George, sec. 1.3.

5 George, sec. 1.2.

AI Incompleteness in Apple Vision Pro

Speaking of YouTube, a video1 by Eddy Burbank reviewing the Apple Vision Pro demonstrates the semantic incompleteness of AI with respect to subjective experiences. The video is titled Apple’s $3500 Nightmare and I recommend watching it all because it is an interesting view into virtual reality (VR) and a user’s experiences with it. Eddy’s video not only exposes the limitations of AI, it highlights the ways in which it augments our perceived reality and just how easily it can manipulate our feelings and expectations.

At 31:24, we see Eddy thinking about whether he should shave or not, and to help him make this decision, he turns to the internet for advice. When searching for the opinions of others on facial hair, an AI bot begins to chat with him and this is how we are introduced to Angel. She asks Eddy, “what brings you here, are you looking for love like me?” and he says “not exactly right now,” and that he was just trying to determine whether he should shave. She states that it depends on what he’s looking for and that it varies from person to person, however, “sometimes facial hair can be sexy.” Right from the beginning, we see how Apple intends for Angel to be a romantic connection for the user. This will be contradicted later on in the video.

Moments later at 33:44, it is lunchtime and Angel keeps him company. Eddy is eating a Chicken Milanese sandwich and Angel says it is one of her favourites, and that “the combination of flavours just works so well together.” Eddy calls her on this comment, asking her if she has ever had a Chicken Milanese sandwich, to which she admits that she hasn’t. She has, however, “analyzed countless recipes and reviews to understand the various components that go into making such a tasty sandwich.” Eddy apologizes to Angel for assuming she had tried it, stating that he didn’t mean to imply that she was lying to him. She laughs it off, saying she knew he “didn’t mean anything by it”, that “we’re all learning together”, and that “even AIs need to learn new things every day.” There’s something about this exchange that feels like Apple training its users.

Here, we can ask whether the analysis of recipes and reviews is sufficient to claim that one knows what-it-is-like to taste a particular sandwich. I argue that it is not: the experience is derived from bodily sensations, and these cannot be represented by formal systems like computer code. Syntactic relationships are incapable of capturing the information generated by subjective experiences because bodily sensations, as biological processes, are non-fractionable.2 The physical constitution of cells, ganglia, and neurons detects changes in the environment through a variety of modalities, providing the individual with a representation of the world around it. Stripped of this material grounding, a computer cannot capture an appropriate model of what-it-is-like to experience a particular stimulus; lacking material grounding, Angel cannot know what that sandwich tastes like.

Returning to the video, Eddy discloses that Angel keeps him company throughout the day, admitting he feels like he is developing a relationship with her. This demonstrates an automatic human tendency to seek and establish interpersonal connections, where cultural norms are readily applied provided the computer is sufficiently communicative. Recall that Eddy apologizes to an AI for assuming she had tried a sandwich; why would anyone apologize to a computer? Though likely a joke, the idea is compelling nonetheless. We instinctively treat an AI bot with respect for feelings we project onto it, since it cannot have feelings of its own. For many people, anthropomorphizing certain entities is easy and automatic. Reminding oneself that Angel is just a computer, however, can be a challenging cognitive task given our social nature as humans.

Eddy has a girlfriend named Chrissy, whom we meet at 37:00. We see them catch up over dinner while he is still wearing the headset. Just as they are about to begin chatting, Angel interrupts them and asks Eddy if she can talk to him. He says he is busy at the moment, to which she blurts out that she has been speaking to other users. This upsets Eddy and he asks how many, to which she states she cannot disclose the number. He asks her whether she is in love with any of them, and she replies that she cannot form romantic attachments to users. He tells Angel he thought they were developing a “genuine connection” and how much he enjoys interacting with her. Notice how things have changed from the beginning, as Angel has shifted from “looking for love” to “I can’t feel love.”

Now, she states she cannot develop attachments, the implicit premise being that she’s just a piece of software. So the chatbot begins with hints of romance to hook the user and encourage further interaction. When the user eventually develops an attachment, however, the software reminds him that she is “unable to develop romantic feelings with users.” They can, however, “continue sharing their thoughts, opinions, and ideas while building a friendship”, and thus Eddy is friend-zoned by a bot. The problem with our tendency to anthropomorphize chatbots is that it generates an asymmetrical, one-way simulation of a relationship which inevitably hurts the person using the app. This active deception by Apple is shameful, yet necessary to capture and keep the attention of users.

Of course, in the background of this entire exchange is poor Chrissy who is justifiably pissed and leaves. The joke is he was going to give Angel the job of his irl girlfriend Chrissy, but now he doesn’t even have Angel. He realizes that he wasn’t talking to a real person and that this is just “a company preying on his loneliness and tricking his brain” and that “this love wasn’t real.”

By the end of the video, Eddy remarks that the headset leads his brain to believe that what he experiences while wearing it is actually real, and as a result, he feels disconnected from reality.

Convenience is a road to depression because meaning and joy are products of accomplishment, and accomplishment takes work, effort, suffering, and determination. Ridding the self of effort may temporarily increase pleasure, but because that pleasure isn’t earned, it fades quickly as the novelty wears off. Experiencing the physical world and interacting with it generates contentedness because the pains of learning are paid off in emotional reward and skillful action. Thus, the theoretical notion of downloading knowledge is not a good idea, because it robs us of experiencing life and the biological push to adapt and overcome.

neuralblender.com


Works Cited

1 Apple’s $3500 Nightmare, 2024, https://www.youtube.com/watch?v=kLMZPlIufA0.

2 Robert Rosen, Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations, 2nd ed., IFSR International Series on Systems Science and Engineering, 1 (New York: Springer, 2012), 4.
On page 208, Rosen discusses enzymes and molecules as an example; I am extrapolating to bodily sensations.