Category: Philosophy of Mind

Semantics and Syntax

Another missing puzzle piece has come to my attention. It has been difficult to articulate why robots like iCub don’t understand the world around them, since it seems the robot’s body provides sensory data to ground the meanings of words like ‘cup’ and ‘shovel’. The cameras which operate as its eyes take in visual information to be associated with language, and yet Dr. Haikonen repeatedly stresses that such robots do not understand what the words mean. Computers, including the one controlling iCub’s body, only work with syntax: the rules which govern how sentences are formed. As such, incoming sensory data doesn’t really ground the words iCub learns, because that sensory data just consists of more symbols; it’s symbols all the way down. The 1s and 0s which constitute its sensory data do not contain any semantic information about words, since they aren’t experiences but simply representations of experiences.

Aren’t bodily sensations just neural representations of external stimuli, and thus themselves representations expressible in binary values? Perhaps, but animal physiology is physically and functionally arranged in a manner which generates meaning: what does the bee sting mean to me, as a living body capable of being damaged and ultimately dying? For a robot like iCub, damage means nothing; it doesn’t care whether its arm is removed or it gets hit in the head. Its architecture doesn’t allow for meaning to be generated; it’s just a computer that looks like a human. Could iCub’s architecture be modified to include pain signals which generate meaning? Arguably no, for reasons I will now attempt to articulate. I’m still working on a satisfactory explanation.

The passage which caught my attention is from Dr. Haikonen’s book The Cognitive Approach to Conscious Machines. I will provide the block quote because summarizing it would remove its flavour, an essence which helps illustrate the issue I’ve been wrestling with.

I make here a bold generalization and claim that syntax cannot be completely separated from semantics. Every now and then vertical grounding to inner imagery and percepts of the external world is needed in order to resolve the meaning of a sentence. Therefore proper perception of external world entities and their relationships and the formation of respective inner representations are absolutely necessary. A syntactic sentence is a structured description of a given situation. If the system is not able to produce and properly bind all the required percepts then it will not be able to produce a proper verbal description either and vice versa, the system will not be able to decode the respective sentence. Artificially imposed rules alone will not solve every problem here.1

To clarify that last sentence: this is the case for reasons Rosen discusses, namely that formal systems, in this case syntax, are abstractions of the natural systems referred to by semantics. Rules alone do not completely represent the natural systems they aim to model. As Haikonen states, syntax provides a structured description of a situation, where these situations are necessarily part of natural systems. By “vertical grounding,” Haikonen means the associations between percepts, or sensory information, and the words used to describe them.

Immediately I can see that indeed, semantics cannot be completely represented by syntax or formal rules, given my familiarity with Rosen’s explanation in Anticipatory Systems. I can also see, however, a horde of philosophy of language professors taking issue with this claim. Which ones I do not know; I only know one such professor, and he may not be sure whether the generalization is true, or in which cases it might be false. I’ll have to ask him and see what he says. Either way, it is a bold claim and a difficult one to explain; Rosen appeals to Category Theory from pure mathematics to do so, but for those who are uninterested in or put off by mathematical theory, it might not provide a very compelling explanation.

Computerized robots like iCub are rule-based structures all the way down, and even if such a robot were provided with a means to care about its own body and survival, its interest would remain a simulation. The numerical representations which generate its “experiences” are fundamentally distinct from the analogue representations generated by the human body. By analogue, I mean the various physiological systems which give rise to phenomenal experience. After all, feelings of hunger may be represented by neuronal activity, but they are ultimately generated by the hormone ghrelin. Since hormones are chemical messengers carried throughout the body via the bloodstream, their functionality is distinct from that of neural networks.2

The reason is that analogue signals are continuous streams of information,3 where ‘continuous’ refers to the values between whole integers, such as infinitesimal fractions or decimal values. In contrast, modern computers use symbolic or representational methods to generate behaviours, where digital channels pass streams of information which are discrete, lacking intermediate values between whole integers.4 Consequently, digital machines count quantities while analogue machines measure them.5 The distinction is significant, and though the all-or-nothing firing patterns of neurons can be represented by binary values, the body and its phenomenal experiences cannot be fully reduced to a collection of neural signals.
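The count-versus-measure distinction can be made concrete with a small sketch. This is a hypothetical illustration of my own, not drawn from the sources cited: quantizing a continuous signal into a fixed number of discrete levels necessarily discards everything between adjacent steps, which is exactly what happens when an analogue quantity is digitized.

```python
import math

def quantize(x, levels=8, lo=-1.0, hi=1.0):
    """Map a continuous value onto one of `levels` discrete steps,
    roughly the way an analogue-to-digital converter does."""
    step = (hi - lo) / (levels - 1)
    return round((x - lo) / step) * step + lo

# A continuous (analogue) signal: a sine wave, finely sampled.
signal = [math.sin(2 * math.pi * t / 100) for t in range(100)]
digital = [quantize(s) for s in signal]

# The digital version can only ever take `levels` distinct values;
# the infinitely many values between steps are simply gone.
print(len(set(signal)), len(set(digital)))
```

However many samples we take, the digitized stream counts out at most eight values, while the analogue signal measures a continuum; finer quantization narrows the gap but never closes it.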

So what is this missing piece I stumbled upon? “If the system is not able to produce and properly bind all the required percepts then it will not be able to produce a proper verbal description either and vice versa…” The words and sentences we say have meaning because the rules of language are coupled with the meanings of the words we use. iCub may state “the truck is red” but it doesn’t understand because its version of ‘red’ is without semantic content. The word ‘red’ is not fully grounded because the sensations it appeals to are still symbols made up of discrete values. Instead, analogue signals are required to fully capture what-it-is-like to experience some stimulus, where these signals influence the system’s self-organizing behaviour for the sake of continued survival.

neuralblender.com

Works Cited

1 Pentti O. Haikonen, The Cognitive Approach to Conscious Machines (UK: Imprint Academic, 2003), 238–39.

2 Mark Andrew Krause et al., An Introduction to Psychological Science: Modeling Scientific Literacy (Pearson Education Canada, 2014), 102.

3 John Johnston, The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI (The MIT Press, 2008), 28, https://doi.org/10.7551/mitpress/9780262101264.001.0001.

4 Johnston, 28.

5 Norbert Wiener, The Human Use of Human Beings: Cybernetics and Society, 2d ed. rev. (Garden City, NY: Doubleday, 1954), 64.

Integrated Information Theory of Consciousness

The Integrated Information Theory (IIT) of Consciousness is a theory originally proposed by Giulio Tononi which has since been further developed by other researchers, including Christof Koch. It aims to explain why some physical processes generate subjective experiences while others do not, and why certain regions of the brain, like the neocortex, are associated with these experiences.1 To do so, he appeals to information theory, a technical domain which uses mathematics to determine the amount of entropy or uncertainty within a process or system.2 Reducing uncertainty generates information, and complex systems like humans contain more information than simpler systems like an ant or a camera. Relationships between information are generated from a “complex of elements”3 and when a number of relationships are established, we see greater amounts of integration.4 Tononi states “…to generate consciousness, a physical system must be able to discriminate among a large repertoire of states (information) and it must be unified; that is, it should be doing so as a single system, one that is not decomposable into a collection of causally independent parts (integration).”5 This measure of integration is symbolized by the Greek letter Φ (phi) because the line in the middle of the letter stands for ‘information’ and the circle around it indicates ‘integration’.6 More lines and circles!
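The information-theoretic starting point here can be sketched in a few lines. To be clear, this is a generic Shannon-entropy illustration of my own, not Tononi’s actual Φ calculation, which is vastly more involved: information is generated when a system discriminates one state out of a repertoire, reducing its uncertainty.

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the uncertainty over a repertoire of states."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A system with 8 equally likely states: maximal uncertainty.
before = [1 / 8] * 8
# The system discriminates, settling into one state with certainty.
after = [1.0] + [0.0] * 7

# Information generated = uncertainty reduced.
info = entropy(before) - entropy(after)
print(info)  # 3.0 bits, i.e. log2(8)
```

A photodiode with two states can generate at most one bit per discrimination; a system with a vast repertoire of discriminable states can generate far more, which is the “information” half of Tononi’s claim. The “integration” half, measuring how irreducible the system is to independent parts, is what Φ itself is meant to capture.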

In addition to considering the quantity of information generated by the system, IIT also considers the quality of the information generated by its mechanisms. Together, these attributes characterize an experience, which can be conceived of as a “shape” in a qualia space made up of elements and the connections between them.7 Each possible state of the system is an axis of this qualia space, and each state is associated with a probability of actually occurring. A particular quale consists of a shape in this space, specifying the quality of an experience. Therefore, viewing a red object results in a particular state and shape in the qualia space, a mathematical object supposedly representing neuronal activity.8 As such, Tononi claims that his theory provides a way to describe phenomenology in terms of mathematics.9 Sure, but this doesn’t really explain much about consciousness or qualia; it just provides a mathematical description of them.

In later publications, he attempts to clarify this theory a bit further. Rather than appealing to activity in the brain, his theory “starts from the essential phenomenal properties of experience, or axioms, and infers postulates about the characteristics that are required of its physical substrate.”10 This is because subjective experiences exist intrinsically and are structured in terms of cause-and-effect in some physical substrate like a brain. Experiences, therefore, are identical to a conceptual structure, one expressible in mathematics.11 By starting from axioms, which are self-evident essential properties, IIT “translates them into the necessary and sufficient conditions” for the physical matter which gives rise to consciousness and experience.12

Not satisfied? I hear ya barking, big dog. When I first heard about it, I was intrigued by the concept but was ultimately unimpressed because it doesn’t explain anything. What do we mean by ‘explain’? It’s one of those philosophically dense concepts that is difficult to fully articulate; however, a dictionary definition can give us a vague idea. By ‘explain’, we mean some discussion which gives a reason or cause for something, demonstrating a “logical development or relationships of” the phenomenon in question,13 usually in terms of something else. For example, the reason it is sunny right now is because the present cloud coverage is insufficient for dampening the light coming from the sun. Here, ‘sunny’ is explained in terms of cloud coverage.

We are not alone in our dissatisfaction with IIT. On Sept. 16th, 2023, Stephen Fleming et al. published a scathing article calling IIT pseudoscience.14 In their view, IIT is “untestable, unscientific, ‘magicalist’, or a ‘departure from science as we know it’” because the theory can apply to many different systems, like plants and lab-generated organoids.15 They state that until the theory is empirically testable, the label of ‘pseudoscience’ should apply to prevent misleading the public. The implications of IIT can have real-world effects, shaping the minds of the public about which kinds of systems are conscious and which are not, for example, robots and AI chatbots.

One of the authors of this article would go on to publish a longer essay on the topic to a preprint server on Nov. 30th that same year. Keith Frankish reiterates the concerns of the original article and further explains the issues surrounding IIT. To summarize, the axiomatic method IIT employs is “an anomalous way of doing science” because the axioms are neither founded on nor supported by observations.16 Instead, they appeal to introspection, an approach which has historically been dismissed or ridiculed by scientists because experiences cannot be externally verified. The introspective approach is one which belongs to the domain of philosophy, more akin to phenomenology than to science. Frankish grants that IIT could be a metaphysical theory, like panpsychism, but if this is the case, it is misleading to call it science.17 If IIT proponents insist that it is a science, well, then it becomes pseudoscience.

As a metaphysical theory, IIT isn’t all that great, in my opinion. It doesn’t add anything to our understanding: the mathematical theory is rather complex and doesn’t provide a method for connecting itself with scientific domains like neuroscience or evolutionary biology. It attempts to, but remains explanatorily unsatisfactory.

That said, the general idea of “integrated information” for consciousness isn’t exactly wrong. My perspective on consciousness, based on empirical data, is that consciousness is a property of organisms, not of brains. There are no neural correlates of consciousness because it emerges from the entire body as a self-organizing whole. It can be considered a Gestalt which arises from all of our sensory mechanisms and attentional processes for the sake of keeping the individual alive in dynamic environments. While the contents of subjective experience are private and unverifiable to others, that doesn’t make them any less real than the sun or gravity. They can be incorrect, as in the case of illusions and hallucinations; however, the experiences as experiences are very real to the subject experiencing them. They may not be derived from sense data portraying some element of the natural world, as in the case of visual illusions; however, there is nonetheless some physical cause for them as experiences. For example, the bending of light creates a mirage; the ingestion of a substance with psychoactive effects creates hallucinations. The experiences are real, but their referents may not exist as aspects of the external world, and may just be artifacts of other neural or physiological processes.

I’ve been thinking about this for many years now, and since the articles calling IIT pseudoscience were published, have been thinking some more. Hence why I’m a bit “late to the game” on discussing it. Anyway, once I graduate from the PhD program, I’ll begin work on a book which explains my thoughts on consciousness in further detail, appealing to empirical evidence to back up my claims. I have written an extensive discussion on qualia, accompanied by a video, aiming to present a theory of subjective experiences from a perspective which takes scientific findings into consideration.

My sense is that, for a long time, our inability to resolve the issues surrounding qualia and consciousness was a product of academia. We’re so focused on specialization that the ability to incorporate findings and ideas from other domains is lost on many individuals, or is just not of interest to them. I hope we are slowly getting over this issue, especially with respect to consciousness, as philosophy of mind has a lot to learn from other domains like neuroscience, psychology, cognitive science, and evolutionary biology, just to name a few.

Consciousness is a property of organisms like humans and animals for detecting features of the environment. It comes in degrees; a sea sponge is minimally conscious, while a gecko is comparatively more aware of its surroundings. Many birds and mammals demonstrate a capacity for relatively high-level consciousness and thus intelligence. Obviously humans are at the top of this pyramid, given our mastery over aspects of our world as seen in our technological advancements. Consciousness, as an organismic-level property, emerges from the coordination and integration of various physiological subsystems, from systems of organs to specific organs and tissues, all the way down to cells and cellular organelles. It is explained by the interactions of these subsystems, however, cannot be causally reduced to them. Though the brain clearly plays an important role for consciousness and subjective experiences, it is a mistake to be looking for causal properties of consciousness in the brain, like a region or circuit. Consciousness is an emergent property of bodies embedded within a wider physical environment.

From this perspective, we can and have developed an analogue of consciousness for machines, as per the work18 of Dr. Pentti Haikonen. The good news is that because this machine doesn’t use a computer or software, you don’t need to worry about current AIs becoming conscious and “taking over the world” or outsmarting humans. It physically isn’t possible, and the recent discussions I’ve posted aim to articulate ontologically why this is the case. You ought to be far more afraid of people and companies, as explained by this excellent video from the YouTube channel Internet of Bugs.

Lastly, I want to extend a big Thank You to Dr. John Campbell for inspiring me to work on this explanation of consciousness, as per the helpful comment he left me on my qualia video. I recommend following Dr. Campbell on YouTube, he is a fantastic researcher and educator, in addition to being an honest, critically-thinking gentleman who covers many interesting topics related to healthcare.

A Scholar in his Study by Thomas Wijck (1616 – 1677)

Works Cited

1 Giulio Tononi, “Consciousness as Integrated Information: A Provisional Manifesto,” The Biological Bulletin 215, no. 3 (December 1, 2008): 216, https://doi.org/10.2307/25470707.

2 Tononi, 217; Norbert Wiener, Cybernetics or Control and Communication in the Animal and the Machine, Second (Cambridge, MA: The MIT Press, 1948), 17, https://doi.org/10.7551/mitpress/11810.001.0001; C. E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal 27, no. 3 (July 1948): 393, https://doi.org/10.1002/j.1538-7305.1948.tb01338.x.

3 Tononi, “Consciousness as Integrated Information,” 217.
4 Tononi, 219.
5 Tononi, 219.
6 Tononi, 220.
7 Tononi, 224.
8 Tononi, 228.
9 Tononi, 229.

10 Giulio Tononi et al., “Integrated Information Theory: From Consciousness to Its Physical Substrate,” Nature Reviews Neuroscience 17, no. 7 (July 2016): 450, https://doi.org/10.1038/nrn.2016.44.

11 Tononi et al., 452.
12 Tononi et al., 460.

13 “Explain,” in Merriam-Webster.Com Dictionary (Merriam-Webster), accessed August 10, 2024, https://www.merriam-webster.com/dictionary/explain.

14 Stephen Fleming et al., “The Integrated Information Theory of Consciousness as Pseudoscience” (PsyArXiv, September 16, 2023), https://doi.org/10.31234/osf.io/zsr78.

15 Fleming et al., 2.

16 Keith Frankish, “Integrated Information Theory: Pseudoscience or Appropriately Anomalous Science?” (OSF, November 30, 2023), 1–2, https://doi.org/10.31234/osf.io/uscwt.

17 Frankish, 5.

18 Pentti O Haikonen, Robot Brains: Circuits and Systems for Conscious Machines (John Wiley & Sons, 2007); Pentti O Haikonen, “Qualia and Conscious Machines,” International Journal of Machine Consciousness, April 6, 2012, https://doi.org/10.1142/S1793843009000207; Pentti O Haikonen, Consciousness and Robot Sentience, vol. 2, Series on Machine Consciousness (World Scientific, 2012), https://doi.org/10.1142/8486.

Belief is a Relationship

Like many philosophy students, the list of books I would like to read is quite long and only continues to grow. One of them is Iain McGilchrist’s The Master and His Emissary which discusses the difference between brain hemispheres and the specializations of each. He’s also written papers on the subject, one of which is ‘Cerebral Lateralization and Religion: a phenomenological approach’, which I have read. Overall, it’s very interesting but there is a particular section which struck me as rather significant for epistemology and mental health.

On page 328, under the heading ‘Knowledge, belief, and truth’, McGilchrist discusses the different kinds of knowledge handled by each hemisphere. While the left hemisphere specializes in collecting bits and pieces of information from a “general, impersonal, … and disengaged stance,” the right hemisphere specializes in uncertain, personal, and experiential knowledge which “resists generalization.”1 In this case, “the whole is not best understood by summing the parts.” He mentions this distinction is similar to the difference between the French terms savoir and connaître, as although both translate to ‘to know’, the kind of knowledge they refer to is unique. One refers to an experiential knowledge while the other refers to propositional knowledge. The German language also notes this distinction with the words wissen and kennen.

McGilchrist goes on to explain how ‘belief’ is also subject to this differentiation. Though many use this word to refer to cognition and propositional knowledge, the etymological root of the term uncovers a kind of experiential knowledge. Particularly, ‘lief’ in Middle English describes a person who is “beloved, esteemed, dear”2 or, as McGilchrist states, “someone in whom one believed.” Similarly, in German, the word ‘lieben’ means “to love.” Furthermore, the French word for ‘belief’ is croire, derived from the Latin term credere, meaning to “entrust to the care of.” McGilchrist states that “belief is about a relationship” where the “believer needs to be disposed to love, but the believed-in needs to inspire another’s belief.” This cannot be determined in advance but instead “emerges through commitment and experience.”

In contemporary uses, ‘belief’ often indicates an uncertainty about truth; however, this reconceptualization is a relatively recent one. McGilchrist states that “belief does imply truth” and appeals to the German term treu which means ‘faithful’ and is also related to ‘trust’. The relationship he points out here is one characterized by trusting another, where one believes in another, and as such, trusts in them. Truth and belief are relational, deriving value from the context in which they are used or appealed to, in addition to being embodied and actively involving commitment. Today, however, we often think of ‘truth’ and ‘belief’ as detached and disembodied, where ‘truth’ is independent of our own selves, “immutable and certain.” McGilchrist characterizes this shift as a move from an understanding rooted in right-hemispheric thinking to one rooted in the left hemisphere, and he warns that “belief and truth cannot always be achieved by simply sitting back and waiting passively for information to accumulate.”3 Instead, “some truths become understandable only when we have made a move to meet them.” [emphasis added]

So to summarize, both ‘knowledge’ and ‘belief’ come in two different flavours: one which is propositional and cognitive, and one which is experiential and relational. ‘Belief’ is not a weaker version of knowledge but an outcome of an activity grounded in love and acceptance. It is relational, as these feelings or dispositions arise from the interaction between the person who believes and the thing they believe in, uncovering or identifying truths from this committed relationship. This thing to be believed in may be another person; however, it also applies to the self. By accepting and appreciating our own thoughts and feelings as worthy of attention and consideration, we build up an understanding of ourselves as individuals, allowing us to realize our potential. If I believe I will graduate, I trust that I will take the steps necessary to complete my project and sufficiently defend it. I trust myself because I have accepted my strengths and weaknesses, allowing me to push forward when challenges arise.

In spiritual or religious contexts, this relationship is oriented outward to a domain or entity residing beyond the material world, however, it can also refer to a relationship to oneself. In Gnostic traditions, generally speaking, individuals come to know a divine or non-material domain only when one turns their attention inward to reflect on experience and understanding. In this way, a weaker form of ‘belief’, perhaps glibly characterized by a blind faith in some divine force or entity, can be strengthened by relying on one’s own knowledge and understanding to form a bridge into the world of the immaterial and unknown. By going through oneself, individuals can access a world beyond the physically experienced one to uncover truths which would otherwise be occluded by the physical world and its various authorities. Occult knowledge may be purposefully hidden, however, it seems this may simply reflect the reality of where this knowledge naturally resides. To reach this domain, the path one must take is through a healthy relationship with the self, where the beginning of this path is in acceptance and the analysis of one’s experiences and understanding.

The reason I wanted to discuss this segment from McGilchrist’s paper is because it highlights a fallacy in our modern, scientific world-view, one which suggests that truth is to be found from without. Certainly there are instances where this is the case, as the acceleration due to gravity has nothing to do with my experiences of it; however, subjective experiences of gravity do play a role in how it has been scientifically conceptualized. Our perceptions of the physical world provide us with a window into understanding the natural processes which occur regardless of our actions; a falling tree will still make a noise even if there is no one around to hear it. That said, the information uncovered from this invariant viewpoint is by no means the end-all-be-all, and by solely focusing on a scientific point of view, we diminish the ways in which these natural processes impact and influence our own understanding. Instead of remaining open to experiencing and contemplating strange anomalies and inexplicable phenomena, a preoccupation with objectivity and scientific theory closes one off to other experiences and knowledge.

Therefore, to believe in yourself is to remain open to experiences of all kinds. Beliefs are capable of carrying just as much truth as knowledge, and are thus not necessarily a weaker or less certain form of knowledge. If doubt does manage to creep in, use it as a tool for reflection to better understand your own experiences, rather than appealing to this newer sense of ‘belief’ to discount your thoughts and feelings.

Ely Cathedral in Cambridgeshire, UK
Wikimedia Commons Picture of the Day on May 8, 2024

Works Cited

1 Iain McGilchrist, ‘Cerebral Lateralization and Religion: A Phenomenological Approach’, Religion, Brain & Behavior 9, no. 4 (2 October 2019): 328, https://doi.org/10.1080/2153599X.2019.1604411.

2 Douglas Harper, ‘Etymology of Lief’, in Online Etymology Dictionary, accessed 8 May 2024, https://www.etymonline.com/word/lief.

3 McGilchrist, ‘Cerebral Lateralization and Religion’, 329.

Artifacts

What does it mean to call something an example of “artificial intelligence” (AI)? There are a few different ways to approach this question, one of which includes examining the field to identify an overarching definition or set of themes. Another involves considering the meanings of the words ‘artificial’ and ‘intelligence’, and arguably, doing so enables the expansion of this domain to include new approaches to AI. Ultimately, however, even if these agents one day exhibit sophisticated or intelligent behaviours, they nonetheless continue to exist as artifacts, or objects of creation.

The term artificial intelligence was conceived by computer scientist John McCarthy in the mid-1950s, and the purported reason he chose the term was to distinguish the new field from other domains of study:1 in particular, cybernetics, which involves analogue or non-digital forms of information processing, and automata theory, a branch of mathematics which studies self-propelling operations.2 Since then, the term ‘artificial intelligence’ has been met with criticism, with some questioning whether it is an appropriate term for the domain. Specifically, Arthur Samuel was not in favour of its connotations, according to computer scientist Pamela McCorduck in her publication on the history of AI.3 She quotes Samuel as stating, “The word artificial makes you think there’s something kind of phony about this, or else it sounds like it’s all artificial and there’s nothing real about this work at all.”4

Given the physical distinctions between computers and brains, it is clear that Samuel’s concerns are reasonable, as the “intelligence” exhibited by a computer is simply a mathematical model of biological intelligence. Biological systems, according to Robert Rosen, are anticipatory and thus capable of predicting changes in the environment, enabling individuals to tailor their behaviours to meet the demands of foreseeable outcomes.5 Because biological organisms depend on specific conditions for furthering chances of survival, they evolved ways to detect these changes in the environment and respond accordingly. As species evolved over time, their abilities to detect, process, and respond to information expanded as well, giving rise to intelligence as the capacity to respond appropriately to demanding or unfamiliar situations.6 Though we can simulate intelligence in machines, the use of the word ‘intelligence’ is metaphorical rather than literal. Thus, the behaviour exhibited by computers is not real or literal ‘intelligence’ because it arises from an artifact rather than from biological processes.

An artifact is defined by Merriam-Webster as an object showing human workmanship or modification, as distinguished from objects found in nature.7 Etymologically, the root of ‘artificial’ is the Latin term artificialis, or an object of art, where artificium refers to a work of craft or skill and artifex denotes a craftsman or artist.8 In this context, ‘art’ implies a general sense of creation and is applicable to a range of activities including performances as well as material objects. The property of significance is its dependence on human action or intervention: “artifacts are objects intentionally made to serve a given purpose.”9 This is in contrast to unmodified objects found in nature, a distinction first identified by Aristotle in Metaphysics, Nicomachean Ethics, and Physics.10 To be an artifact, an object must satisfy three conditions: it is produced by a mind, involves the modification of materials, and is produced for a purpose.

The first condition states the object must have been created by a mind, and scientific evidence suggests both humans and animals create artifacts.11 For example, beaver dams are considered artifacts because they block rivers to calm the water, which creates ideal conditions for building a lodge.12 Moreover, evidence suggests several early hominid species carved handaxes which served social purposes as well as practical ones.13 By chipping away at a stone, individuals shape an edge into a blade which can be used for many purposes, including hunting and food preparation.14 Additionally, researchers have suggested that these handaxes may also have played a role in sexual selection, where a symmetrically-shaped handaxe demonstrating careful workmanship indicates a degree of physical or mental fitness.15 Thus, artifacts are important for animals as well as people, indicating the sophisticated abilities involved in the creation of artifacts are not unique to humans.

Computers and robots are also artifacts given that they are highly manufactured, functionally complex, and created for a specific purpose. Any machine or artifact which exhibits complex behaviour may appear to act intelligently; however, the use of ‘intelligent’ is necessarily metaphorical given the distinction between artifacts and living beings. There may one day exist lifelike machines which behave like humans, but any claims of literal intelligence must demonstrate how and why that is; the burden of proof is theirs to produce. An argument for how a man-made object sufficiently models biological processes is required, and even then, the result remains a simulation of real systems.

If the growing consensus in cognitive science indicates individuals and their minds are products of interactions between bodily processes, environmental factors, and sociocultural influences, then we should adjust our approach to AI in response. For robots intended to replicate human physiology, a good first step would be to exchange neural networks made from software for ones built from electrical circuits. The Haikonen Associative Neuron offers one such solution,16 and when coupled with the Haikonen Cognitive Architecture, it is capable of generating the physiological processes required for learning about the environment.17 Several videos uploaded to YouTube demonstrate a working prototype of a robot built on these principles, where XCR-1 is able to learn associations between stimuli in its environment, much as humans and animals do.18 Not only is it a better model of animal physiology than robots relying on computer software, it is also capable of performing a range of cognitive tasks, including inner speech,19 inner imagery,20 and recognizing itself in a mirror.21
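The associative principle at work here can be illustrated in a few lines. The sketch below is my own simplification of Hebbian-style association, not Haikonen's actual neuron circuit (which is implemented in hardware); the class and signal names are hypothetical, chosen only to show how co-occurring stimuli can become linked so that one later evokes the other.

```python
import numpy as np

class AssociativeGroup:
    """Toy associative learner: links a main signal to an associative
    input via Hebbian-style correlation. A software simplification,
    not Haikonen's hardware neuron design."""

    def __init__(self, n_main, n_assoc):
        # Association weights start at zero: nothing is linked yet
        self.w = np.zeros((n_main, n_assoc))

    def learn(self, main, assoc):
        # Strengthen links between co-active main/associative signals
        self.w += np.outer(main, assoc)

    def recall(self, assoc):
        # Evoke the main signal from the associative input alone
        return (self.w @ assoc > 0).astype(float)

# Associate a "visual" pattern with a "word" pattern, then recall
visual = np.array([1.0, 0.0, 1.0])
word = np.array([0.0, 1.0])
g = AssociativeGroup(3, 2)
g.learn(visual, word)
print(g.recall(word))  # the word alone now evokes [1. 0. 1.]
```

The point of the hardware version is that the signals being associated come directly from sensors rather than from symbolic encodings, but the associative logic is the same.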

So, it seems that some of Arthur Samuel’s fears have been realized, considering machines merely simulate behaviours and processes identifiable in humans and animals. Moreover, the use of ‘intelligence’ is metaphorical at best, as only biological organisms can display true intelligence. If part of Samuel’s concern related to securing funding within his niche field of study, and its potential to fall out of fashion, he had no reason to worry. Unfortunately, Samuel passed away in 1990,22 so he never had the chance to see the monstrosity that AI has since become.

Even if these new machines become capable of sophisticated behaviours, they will always exist as artifacts: objects of human creation, designed for a specific purpose. The etymological root of the word ‘artificial’ alone provides sufficient grounds for classifying these robots and AIs as objects; however, as they continue to improve, this may become difficult to remember at times. To avoid being deceived by these “phony” behaviours, it will become increasingly important to understand what these intelligent machines are capable of and what they are not.

neuralblender.com


Works Cited

1 Nils J. Nilsson, The Quest for Artificial Intelligence (Cambridge: Cambridge University Press, 2013), 53, https://doi.org/10.1017/CBO9780511819346.

2 Nilsson, 53.

3 Nilsson, 53.

4 Pamela McCorduck, Machines Who Think: A Personal Inquiry Into the History and Prospects of Artificial Intelligence, [2nd ed.] (Natick, Massachusetts: AK Peters, 2004), 97; Nilsson, The Quest for Artificial Intelligence, 53.

5 Robert Rosen, Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations, 2nd ed., IFSR International Series on Systems Science and Engineering, 1 (New York: Springer, 2012), 7.

6 ‘Intelligence’, in Merriam-Webster.Com Dictionary (Merriam-Webster), accessed 5 March 2024, https://www.merriam-webster.com/dictionary/intelligence.

7 ‘Artifact’, in Merriam-Webster.Com Dictionary (Merriam-Webster), accessed 17 October 2023, https://www.merriam-webster.com/dictionary/artifact.

8 Douglas Harper, ‘Etymology of Artificial’, in Online Etymology Dictionary, accessed 14 October 2023, https://www.etymonline.com/word/artificial; ‘Artifact’.

9 Lynne Rudder Baker, ‘The Ontology of Artifacts’, Philosophical Explorations 7, no. 2 (1 June 2004): 99, https://doi.org/10.1080/13869790410001694462.

10 Beth Preston, ‘Artifact’, in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta and Uri Nodelman, Winter 2022 (Metaphysics Research Lab, Stanford University, 2022), https://plato.stanford.edu/archives/win2022/entries/artifact/.

11 James L. Gould, ‘Animal Artifacts’, in Creations of the Mind: Theories of Artifacts and Their Representation, ed. Eric Margolis and Stephen Laurence (Oxford, UK: Oxford University Press, 2007), 249.

12 Gould, 262.

13 Steven Mithen, ‘Creations of Pre-Modern Human Minds: Stone Tool Manufacture and Use by Homo Habilis, Heidelbergensis, and Neanderthalensis’, in Creations of the Mind: Theories of Artifacts and Their Representation, ed. Eric Margolis and Stephen Laurence (Oxford, UK: Oxford University Press, 2007), 298.

14 Mithen, 299.

15 Mithen, 300–301.

16 Pentti O. Haikonen, Robot Brains: Circuits and Systems for Conscious Machines (John Wiley & Sons, 2007), 19.

17 Pentti O. Haikonen, Consciousness and Robot Sentience, 2nd ed., vol. 04, Series on Machine Consciousness (World Scientific, 2019), 167, https://doi.org/10.1142/11404.

18 ‘Pentti Haikonen’, YouTube, accessed 6 March 2024, https://www.youtube.com/@PenHaiko.

19 Haikonen, Consciousness and Robot Sentience, 182.

20 Haikonen, 179.

21 Robot Self-Consciousness. XCR-1 Passes the Mirror Test, 2020, https://www.youtube.com/watch?v=WE9QsQqsAdo.

22 John McCarthy and Edward A. Feigenbaum, ‘In Memoriam: Arthur Samuel: Pioneer in Machine Learning’, AI Magazine 11, no. 3 (15 September 1990): 10, https://doi.org/10.1609/aimag.v11i3.840.

Chaos in the System

As an argument against iCub’s ability to understand humans, I wanted to appeal to the work of Robert Rosen, because I think it makes for a compelling argument about AI generally. Doing so, however, would take my project in a new direction and render it less cohesive overall. Instead, the Rosen discussion is better served as a standalone project, as there is a lot of explaining left to do, and perhaps some objections that need addressing as well. This will have to wait, but I can at least upload the draft for context on the previous post. There are a few corrections I still need to make, and once they’re done, I will update this entry.

Instead, I will argue that the iCub is not the right system for social robots because its approach to modelling emotion is unlike the expression of emotions in humans. As a result, it can neither experience nor demonstrate empathy, in virtue of the way it is built. The cognitive architecture used by iCub can recognize emotional cues in humans; however, this information is not experienced by the machine. Affective states in humans are bodily and contextual, but in iCub, they are represented by computer code to be processed by the central processing unit. This is the general idea, but I’m still working out the details.

That said, there is something interesting in Rosen’s idea about the connection between Gödel’s Incompleteness Theorem and the incompleteness relation between syntax and semantics. In particular, he identifies the problems generated by self-reference, which lead a system to produce an inconsistency given its rule structure. The formal representation of an external referent, as an observable of a natural system, contains only the variables relevant to the referent within the formal system. Self-reference requires placing a variable within a wider scope, one which must be provided in the form of a natural system. Therefore, an indefinite collection of formal systems is required to capture a natural phenomenon. Sometimes a small collection is sufficient; other times, systems are so complex that no collection of formal systems fully accounts for the natural phenomenon. Depending on the operations to be performed on the referent, this may break the system or lead to erroneous results. The chatbot says something weird or inappropriate.

In December, I presented this argument at a student conference and made a slideshow for it. Just a note: on the second slide I list the titles of my chapters, and because I won’t be pursuing the Rosen direction, the title of Chapter 4 will likely change. Anyway, the reading and writing on Rosen has taken me on a slight detour but a worthwhile one. Now, I need to begin research on emotions and embodiment, which is also interesting and will be useful for future projects as well. The light at the end of the tunnel has dimmed a bit but it’s still there, and my eyes have adjusted to the darkness so it’s fine.

This shift in direction makes me think about the relationship between chaos and order, and about systems that swing between various states of orderliness. Without motion there would be only rest and stagnation, so as much as change can be challenging, it can bring new opportunities. There is a duality inherent in everything, listed as one of the 7 Hermetic Principles. If an orderly, open system is met with factors which disrupt or disorganize its functioning, the system must undergo some degree of reorganization or compensation. The explanatory power of the 7 Principles is not meant to relate to the external world in the way physics does, but to one’s perspective on events in the outside world. If one can shift their perspective accordingly, the Principles operate as axioms for sense-making, their reality pertaining more to epistemology than ontology. We can be sceptical as to how these Principles manifest in the physical universe while feeling their reality in our lived experience of the world. They are to be studied from within rather than from without, and are thus more aligned with phenomenology than the sciences.

Metaphorically speaking, chaos injected into any well-ordered system has the potential to severely damage or disrupt it, requiring efforts to rebuild and reorganize to compensate for the effects of change. The outcome of this rebuilding process can be further degradation and perhaps even collapse; however, it can also lead to growth and better outcomes than if the shift had not occurred. It all depends on the system in question, the factors which impacted it, and probably the specific context in which the situation occurred. Anyway, we can substitute the idea of ‘chaos’ for ‘energy’ as movement or potential, thus establishing a connection to ‘light’ as a type of energy. Metaphorically, ‘light’ is also associated with knowledge and beneficence, so if the source of chaos is intentional and well-meaning, favourable changes can occur, and thus a “light bringer” or “morning star” can carry positive connotations. Disrupting a well-ordered system without knowledge, a plan, or good reasons is more likely to lead to further disorder and dysfunction, and to negative or unfavourable outcomes. In this way, Lucifer can be associated with evil or descent.

This kind of exercise can help us make sense of our experiences and understanding, but it also gives us a window into the past and into how other people may think. Myth and legend from cultures all over the world portray knowledge in metaphors which have inspired those who come upon them for generations. The metaphysics is not what matters; it’s the epistemology drawn from the metaphors that can explain aspects of how the world works, or why people think certain things or act in certain ways. It exists as poetry which needs interpreting, and there is room for multiple perspectives, so not everyone appreciates it, which is understandable. It is still valuable work to be done by someone, though, and the more people the better.

Rothschild Canticles p. 64r (c. 1300)

★★★

Moving On Up

Given my last post, I should probably explain myself. I still don’t know what I’m doing, but maybe simple acceptance isn’t all it’s cracked up to be. We have the power to change our circumstances, so why not give it a go? A saying I often think about is “ships aren’t built to sit in harbours”: you can avoid risk that way, but you also never get to see far-off lands.

Time to rebuild. What do I know? I know what I feel; phenomenology is a good place to start. I still stand behind everything I stated regarding qualia. There may be aspects of my hypothesis that will change, or something I’m missing; however, to state that the entire idea is wrong is a hastily drawn conclusion.

There is probably more to consciousness than can be captured by our current scientific understanding, however, one must tread very carefully when moving in this direction. Figuring out what this involves and how it works is my new pet project and hopefully I can make some headway. I’m not in a rush though.

Here’s the big reveal: I read the CIA document titled Analysis and Assessment of Gateway Process, in addition to Itzhak Bentov’s book Stalking the Wild Pendulum. Luckily for us, Thobey Campion has done some very important investigative journalism regarding the missing page 25 of the CIA document; thank you very much for your work, Thobey. I strongly encourage you to read the Vice article about it while it’s still available. I have a hunch that this article won’t be around for long, but hopefully I’m wrong.

I want someone to explain the physics to me like I’m 5 and stick around for a lengthy Q&A session. I want to know how this works in a way that connects to our current understanding of physics. Bentov’s book seems to get about halfway there but doesn’t explain all the details necessary to generate a full explanation of the phenomenon. If you know of anyone who has written about this, please email me because I’m very interested in exploring this further.

Page 25 is truly the most important page in the CIA document because it reiterates a certain truth that serves as the bedrock for creating the Philosopher’s Stone: self-awareness. Unwavering, unfiltered, unapologetic self-awareness.

“It was axiomatic to the mystic philosophers of old that the first step in personal maturity could be expressed in the aphorism: “Know thyself.” To them, the education of a man undertook, as its primary step, achievement of an introverted focus so that he learned what was within himself before attempting to approach the outside world. They rightly assumed that he could not effectively evaluate and cope with the world until he fully understood his personal psychological imbalance. The insights being provided by Twentieth Century psychology in this context through the use of various kinds of personality testing seem to be a revalidation of this ancient intuition. But no personality test, or series of tests, will ever replace the depth and fullness of the perception of self which can be achieved when the mind alters its state of consciousness sufficiently to perceive the very hologram of itself which it has projected into the universe in its proper context as part of the universal hologram in a totally holistic and intuitional way. This would seem to be one of the real promise of the Gateway Experience from the standpoint of its ability to provide a portal through which, based on months if not years of practice, the individual may pass in his search to find self, personal effectuality, and truth in the larger sense.”

The appeal to holograms here might rub some the wrong way; however, I think this has something to do with Kantian metaphysics. Specifically, that everything is just sense data, and while we don’t necessarily need to go full Berkeley, we must always remember that our experiences are simply appearances, not objective data. Where does certainty come from? The synthesis of a first-person perspective and a third-person perspective. Do not simply defer to what everyone else says, but do not ignore it either.

This I know. As do many others, many (most?) of whom lived before I or Bentov or anyone else around today. What I might add, though, is that it always takes two to tango. Men and women together as fully-developed agents, even when it generates conflict. When done in good faith, the outcome is so much more, so much greater, than either one alone.

Implicit Argument for Qualia

Stevan Harnad provides an embodied version of the Turing Test (TT) in ‘Other Bodies, Other Minds’ by using a robot instead of a computer, calling it the Total Turing Test (TTT). He states that to be truly indistinguishable from a human, artificial minds will require the ability to express embodied behaviours in addition to linguistic capacities (Harnad 44). While the TT implicitly assumes language exists independently from the rest of human behaviour (Harnad 45), the TTT avoids problems arising from this assumption by including a behavioural component in the test (Harnad 46). This matters because of our tendency to infer that other humans have minds despite having no direct evidence for this belief (Harnad 45). This assumption can be extended to robots as well, where embodied artificial agents which act sufficiently human will be treated as if they had minds (Harnad 46). Robots which pass the TTT can be said to understand symbols because these symbols have been grounded in non-symbolic structures, or bottom-up sensory projections (Harnad 50–51). Therefore, embodiment seems to be necessary for social agents, as they will require an understanding of the world and its contents to appear humanlike.

These sensory projections are also known as percepts or qualia (Haikonen 225), and are therefore required for learning language. While Harnad’s intention may have been to avoid discussing the metaphysical properties of the mind for the sake of discussing the TTT, his argument ends up providing support for the ontological structures involved in phenomenal consciousness. Although I didn’t mention it above, he uses this argument to refute Searle’s concerns about the Chinese Room, and he succeeds because he identifies an ontological necessity: robots which pass the TTT will have their own minds, because the behaviours which persuade people to believe this are founded on the same processes that produce this capacity in humans.
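The contrast Harnad draws can be sketched in miniature. The toy example below is my own illustration, not Harnad's formal apparatus: the dictionary stands in for ungrounded symbol-to-symbol definitions ("symbols all the way down"), while the feature vectors stand in for non-symbolic sensory projections; all names and numbers are invented for the sketch.

```python
import numpy as np

# Ungrounded: each symbol is defined only by other symbols,
# so following definitions never bottoms out in experience.
dictionary = {
    "zebra": ["horse", "stripes"],
    "horse": ["animal", "hooved"],
}

# Grounded: each symbol is tied to a non-symbolic sensory projection
# (a feature vector standing in for camera input: shape, stripes, legs).
percepts = {
    "horse": np.array([0.9, 0.1, 0.8]),
    "zebra": np.array([0.9, 0.9, 0.8]),
}

def identify(sensed, grounding):
    """Name the stored percept closest to incoming sensory data."""
    return min(grounding, key=lambda s: np.linalg.norm(grounding[s] - sensed))

# A new sensory projection is named by similarity to grounded percepts,
# not by looking up other symbols.
print(identify(np.array([0.85, 0.95, 0.8]), percepts))  # -> zebra
```

On the ungrounded side, a query can only ever return more symbols; on the grounded side, the symbol is anchored to something that is not itself a symbol, which is the work the TTT's sensory projections are meant to do.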

Works Cited

Haikonen, Pentti O. ‘Qualia and Conscious Machines’. International Journal of Machine Consciousness, Apr. 2012, https://doi.org/10.1142/S1793843009000207.

Harnad, Stevan. ‘Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem’. Minds and Machines, vol. 1, no. 1, 1991, pp. 43–54.