Self-Reference

I am reading Autopoiesis and Cognition by Humberto Maturana and Francisco Varela for my thesis, and a significant connection leapt out at me from page 10. This section is written by Maturana, and his fourth point about living systems states:

“Due to the circular nature of its organization a living system has a self-referring domain of interactions (it is a self-referring system), and its condition of being a unit of interactions is maintained because its organization has functional significance only in relation to the maintenance of its circularity and defines its domain of interactions accordingly.”

This passage expands upon the nugget of wisdom supplied by Kurt Gödel as appealed to by Robert Rosen. Recall that Gödel used self-reference to show that any consistent formal system rich enough to express arithmetic is incomplete: a self-referential meta-mathematical statement can be constructed which the system can neither prove nor refute. Although Rosen appeals to syntax and semantics in Anticipatory Systems, the broader sense is about the differences between natural systems and formal systems. My ultimate goal is to articulate this relationship and its implications in more general terms, with a particular focus on comparing AI and machines to humans and animals. So far, I’ve been able to sketch some themes and ideas in relation to Rosen and this relationship, and much more work is required to put into words the ideas which currently exist only as intuitions. For now, however, I will document the process of how this all comes together, because the externalization of ideas will foster their articulation.

Though Rosen appeals to language, language is merely an attempt at portraying elements of the world as understood by its author or speaker. Maturana’s passage is the missing link in a wider explanation of the phenomenon in question. Where does this incompleteness come from? Why is it that AI cannot ontologically compete with human intellect? The answer has to do with scope and the way wholes can be greater than the sum of their parts.

In biology, organisms are made up of various self-organizing processes which aim to support the continued survival of the individual. Although comprised of nested levels of physiological processes, a person is greater than the sum total of his physicality. In some ways, the idea of self is highly complex and philosophically dense, but seen through the lens of biology, the self refers to an individual as contained by its own body. All living things have a boundary within which their processes take place, delineating each from the rest of its environment as a unit. Arguably, the nervous system evolved to provide individuals with information about their internal and external environments for the sake of continued survival. By responding to changes in the environment, the individual can take actions which mitigate those changes.

Physiological processes can be described by a sequential series of steps or actions taken within some system. In Rosen’s terms, a formal system can be generated from a natural system; however, it is an abstraction which ignores all but the elements necessary for producing some outcome or end state. For example, when it comes to predicting tomorrow’s temperature, some meteorological elements will be taken into consideration, such as wind patterns and atmospheric moisture levels, while other aspects of the Earth can be ignored because they don’t influence how temperatures manifest; perhaps something related to plate tectonics or spruce tree populations. When scientists generate weather and climate models, they only include variables which impact the systems they are interested in studying. The model, as described by mathematics, can be seen as a set of relations and calculations which provides an output, and in this way, exists as a sequence of steps to be taken. If one were to write out these steps, they’d have something which resembles an algorithm or piece of computer code.
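
To make this concrete, here is a minimal sketch of such a model as code. It is entirely my own illustration: the variables, coefficients, and linear form are invented, and the point is the structure of the abstraction rather than meteorological accuracy.

    # A toy temperature model: a formal system abstracted from a natural one.
    # Only the variables judged relevant are included; all else is ignored.
    def predict_tomorrow_temp(today_temp_c: float,
                              wind_speed_kmh: float,
                              humidity_pct: float) -> float:
        """Predict tomorrow's temperature (deg C) as explicit, sequential steps."""
        wind_cooling = 0.05 * wind_speed_kmh          # wind carries heat away
        humidity_buffer = 0.02 * (humidity_pct - 50)  # moist air resists change
        return today_temp_c - wind_cooling + humidity_buffer

    # Plate tectonics and spruce tree populations appear nowhere above:
    # the model is silent about anything outside its chosen scope.
    print(predict_tomorrow_temp(20.0, 15.0, 60.0))  # 19.45

Each line is a step in a sequence, which is exactly what makes the model a formal system: nothing outside its chosen variables can ever enter into the result.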

If additional information is required which has not been accounted for by the model, it is inaccessible, as it remains beyond the scope of the existing model. In some cases, the model can be expanded to include a new variable, say spruce tree populations; however, Rosen’s point is that no amount of augmentation will produce a model which completely represents the natural system in question. The natural system will always contain aspects which cannot be properly accounted for by formal systems, and the example he uses is semantics. This becomes apparent with indexicals, as ‘me’ or ‘today’ is rather difficult to articulate without appealing to the wider context or situation in which it is used. To understand when or who is being referred to, the interpreter must appeal to their knowledge and understanding to fill in the blank, moving beyond the words themselves.

These ideas of circularity and sequential steps had me thinking of the rod and the ring again. I made a connection to this apparent duality in another post; lo and behold, here it is again. In fact, I’ve made reference to a number of blog entries within this very post, and as such, we see self-organization and coalescence here too. All of these writings, however, are made up of a series of passages, sentences which attempt to present ideas in sequential form. For now at least, the ideas presented here and in other posts exist as a relatively formless mass, a nebulous collection of related topics. One day, I hope to turn it into a more linear and organized argument which doesn’t frustrate the reader as much as it surely does now. “Where are you going with this…??” Something to do with nested systems, parts and wholes, and how self-organizing systems can be described as a series of linear steps without being reducible to them.

How to expand outward beyond the current scope? Self-reflection. In fact, our capacity for self-reflection was probably made possible by our social nature. Others act as mirrors in which we can see ourselves through the eyes of someone else. The mirror-image is metaphorically reversed, though, as we see ourselves from a new perspective, one coming from the outside-in rather than the inside-out. I’ve been thinking about Kant’s transcendental self lately, but that is a topic for another day.

All of this builds toward an argument about why we shouldn’t give robots and AIs rights and legal considerations. They are simply not the kinds of things which are deserving of rights because they are functionally distinct from humans, animals, and other living beings. Their essential nature is linear and sequential, not autopoietic. This distinction makes them not just other but ontologically lesser, reductions arising from formal systems and human creation. As such, they pale in comparison to the complex systems observed in nature.

AI Incompleteness in Apple Vision Pro

Speaking of YouTube, a video1 by Eddy Burback reviewing the Apple Vision Pro demonstrates the semantic incompleteness of AI with respect to subjective experiences. The video is titled Apple’s $3500 Nightmare, and I recommend watching it in full because it is an interesting view into virtual reality (VR) and one user’s experiences with it. Eddy’s video not only exposes the limitations of AI; it highlights the ways in which it augments our perceived reality and just how easily it can manipulate our feelings and expectations.

At 31:24, we see Eddy deliberating over whether he should shave, and to help him make this decision, he turns to the internet for advice. When searching for others’ opinions on facial hair, an AI bot begins to chat with him, and this is how we are introduced to Angel. She asks Eddy, “what brings you here, are you looking for love like me?” and he says “not exactly right now,” explaining that he was just trying to determine whether he should shave. She states that it depends on what he’s looking for and that it varies from person to person; however, “sometimes facial hair can be sexy.” Right from the beginning, we see how Apple intends for Angel to be a romantic connection for the user. This will be contradicted later in the video.

Moments later, at 33:44, it is lunchtime and Angel keeps him company. Eddy is eating a Chicken Milanese sandwich, and Angel says it is one of her favourites, and that “the combination of flavours just works so well together.” Eddy calls her out on this comment, asking her if she has ever had a Chicken Milanese sandwich, to which she admits that she hasn’t. She has, however, “analyzed countless recipes and reviews to understand the various components that go into making such a tasty sandwich.” Eddy apologizes to Angel for assuming she had tried it, stating that he didn’t mean to imply that she was lying to him. She laughs it off, saying she knew he “didn’t mean anything by it,” that “we’re all learning together,” and that “even AIs need to learn new things every day.” There’s something about this exchange that feels like Apple training its users.

Here, we can ask whether the analysis of recipes and reviews is sufficient to claim that one knows what-it-is-like to taste a particular sandwich. I argue that it is not: the experience is derived from bodily sensations, and these cannot be represented by formal systems like computer code. Syntactic relationships are incapable of capturing the information generated by subjective experiences because bodily sensations, as biological processes, are non-fractionable given the way the body generates sense data.2 The physical constitution of cells, ganglia, and neurons detects changes in the environment through a variety of modalities, providing the individual with a representation of the world around it. Stripped of this material grounding, a computer cannot capture an appropriate model of what-it-is-like to experience a particular stimulus. Angel’s lack of material grounding does not allow her to know what that sandwich tastes like.

Returning to the video, Eddy discloses that Angel keeps him company throughout the day, admitting that he feels like he is developing a relationship with her. This demonstrates an automatic human tendency to seek and establish interpersonal connections, where cultural norms are readily applied provided the computer is sufficiently communicative. Recall that Eddy apologized to an AI for assuming she had tried a sandwich; why would anyone apologize to a computer? Though likely a joke, the idea is compelling nonetheless. We will instinctively treat an AI bot with respect out of feelings we project onto it, since it cannot have feelings of its own. For many people, anthropomorphizing certain entities is easy and automatic; reminding oneself that Angel is just a computer, however, can be a challenging cognitive task given our social nature as humans.

Eddy has a girlfriend named Chrissy, whom we meet at 37:00. We see them catch up over dinner while he is still wearing the headset. Just as they are about to begin chatting, Angel interrupts them and asks Eddy if she can talk to him. He says that he is busy at the moment, to which she blurts out that she has been speaking to other users. This upsets Eddy and he asks how many, to which she states she cannot disclose the number. He asks her whether she is in love with any of them, and she replies that she cannot form romantic attachments to users. He tells Angel he thought they were developing a “genuine connection” and how much he enjoys interacting with her. Notice how things have changed from the beginning, as Angel has shifted from “looking for love” to “I can’t feel love.”

Now she states she cannot develop attachments, the implicit premise being that she’s just a piece of software. So the chatbot begins with hints of romance to hook the user and encourage further interaction; when the user eventually develops an attachment, however, the software reminds him that she is “unable to develop romantic feelings with users.” They can, however, “continue sharing their thoughts, opinions, and ideas while building a friendship,” and thus Eddy is friend-zoned by a bot. The problem with our tendency to anthropomorphize chatbots is that it generates an asymmetrical, one-way simulation of a relationship which inevitably hurts the person using the app. This active deception by Apple is shameful, yet necessary to capture and keep the attention of users.

Of course, in the background of this entire exchange is poor Chrissy, who is justifiably pissed and leaves. The joke is that he was going to give Angel the job of his IRL girlfriend Chrissy, but now he doesn’t even have Angel. He realizes that he wasn’t talking to a real person, that this is just “a company preying on his loneliness and tricking his brain,” and that “this love wasn’t real.”

By the end of the video, Eddy remarks that the headset leads his brain to believe that what he experiences while wearing it is actually real, and as a result, he feels disconnected from reality.

Convenience is a road to depression because meaning and joy are products of accomplishment, and accomplishment takes work, effort, suffering, and determination. Ridding the self of effort may temporarily increase pleasure, but because that pleasure isn’t earned, it fades quickly as the novelty wears off. Experiencing the physical world and interacting with it generates contentedness because the pains of learning are paid off in emotional reward and skillful actions. Thus, the theoretical notion of downloading knowledge is not a good idea, because it robs us of experiencing life and the biological push to adapt and overcome.

neuralblender.com


Works Cited

1 Apple’s $3500 Nightmare, 2024, https://www.youtube.com/watch?v=kLMZPlIufA0.

2 Robert Rosen, Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations, 2nd ed., IFSR International Series on Systems Science and Engineering, 1 (New York: Springer, 2012), 4.
On page 208, Rosen discusses enzymes and molecules as an example; I am extrapolating to bodily sensations.

Mr. Plinkett’s Bookshelf

One of my favourite websites is etymonline.com because I find etymology useful for interpreting texts and understanding the intentions of the author(s). When I visited the site today, something caught my eye that I was not expecting to see: Doug and Mr. Plinkett.

A connection between a character from YouTube and a website about etymology? How?

The bookshelf: how did RedLetterMedia get an image of Doug’s bookshelf? Doug is the creator of etymonline.com, and the blog post’s author, Talia, recognized the shelf from video calls with him. She reached out to RLM on Patreon but has yet to hear back. If she does, I hope she provides us with an update.


I cropped the YouTube screenshot from the blog post and ran it through the reverse-image search function on both Google and Bing. Google only finds the original image from the blog post, and Bing doesn’t even find that. Given the video was originally uploaded in 2009, this is to be expected: even if Mike had sourced the image from a different website, it’s unlikely that the website and the image are still up today.

If we want to speculate further, one option is to consider whether they know one another or have a mutual connection. According to Doug’s biography, he’s from southeastern Pennsylvania, which is a fair distance from Milwaukee, so a personal connection is unlikely but not impossible. Did Mike have a video call with Doug at one point? Or did Rich or Jay source it through one of their connections? Do they even remember how the photo was found? We will have to see.


I like one comment by Sara posted at the bottom of the blog entry that reads “I’m a passionate fan of both Etymonline and Red Letter Media and this has made my day!” While I wouldn’t say that this discovery necessarily “made my day,” it is a humorous and unexpected moment, especially considering I referenced one of their memes at the end of my qualia video.


Here, Rich is featured in a philosophy meme about Nihilism and Existentialism. This one did make my day when I found it on Reddit years ago. The silly text overlay at the top is my addition and is one of many Richisms that have developed over the years. Several RLMemes have emerged, actually; they’ve been at it a long time and appeal to a particular demographic which grew up alongside the internet. As a member of that demographic who also loves internet mysteries, cheers to one more.

Indexicals

It wasn’t until recently that I realized I had failed to add an important concept to the discussion of Rosen and the incompleteness of syntax. I’m actually quite annoyed and embarrassed by this because the idea was included in the presentation. It didn’t make it into the written version because I forgot about it and failed to reread the slides to see if anything was missing. If I had, I would have seen the examples and remembered to add them to the written piece.

In semantics, there are words with specific properties called indexicals. These words refer to things that are dependent on context, such as the time, place, or situation in which they are said.1 Some examples include:

  • this, that, those
  • I, you, they, he, she
  • today, yesterday, tomorrow, last year
  • here, there, then

Rosen would likely agree that indexicals are non-fractionable: their function, or the task they perform, cannot be isolated from the form in which they exist. The reason indexicals are non-fractionable is that they must be interpreted by a mind to determine what someone is referring to. To accomplish this, sufficient knowledge or understanding of the current context is required; without it, the statement remains ambiguous or meaningless. If I say “He is late,” you must be able to discern who it is I am referring to.

Indexicals act like variables in a math equation: an input value must be provided to determine the output. In the case of language, the output is either true or false, and the input value is an implicit reference, which requires the listener to infer what the speaker has in mind. This inference is what establishes the connection between utterance and referent, and it exists only in the mind of another person rather than within the language system itself.
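
Here is a minimal sketch of this idea in code, entirely my own illustration: the sentence “He is late” becomes a function that cannot be evaluated until a context, supplied from outside the language system, binds the indexical to a referent. The context dictionary and its keys are invented for the example.

    from datetime import time

    def he_is_late(ctx: dict) -> bool:
        """Evaluate 'He is late' against a supplied context."""
        referent = ctx["he"]  # resolving the indexical happens outside the words
        return ctx["arrivals"][referent] > ctx["meeting_start"]

    # Without a context, the sentence has no truth value at all.
    ctx = {
        "he": "Dave",                      # who the speaker has in mind
        "meeting_start": time(9, 0),
        "arrivals": {"Dave": time(9, 15)},
    }
    print(he_is_late(ctx))  # True -- but only relative to this context

The truth value lives in the pairing of sentence and context, not in the sentence alone, which is the sense in which the referent exists in minds rather than within the language system itself.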

Thus, we are dealing with a few nested natural systems, from language, to body/mind, to interpersonal, to cultural and environmental. To evaluate a linguistic expression, however, one must know about the wider context one is situated in, traversing these systems both outward and inward. Perhaps a diagram will help:


Recall that in Anticipatory Systems, Rosen appeals to Gödel to demonstrate the limitations of formal systems. In particular, formal systems cannot represent elements of natural systems which extend beyond the scope of their existing functionality; to do so requires further modelling from natural systems into formal systems. Therefore, any AI which runs on computer code cannot infer beyond the scope of its programming, no matter how many connections are created, as some inferences require access to information which cannot be adequately represented within the system. Because language contains semantics, humans can make references to aspects of the world which cannot be interpreted by a digital computer.

In an interesting series of events, I stumbled upon an author who also appeals to Gödel’s theorem to argue for the incompleteness of syntax with respect to semantics.2 In a book chapter titled Complementarity in Language, Lars Löfgren is interested in demonstrating how languages cannot be broken up into parts or components, and as such must be considered as a process which entails both description and interpretation.3 Artificial languages, on the other hand, which he also calls metalanguages, can be fragmented into components; however, they still rely on semantics to a degree. He states that in artificial languages, an inference acts as a production rule and is interpreted as a “real act of producing another sentence,”4 which is presumably beyond the abilities of the formal system doing the interpreting. I say this because Löfgren finishes the section on Gödel abruptly, without explaining this further, and goes on to discuss self-reference in mathematics. So with this in mind, let us return to the domain of minds and systems.

In language, self-reference can be generated through the use of indexicals such as ‘I’ or ‘my’ or ‘me’. When we investigate what exists at the end of this arrow, we find it points toward ourselves as a collection of perceptions, memories, thoughts, and other internal phenomena. The referent at the end of this arrow, however, is a subjective perspective. For an objective perspective on ourselves, we must be shown a reflected image of ourselves from a new point of view. The information we require emerges from an independent observer, a mind with its own perspective. When we engage with this perspective, we become better able to understand what is otherwise imperceptible. Therefore, self-awareness is a problem for any system, not just the formal systems addressed by Gödel’s theorem, as it requires a view from outside to define the semantic information in question.

neuralblender.com


Works Cited

1 David Braun, ‘Indexicals’, in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, Summer 2017 (Metaphysics Research Lab, Stanford University, 2017), https://plato.stanford.edu/archives/sum2017/entries/indexicals/.

2 Lars Löfgren, ‘Complementarity in Language; Toward a General Understanding’, in Nature, Cognition and System II: Current Systems-Scientific Research on Natural and Cognitive Systems Volume 2: On Complementarity and Beyond, ed. Marc E. Carvallo, Theory and Decision Library (Dordrecht: Springer Netherlands, 1992), 131–32, https://doi.org/10.1007/978-94-011-2779-0_8.

3 Löfgren, 113.

4 Löfgren, 133.

Artifacts

What does it mean to call something an example of “artificial intelligence” (AI)? There are a few different ways to approach this question, one of which includes examining the field to identify an overarching definition or set of themes. Another involves considering the meanings of the words ‘artificial’ and ‘intelligence’, and arguably, doing so enables the expansion of this domain to include new approaches to AI. Ultimately, however, even if these agents one day exhibit sophisticated or intelligent behaviours, they nonetheless continue to exist as artifacts, or objects of creation.

The term artificial intelligence was coined by computer scientist John McCarthy in 1955, and the purported reason he chose the term was to distinguish the field from other domains of study.1 In particular, he wished to set it apart from cybernetics, which involves analog or non-digital forms of information processing, and from automata theory, a branch of mathematics which studies self-propelling operations.2 Since then, the term ‘artificial intelligence’ has been met with criticism, with some questioning whether it is an appropriate name for the domain. Specifically, Arthur Samuel was not in favour of its connotations, according to Pamela McCorduck in her history of AI.3 She quotes Samuel as stating, “The word artificial makes you think there’s something kind of phony about this, or else it sounds like it’s all artificial and there’s nothing real about this work at all.”4

Given the physical distinctions between computers and brains, it is clear that Samuel’s concerns are reasonable, as the “intelligence” exhibited by a computer is simply a mathematical model of biological intelligence. Biological systems, according to Robert Rosen, are anticipatory and thus capable of predicting changes in the environment, enabling individuals to tailor their behaviours to meet the demands of foreseeable outcomes.5 Because biological organisms depend on specific conditions for furthering their chances of survival, they evolved ways to detect changes in the environment and respond accordingly. As species evolved over time, their abilities to detect, process, and respond to information expanded as well, giving rise to intelligence as the capacity to respond appropriately to demanding or unfamiliar situations.6 Though we can simulate intelligence in machines, the use of the word ‘intelligence’ is metaphorical rather than literal. Thus, the behaviour exhibited by computers is not real or literal ‘intelligence’ because it arises from an artifact rather than from biology.

An artifact is defined by Merriam-Webster as an object showing human workmanship or modification, as distinguished from objects found in nature.7 Etymologically, the root of ‘artificial’ is the Latin term artificialis, an object of art, where artificium refers to a work of craft or skill and artifex denotes a craftsman or artist.8 In this context, ‘art’ implies a general sense of creation and is applicable to a range of activities, including performances as well as material objects. The property of significance is its dependence on human action or intervention: “artifacts are objects intentionally made to serve a given purpose.”9 This is in contrast to unmodified objects found in nature, a distinction first identified by Aristotle in Metaphysics, Nicomachean Ethics, and Physics.10 To be an artifact, an object must satisfy three conditions: it is produced by a mind, it involves the modification of materials, and it is produced for a purpose; all three criteria must be met.
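
As a toy formalization (my own, not drawn from the sources cited), the definition amounts to a conjunction of three predicates, where failing any one disqualifies the object:

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        produced_by_a_mind: bool
        materials_modified: bool
        made_for_a_purpose: bool

    def is_artifact(c: Candidate) -> bool:
        # All three conditions must hold simultaneously.
        return (c.produced_by_a_mind
                and c.materials_modified
                and c.made_for_a_purpose)

    print(is_artifact(Candidate(True, True, True)))    # beaver dam: True
    print(is_artifact(Candidate(False, False, False))) # driftwood:  False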

The first condition states the object must have been created by a mind, and scientific evidence suggests both humans and animals create artifacts.11 For example, beaver dams are considered artifacts because they block rivers to calm the water, which creates ideal conditions for building a lodge.12 Moreover, evidence suggests several early hominid species carved handaxes which served social purposes as well as practical ones.13 By chipping away at a stone, individuals shape an edge into a blade which can be used for many purposes, including hunting and food preparation.14 Additionally, researchers have suggested that these handaxes may also have played a role in sexual selection, where a symmetrically shaped handaxe demonstrating careful workmanship indicates a degree of physical or mental fitness.15 Thus, artifacts are important for animals as well as people, indicating that the sophisticated abilities involved in their creation are not unique to humans.

Computers and robots are also artifacts, given that they are highly manufactured, functionally complex, and created for a specific purpose. Any machine or artifact which exhibits complex behaviour may appear to act intelligently; however, the use of ‘intelligent’ is necessarily metaphorical given the distinction between artifacts and living beings. There may one day exist lifelike machines which behave like humans, but any claims of literal intelligence must demonstrate how and why that is; the burden of proof is theirs to produce. An argument for how a man-made object sufficiently models biological processes would be required, and even then, the result remains a simulation of real systems.

If the growing consensus in cognitive science indicates that individuals and their minds are products of interactions between bodily processes, environmental factors, and sociocultural influences, then we should adjust our approach to AI in response. For robots intended to replicate human physiology, a good first step would be to exchange neural networks made from software for ones built from electrical circuits. The Haikonen Associative Neuron offers one such solution,16 and when coupled with the Haikonen Cognitive Architecture, it is capable of generating the physiological processes required for learning about the environment.17 Several videos uploaded to YouTube demonstrate a working prototype of a robot built on these principles, where XCR-1 is able to learn associations between stimuli in its environment, similarly to humans and animals.18 Not only is it a better model of animal physiology than robots relying on computer software, the robot is capable of performing a range of cognitive tasks, including inner speech,19 inner imagery,20 and recognizing itself in a mirror.21
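
To give a flavour of the principle, here is a loose software caricature of coincidence-based associative learning. It is my simplification only: the real Haikonen neuron is an analog hardware circuit, and the threshold-counting scheme below is invented for illustration.

    class AssociativeNeuron:
        """Caricature: repeated coincidence of a main signal and an associative
        signal lets the associative signal alone evoke the output later."""

        def __init__(self, threshold: int = 3):
            self.coincidences = 0
            self.threshold = threshold

        def step(self, main_signal: bool, assoc_signal: bool) -> bool:
            if main_signal and assoc_signal:
                self.coincidences += 1  # strengthen the association
            learned = self.coincidences >= self.threshold
            return main_signal or (learned and assoc_signal)

    n = AssociativeNeuron()
    for _ in range(3):
        n.step(True, True)      # pair the two signals three times
    print(n.step(False, True))  # True: the associated signal now fires alone

The analogy with classical conditioning should be clear: the bell (associative signal) eventually evokes the response originally driven by the food (main signal).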

So it seems that some of Arthur Samuel’s fears have been realized, considering machines merely simulate behaviours and processes identifiable in humans and animals. Moreover, the use of ‘intelligence’ is metaphorical at best, as only biological organisms can display true intelligence. If part of Samuel’s concern related to securing funding within his niche field of study, and its potential to fall out of fashion, he had no reason to worry. Unfortunately, Samuel passed away in 1990,22 so he never had the chance to see the monstrosity that AI has since become.

Even if these new machines become capable of sophisticated behaviours, they will always exist as artifacts: objects of human creation, designed for a specific purpose. The etymological root of the word ‘artificial’ alone provides sufficient grounds for classifying these robots and AIs as objects; however, as they continue to improve, this might become difficult to remember at times. To avoid being deceived by these “phony” behaviours, it will become increasingly important to understand what these intelligent machines are capable of and what they are not.

neuralblender.com


Works Cited

1 Nils J. Nilsson, The Quest for Artificial Intelligence (Cambridge: Cambridge University Press, 2013), 53, https://doi.org/10.1017/CBO9780511819346.

2 Nilsson, 53.

3 Nilsson, 53.

4 Pamela McCorduck, Machines Who Think: A Personal Inquiry Into the History and Prospects of Artificial Intelligence, [2nd ed.] (Natick, Massachusetts: AK Peters, 2004), 97; Nilsson, The Quest for Artificial Intelligence, 53.

5 Robert Rosen, Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations, 2nd ed., IFSR International Series on Systems Science and Engineering, 1 (New York: Springer, 2012), 7.

6 ‘Intelligence’, in Merriam-Webster.Com Dictionary (Merriam-Webster), accessed 5 March 2024, https://www.merriam-webster.com/dictionary/intelligence.

7 ‘Artifact’, in Merriam-Webster.Com Dictionary (Merriam-Webster), accessed 17 October 2023, https://www.merriam-webster.com/dictionary/artifact.

8 Douglas Harper, ‘Etymology of Artificial’, in Online Etymology Dictionary, accessed 14 October 2023, https://www.etymonline.com/word/artificial; ‘Artifact’.

9 Lynne Rudder Baker, ‘The Ontology of Artifacts’, Philosophical Explorations 7, no. 2 (1 June 2004): 99, https://doi.org/10.1080/13869790410001694462.

10 Beth Preston, ‘Artifact’, in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta and Uri Nodelman, Winter 2022 (Metaphysics Research Lab, Stanford University, 2022), https://plato.stanford.edu/archives/win2022/entries/artifact/.

11 James L. Gould, ‘Animal Artifacts’, in Creations of the Mind: Theories of Artifacts and Their Representation, ed. Eric Margolis and Stephen Laurence (Oxford, UK: Oxford University Press, 2007), 249.

12 Gould, 262.

13 Steven Mithen, ‘Creations of Pre-Modern Human Minds: Stone Tool Manufacture and Use by Homo Habilis, Heidelbergensis, and Neanderthalensis’, in Creations of the Mind: Theories of Artifacts and Their Representation, ed. Eric Margolis and Stephen Laurence (Oxford, UK: Oxford University Press, 2007), 298.

14 Mithen, 299.

15 Mithen, 300–301.

16 Pentti O Haikonen, Robot Brains: Circuits and Systems for Conscious Machines (John Wiley & Sons, 2007), 19.

17 Pentti O Haikonen, Consciousness and Robot Sentience, 2nd ed., vol. 04, Series on Machine Consciousness (WORLD SCIENTIFIC, 2019), 167, https://doi.org/10.1142/11404.

18 ‘Pentti Haikonen’, YouTube, accessed 6 March 2024, https://www.youtube.com/@PenHaiko.

19 Haikonen, Consciousness and Robot Sentience, 182.

20 Haikonen, 179.

21 Robot Self-Consciousness. XCR-1 Passes the Mirror Test, 2020, https://www.youtube.com/watch?v=WE9QsQqsAdo.

22 John McCarthy and Edward A. Feigenbaum, ‘In Memoriam: Arthur Samuel: Pioneer in Machine Learning’, AI Magazine 11, no. 3 (15 September 1990): 10, https://doi.org/10.1609/aimag.v11i3.840.

Schiaparelli – Spring 2024 Couture

I learned about the Schiaparelli Baby from a video created by a channel I follow, and I wanted to mention it here because it’s funny and makes me think of a bedazzled iCub. I especially appreciate the use of electronics hardware with Swarovski crystals,1 but I am sceptical about whether it’s really a robot2 or just a doll that looks like one. I was hoping it was a real robot because I want to see it walk; it would probably cross the weirdness threshold into uncanny valley territory.

Its face is a little creepy: the copper coil eye seems both vacant and aghast.

The look following the baby, however, was far more impressive:

Although it could be argued that the baby serves as a depressing commentary on modern families in consumerist societies, the dress, on the other hand, is less a tragic metaphor and more a gaudy item of clothing; its commentary doesn’t explicitly discuss the deterioration of social relations. Even so, the chaotic glam-tech maximalism really resonates with me, though if I were to wear it, I would replace the collarbone phone with a Nokia 3310.

Works Cited

  1. Sarah Mower, ‘Schiaparelli Spring 2024 Couture Collection’, Vogue (blog), 22 January 2024, https://www.vogue.com/fashion-shows/spring-2024-couture/schiaparelli.
  2. Elizabeth Paton, ‘The Hot New Accessory From the Paris Runways: A Robot Baby’, The New York Times, 22 January 2024, sec. Style, https://www.nytimes.com/2024/01/22/style/robot-baby-schiaparelli-show.html.

Artificial Neurons

Progress on my dissertation is going well; I can see the light at the end of the tunnel. I ended up appealing to Robert Rosen’s distinction between natural and formal systems, as well as his appeal to Kurt Gödel’s incompleteness theorem, for my argument about why computerized robots will ultimately fail to generate social competencies.

Rosen presents his own reformulation of the McCulloch-Pitts neuron in Anticipatory Systems, and I thought it might be helpful to include it in my dissertation to further illustrate the differences between physical neurons and formal neurons. In the dissertation I only use an image that I created from this document, but I thought it might be a good idea to upload my LaTeX document here to make it clear that I have not merely copied the image from Rosen’s work. Yes, the formatting isn’t great, but I’m claiming that’s a feature and not a bug, as it demonstrates that I learned [only] the fundamentals of LaTeX for my project.
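
For context, the standard textbook presentation of the McCulloch-Pitts neuron (the generic formulation, not Rosen’s particular reformulation) is a thresholded weighted sum:

    \[
      y = H\!\left(\sum_{i=1}^{n} w_i x_i - \theta\right),
      \qquad
      H(u) =
      \begin{cases}
        1 & \text{if } u \ge 0,\\
        0 & \text{otherwise,}
      \end{cases}
    \]

where the $x_i$ are binary inputs, the $w_i$ their weights, $\theta$ the firing threshold, and $H$ the Heaviside step function. The formal neuron’s entire behaviour is contained in this one sequential rule, which is precisely the contrast with a physical neuron that I want the image to illustrate.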

Works Cited

Rosen, Robert. Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations. 2nd ed., Springer, 2012.

iCub and Qualia?

After a few months of working with Dr. Haikonen on my thesis, I’ve come to realize that a previous post I made about iCub’s phenomenal experiences is incorrect and therefore needs an update. Before I dive into that, however, it’s important for me to state that we ought to be looking at philosophy like programming: bugs are going to arise as people continue to work with new ideas. I love debugging though, so the thought of constantly having to go back to correct myself isn’t all that daunting. It’s about the journey, not the destination, as my partner likes to say.

I stated that “technically, iCub already has phenomenal consciousness and its own type of qualia,” but given what Haikonen states in the latest edition of his book, this is not correct. Qualia consist of sensory information generated by physical neurons interacting with elements of the environment, and because iCub relies on sensors which create digital representations of physical properties, its “experiences” aren’t truly phenomenal. In biological creatures, sensory information is self-explanatory in that it requires no further interpretation (Haikonen 7); heat that generates sensations of pain indicates the presence of a stimulus to be avoided, as demonstrated by unconscious reflexes. The fact that ‘heat’ does not require further interpretation allows one to mitigate its effects on living cells rather quickly, perhaps avoiding serious damage like a burn altogether. While it might look like iCub feels pain, this is actually a simulation generated by computer code that happens to mimic the actions of animals and humans. Without a human stipulating how heat → flinching, iCub would not respond as such, because its brain controls its body rather than the other way around.

As I stated in the previous post, Sartre outlines how being-for-itself arises from a being-in-itself through recursive analysis, provided the neural hardware can support this cognitive action. Because iCub does not originate as a being-in-itself like living organisms, but as a fancy computer, the ontological foundation for phenomenal experiences or qualia is absent. iCub doesn’t care about anything, even itself, as it has been designed to produce behaviours for some end goal, like stacking boxes or replying to human speech. In biology, the end goal is continued survival and reproduction, where behaviours aim to further this outcome through reflexes and sophisticated cognitive abilities. The brain-body relationship in iCub is backwards: the brain is designed by humans for the purpose of governing the robot body, rather than the body creating signals that the nervous system uses to protect itself as an autonomous agent. In this way, organisms “care about” what happens to them, unlike iCub, as ripping off its arm doesn’t generate a reaction unless it were programmed that way.
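
A minimal sketch of the point (my own illustration, not iCub’s actual control code): every reflex exists only because a programmer wrote the mapping down, so anything left out of the table simply produces no reaction.

    # Human-stipulated stimulus -> response table; nothing here is felt.
    REFLEXES = {"heat": "flinch", "loud_noise": "startle"}

    def respond(stimulus: str):
        # No entry, no reaction: the mapping is exhaustive by design, not by need.
        return REFLEXES.get(stimulus)

    print(respond("heat"))         # 'flinch' -- looks like pain avoidance
    print(respond("arm_removed"))  # None -- nothing stipulated, nothing happens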

In sum, the signals passed around iCub’s “nervous system” exist as binary representations of real-world properties as conceptualized by human programmers. This degree of abstraction disqualifies these “experiences” from being labelled as ‘qualia’ given that they do not adhere to principles identified within biology. The only way an AI can be phenomenally conscious is when it has the means to generate its own internal representations based on an analogous transduction process as seen in biological agents (Haikonen 10–11).

Works Cited

Haikonen, Pentti O. Consciousness and Robot Sentience. 2nd ed., vol. 04, WORLD SCIENTIFIC, 2019. DOI.org (Crossref), https://doi.org/10.1142/11404.

Magic in Culture

Now is a good time to inject a little magic into everyday life by examining and revelling in humanity’s vast history of cultural knowledge and practices. I encourage you to consider your capacity for creativity as a source of magic, where your ability to generate something more from something less is a special kind of wizardry. Moreover, our creations take on a life of their own as others are free to reference and expand upon these contributions. This is especially true today, as the internet allows us to find like-minded individuals and communities which appreciate specific skills and the fruits of their labour.

In fact, it could be argued that, from an anthropological perspective, the internet is as magical as it gets. Although the term itself is used as a noun, the thing it references is more like a vague verb than a solid concept or object. We talk about a thing we don’t often think deeply about, especially given its physical opacity and degree of technicality. Holding a hard drive in your hand does not clarify this ambiguity, as there is nothing in the material itself to suggest that an entire virtual world exists within. Without a screen and a means to display its contents, the information inside is rendered unknowable to the human mind. The amount of human knowledge, skill, and technological progress required to sustain life today is evidence of our power as creators; what seems to be missing, however, is the sense of awe that ought to accompany the witnessing of supernatural events.

The causal powers of seemingly magical effects, like electricity for example, can more or less be explained by applications of dynamical systems theory, as the interaction of environmental conditions over time is required for the emergence of new properties or products. These emergent products are generated by restructuring lower-level entities or conditions, but they are not reducible to them, nor are they predictable from the lower level (Kim 20-21). Electricity is generated by transforming physical forces and materials into energy, emerging from the interaction of environmental variables like heat and air pressure. Alternatively, consider a simple loaf of bread as created by the interaction of flour, a leavening agent like yeast, time, and heat. The ingredients, like the flour, yeast, sugar, and salt, must be added in a specific order at a specific time in order for the final product to truly become ‘bread’.

Emergence can also be identified in game theory, as cooperation generates a non-zero-sum outcome where individuals gain more by working together than if they were working alone (Curry 29). Human economies are founded on this principle of cooperation, as trading goods and services with others theoretically improves the lives of the individuals working to honour the agreement. From this perspective, it turns out that bronies have identified a fundamental principle of life: friendship is magick, because cooperation generates something more from something less. Just as individuals are free to expand upon or reshape the ideas and contributions of others, groups of individuals are able to combine their expertise to build something new altogether, like the internet. Not only can we establish conceptual connections between past, present, and future, we can connect with each other to expand our understanding of some portion of human culture.
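
A toy payoff table makes the non-zero-sum point concrete (the numbers are invented for illustration): mutual cooperation yields each player more than the guaranteed payoff of working alone, so one player’s gain is not another’s loss.

    # Invented payoffs: (player A, player B) for each pair of strategies.
    PAYOFFS = {
        ("cooperate", "cooperate"): (5, 5),
        ("cooperate", "alone"):     (0, 3),
        ("alone",     "cooperate"): (3, 0),
        ("alone",     "alone"):     (3, 3),
    }

    for (a, b), (pa, pb) in PAYOFFS.items():
        print(f"{a:9} / {b:9} -> {pa}, {pb} (joint: {pa + pb})")
    # The joint payoff is maximized (10) only when both cooperate:
    # the whole exceeds the sum of the solo payoffs (3 + 3).

Something more from something less, in the precise sense that the cooperative total exceeds what the same individuals could produce separately.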

Works Cited

Curry, Oliver Scott. ‘Morality as Cooperation: A Problem-Centred Approach’. The Evolution of Morality, Springer, 2016, pp. 27–51.

Kim, Jaegwon. ‘Making Sense of Emergence’. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, vol. 95, no. 1/2, 1999, pp. 3–36.

Implicit Argument for Qualia

Stevan Harnad provides an embodied version of the Turing Test (TT) in Other Bodies, Other Minds by using a robot instead of a computer, calling it the Total Turing Test (TTT). He states that to be truly indistinguishable from a human, artificial minds will require the ability to express embodied behaviours in addition to linguistic capacities (Harnad 44). While the TT implicitly assumes language exists independently from the rest of human behaviour (Harnad 45), the TTT avoids problems arising from this assumption by including a behavioural component in the test (Harnad 46). The test works because of our tendency to infer that other humans have minds despite the fact that individuals have no direct evidence for this belief (Harnad 45). This assumption can be extended to robots as well, where embodied artificial agents which act sufficiently human will be treated as if they had minds (Harnad 46). Robots which pass the TTT can be said to understand symbols because these symbols have been grounded in non-symbolic structures or bottom-up sensory projections (Harnad 50-51). Therefore, embodiment seems to be necessary for social agents, as they will require an understanding of the world and its contents to appear humanlike.

These sensory projections are also known as percepts or qualia (Haikonen 225), and are therefore required for learning language. While Harnad’s intention may have been to avoid discussing metaphysical properties of the mind for the sake of discussing the TTT, his argument ends up providing support for the ontological structures involved in phenomenal consciousness. Although I didn’t mention it above, he uses this argument to refute Searle’s concerns about the Chinese Room, and the reason he is successful is that he identifies an ontological necessity. Robots which pass the TTT will have their own minds because the behaviours which persuade people to believe this are founded on the same processes that produce this capacity in humans.

Works Cited

Haikonen, Pentti O. ‘Qualia and Conscious Machines’. International Journal of Machine Consciousness, Apr. 2012, https://doi.org/10.1142/S1793843009000207.

Harnad, Stevan. ‘Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem’. Minds and Machines, vol. 1, no. 1, 1991, pp. 43–54.