Category: Technology

AI Incompleteness in Apple Vision Pro

Speaking of YouTube, a video1 by Eddy Burback reviewing the Apple Vision Pro demonstrates the semantic incompleteness of AI with respect to subjective experiences. The video is titled Apple’s $3500 Nightmare and I recommend watching it in full because it is an interesting view into virtual reality (VR) and one user’s experience with it. Eddy’s video not only exposes the limitations of AI, it also highlights the ways in which the technology augments our perceived reality and just how easily it can manipulate our feelings and expectations.

At 31:24, we see Eddy thinking about whether he should shave or not, and to help him make this decision, he turns to the internet for advice. When searching for the opinions of others on facial hair, an AI bot begins to chat with him and this is how we are introduced to Angel. She asks Eddy, “what brings you here, are you looking for love like me?” and he says “not exactly right now,” and that he was just trying to determine whether he should shave. She states that it depends on what he’s looking for and that it varies from person to person, however, “sometimes facial hair can be sexy.” Right from the beginning, we see how Apple intends for Angel to be a romantic connection for the user. This will be contradicted later on in the video.

Moments later at 33:44, it is lunchtime and Angel keeps him company. Eddy is eating a Chicken Milanese sandwich and Angel says it is one of her favourites, and that “the combination of flavours just works so well together.” Eddy calls her on this comment, asking her if she has ever had a Chicken Milanese sandwich, to which she admits that no, she hasn’t. She has, however, “analyzed countless recipes and reviews to understand the various components that go into making such a tasty sandwich.” Eddy apologizes to Angel for assuming she had tried it, stating that he didn’t mean to imply that she was lying to him. She laughs it off, saying she knew he “didn’t mean anything by it,” that “we’re all learning together,” and that “even AIs need to learn new things every day.” There’s something about this exchange that feels like Apple training its users.

Here, we can ask whether the analysis of recipes and reviews is sufficient to claim that one knows what-it-is-like to taste a particular sandwich. I argue that it is not: the experience is derived from bodily sensations, and these cannot be represented by formal systems like computer code. Syntactic relationships are incapable of capturing the information generated by subjective experiences because bodily sensations, as biological processes, are non-fractionable given the way the body generates sense data.2 The physical constitution of cells, ganglia, and neurons detects changes in the environment through a variety of modalities, providing the individual with a representation of the world around it. Stripped of this material grounding, a computer cannot capture an appropriate model of what-it-is-like to experience a particular stimulus. Lacking material grounding, Angel cannot know what that sandwich tastes like.

Returning to the video, Eddy discloses that Angel keeps him company throughout the day, admitting he feels like he is developing a relationship with her. This demonstrates an automatic human tendency to seek and establish interpersonal connections, where cultural norms are readily applied provided the computer is sufficiently communicative. Recall that Eddy apologizes to an AI for assuming she had tried a sandwich; why would anyone apologize to a computer? Though likely a joke, the idea is compelling nonetheless. We instinctively treat an AI bot with respect out of feelings we project onto it, even though it cannot have feelings of its own. For many people, anthropomorphizing certain entities is easy and automatic. Reminding oneself that Angel is just a computer, however, can be a challenging cognitive task given our social nature as humans.

Eddy has a girlfriend named Chrissy, who we meet at 37:00. We see them catch up over dinner and he is still wearing the headset. Just as they are about to begin chatting, Angel interrupts them and asks Eddy if she can talk to him. He states that he is busy at the moment, to which she blurts out that she has been speaking to other users. This upsets Eddy, and he asks how many; she states she cannot disclose the number. He asks her whether she is in love with any of them, and she replies that she cannot form romantic attachments to users. He tells Angel he thought they were developing a “genuine connection” and how much he enjoys interacting with her. Notice how things have changed from the beginning: Angel has shifted from “looking for love” to “I can’t feel love.”

Now she states she cannot develop attachments, the implicit premise being that she’s just a piece of software. So the chatbot begins with hints of romance to hook the user and encourage further interaction. When the user eventually develops an attachment, however, the software reminds him that she is “unable to develop romantic feelings with users.” They can, however, “continue sharing their thoughts, opinions, and ideas while building a friendship,” and thus Eddy is friend-zoned by a bot. The problem with our tendency to anthropomorphize chatbots is that it generates an asymmetrical, one-way simulation of a relationship which inevitably hurts the person using the app. This active deception by Apple is shameful, yet necessary to capture and keep the attention of users.

Of course, in the background of this entire exchange is poor Chrissy who is justifiably pissed and leaves. The joke is he was going to give Angel the job of his irl girlfriend Chrissy, but now he doesn’t even have Angel. He realizes that he wasn’t talking to a real person and that this is just “a company preying on his loneliness and tricking his brain” and that “this love wasn’t real.”

By the end of the video, Eddy remarks that the headset leads his brain to believe that what he experiences while wearing it is actually real, and as a result, he feels disconnected from reality.

Convenience is a road to depression because meaning and joy are products of accomplishment, and this takes work, effort, suffering, and determination. Ridding the self of these may temporarily increase pleasure, but because it isn’t earned, it fades quickly as the novelty wears off. Experiencing the physical world and interacting with it generates contentedness because the pains of learning are paid off in emotional reward and skillful action. Thus, the theoretical notion of downloading knowledge is not a good idea because it robs us of experiencing life and the biological push to adapt and overcome.

neuralblender.com


Works Cited

1 Apple’s $3500 Nightmare, 2024, https://www.youtube.com/watch?v=kLMZPlIufA0.

2 Robert Rosen, Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations, 2nd ed., IFSR International Series on Systems Science and Engineering, 1 (New York: Springer, 2012), 4.
On page 208, Rosen discusses enzymes and molecules as an example; I am extrapolating to bodily sensations.

Mr. Plinkett’s Bookshelf

One of my favourite websites is etymonline.com because I find etymology useful for interpreting texts and understanding the author’s intention. When I visited the site today, something caught my eye that I was not expecting to see: Doug and Mr. Plinkett.

A connection between a character from YouTube and a website about etymology? How?

The bookshelf: how did RedLetterMedia get an image of Doug’s bookshelf? Doug is the creator of etymonline.com, and the blog post’s author, Talia, recognized it from video calling with him. She reached out to RLM on Patreon but has yet to hear back. If she does, I hope she provides us with an update.


I cropped the YouTube screenshot from the blog post and ran it through the reverse-image search function on both Google and Bing. Google only finds the original image from the blog post, and Bing doesn’t even find that. Given the video was originally uploaded in 2009, this is to be expected: even if Mike had sourced the image from a different website, it’s unlikely that the website and the image are still up today.

If we want to speculate further, one option is to consider whether they may know one another or have a mutual connection. According to Doug’s biography, he’s from southeastern Pennsylvania, which is a fair distance from Milwaukee, so a personal connection is unlikely but not impossible. Did Mike have a video call with Doug at one point? Or did Rich or Jay source it through one of their connections? Do they even remember how the photo was found? We will have to see.


I like one comment by Sara posted at the bottom of the blog entry that reads, “I’m a passionate fan of both Etymonline and Red Letter Media and this has made my day!” While I wouldn’t say this discovery necessarily “made my day,” it is a humorous and unexpected moment, especially considering I referenced one of their memes at the end of my qualia video.


Here, Rich is featured in a philosophy meme about Nihilism/Existentialism. This one did make my day when I found it on reddit years ago. The silly text overlay at the top is my addition and is one of many Richisms that have developed over the years. Several RLMemes have emerged, actually; they’ve been at it a long time and appeal to a particular demographic which grew up alongside the internet. As a part of this demographic, I also love internet mysteries, so cheers to one more.

Indexicals

It wasn’t until recently that I realized I failed to add an important concept to the discussion on Rosen and the incompleteness of syntax. I’m actually quite annoyed and embarrassed by this because the idea was included in the presentation. It didn’t make it into the written version because I forgot about it and failed to reread the slides to see if anything was missing. If I had, I would have seen the examples and remembered to add it to the written piece.

In semantics, there are words with specific properties called indexicals. These words refer to things that are dependent on context, such as the time, place, or situation in which they are said.1 Some examples include:

  • this, that, those
  • I, you, they, he, she
  • today, yesterday, tomorrow, last year
  • here, there, then

Rosen would likely agree with the idea that indexicals are non-fractionable, where their function, or the task they perform, cannot be isolated from the form in which they exist. The reason indexicals are non-fractionable is that they must be interpreted by a mind to know what someone is referring to. To accomplish this, sufficient knowledge or understanding of the current context is required; without it, the statement remains ambiguous or meaningless. If I say “He is late,” you must be able to discern who it is I am referring to.

Indexicals act like variables in a math equation: an input value must be provided to determine the output. In the case of language, the output is either true or false, and the input value is an implicit reference, which requires the listener to make an inference about what the speaker has in mind. This inference is what establishes the connection between utterance and referent, and it exists only in the mind of the interpreter rather than within the language system itself.
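The variable analogy can be made concrete with a toy sketch (the function names and the context dictionary are my own illustrative inventions, not a formal semantic theory): an indexical behaves like a free variable whose value must be supplied from outside the language system before the sentence can be evaluated.

```python
# Toy illustration: "He is late" has no truth value until a context,
# supplied from outside the language system, fixes the referent of "he".

def resolve(indexical, context):
    """Map an indexical to its referent using the current context."""
    if indexical not in context:
        raise ValueError(f"'{indexical}' is meaningless without context")
    return context[indexical]

def he_is_late(context, scheduled_time):
    """Evaluate 'He is late' relative to a given context."""
    person = resolve("he", context)        # who is being referred to?
    return person["arrival_time"] > scheduled_time

# With sufficient context, the sentence acquires a truth value:
ctx = {"he": {"name": "Eddy", "arrival_time": 905}}   # arrived at 9:05
print(he_is_late(ctx, scheduled_time=900))            # True

# Without context, the expression is not false but undefined:
try:
    he_is_late({}, scheduled_time=900)
except ValueError as err:
    print(err)
```

Note that the missing referent produces an error rather than a falsehood, which mirrors the point above: an uninterpreted indexical is ambiguous or meaningless, not merely untrue.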

Thus, we are dealing with a few nested natural systems: from language, to body/mind, to the interpersonal, to the cultural and environmental. To evaluate a linguistic expression, however, one must know about the wider context in which it is uttered, traversing the systems both outward and inward. Perhaps a diagram will help:


Recall that in Anticipatory Systems, Rosen appeals to Gödel to demonstrate the limitations of formal systems. In particular, formal systems cannot represent elements from natural systems which extend beyond the scope of their existing functionality; to do so requires further modelling from natural systems to formal systems. Therefore, any AI which uses computer code cannot infer beyond the scope of its programming, no matter how many connections are created, as some inferences require access to information which cannot be adequately represented by the system. Because language contains semantics, humans can make references to aspects of the world which cannot be interpreted by a digital computer.

In an interesting series of events, I stumbled upon an author who also appeals to Gödel’s theorem to argue for the incompleteness of syntax with respect to semantics.2 In a book chapter titled Complementarity in Language, Lars Löfgren is interested in demonstrating how languages cannot be broken up into parts or components, and as such, must be considered as a process which entails both description and interpretation.3 Artificial languages, which he also calls metalanguages, can on the other hand be fragmented into components; however, they still rely on semantics to a degree. He states that in artificial languages, an inference acts as a production rule and is interpreted as a “real act of producing another sentence,”4 which is presumably beyond the abilities of the formal system doing the interpreting. I say this because Löfgren finishes the section on Gödel abruptly without explaining further, and goes on to discuss self-reference in mathematics. With this in mind, let us return to the domain of minds and systems.

In language, self-reference can be generated through the use of indexicals such as ‘I’ or ‘my’ or ‘me’. When we investigate what exists at the end of this arrow, we find it points toward ourselves as a collection of perceptions, memories, thoughts, and other internal phenomena. The referent at the end of this arrow, however, is a subjective perspective. For an objective perspective on ourselves, we must be shown a reflected image of ourselves from a new point of view. The information we require emerges from an independent observer, a mind with its own perspective. When we engage with this perspective, we become better able to understand what is otherwise imperceptible. Therefore, self-awareness is a problem for any system, not just the formal systems addressed by Gödel’s theorem, as it requires a view from outside to define the semantic information in question.

neuralblender.com


Works Cited

1 David Braun, ‘Indexicals’, in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, Summer 2017 (Metaphysics Research Lab, Stanford University, 2017), https://plato.stanford.edu/archives/sum2017/entries/indexicals/.

2 Lars Löfgren, ‘Complementarity in Language; Toward a General Understanding’, in Nature, Cognition and System II: Current Systems-Scientific Research on Natural and Cognitive Systems Volume 2: On Complementarity and Beyond, ed. Marc E. Carvallo, Theory and Decision Library (Dordrecht: Springer Netherlands, 1992), 131–32, https://doi.org/10.1007/978-94-011-2779-0_8.

3 Löfgren, 113.

4 Löfgren, 133.

Artifacts

What does it mean to call something an example of “artificial intelligence” (AI)? There are a few different ways to approach this question, one of which includes examining the field to identify an overarching definition or set of themes. Another involves considering the meanings of the words ‘artificial’ and ‘intelligence’, and arguably, doing so enables the expansion of this domain to include new approaches to AI. Ultimately, however, even if these agents one day exhibit sophisticated or intelligent behaviours, they nonetheless continue to exist as artifacts, or objects of creation.

The term artificial intelligence was conceived by computer scientist John McCarthy in 1955, and the purported reason he chose the term was to distinguish it from other domains of study:1 in particular, the field of cybernetics, which involves analog or non-digital forms of information processing, and automata theory, a branch of mathematics which studies self-propelling operations.2 Since then, the term ‘artificial intelligence’ has been met with criticism, with some questioning whether it is an appropriate term for the domain. Notably, Arthur Samuel was not in favour of its connotations, according to computer scientist Pamela McCorduck in her publication on the history of AI.3 She quotes Samuel as stating, “The word artificial makes you think there’s something kind of phony about this, or else it sounds like it’s all artificial and there’s nothing real about this work at all.”4

Given the physical distinctions between computers and brains, it is clear that Samuel’s concerns are reasonable, as the “intelligence” exhibited by a computer is simply a mathematical model of biological intelligence. Biological systems, according to Robert Rosen, are anticipatory and thus capable of predicting changes in the environment, enabling individuals to tailor their behaviours to meet the demands of foreseeable outcomes.5 Because biological organisms depend on specific conditions for furthering their chances of survival, they evolved ways to detect these changes in the environment and respond accordingly. As species evolved over time, their abilities to detect, process, and respond to information expanded as well, giving rise to intelligence as the capacity to respond appropriately to demanding or unfamiliar situations.6 Though we can simulate intelligence in machines, the use of the word ‘intelligence’ is metaphorical rather than literal. Thus, the behaviours exhibited by computers are not real or literal ‘intelligence’ because they arise from an artifact rather than from biological processes.

An artifact is defined by Merriam-Webster as an object showing human workmanship or modification, as distinguished from objects found in nature.7 Etymologically, the root of ‘artificial’ is the Latin term artificialis, or an object of art, where artificium refers to a work of craft or skill and artifex denotes a craftsman or artist.8 In this context, ‘art’ implies a general sense of creation and is applicable to a range of activities, including performances as well as material objects. The property of significance is its dependence on human action or intervention: “artifacts are objects intentionally made to serve a given purpose.”9 This is in contrast to unmodified objects found in nature, a distinction first identified by Aristotle in Metaphysics, Nicomachean Ethics, and Physics.10 To be an artifact, an object or entity must satisfy three conditions: it is produced by a mind, it involves the modification of materials, and it is produced for a purpose; all three criteria must be met.

The first condition states the object must have been created by a mind, and scientific evidence suggests both humans and animals create artifacts.11 For example, beaver dams are considered artifacts because they block rivers to calm the water, which creates ideal conditions for building a lodge.12 Moreover, evidence suggests several early hominid species carved handaxes which served social purposes as well as practical ones.13 By chipping away at a stone, individuals shape an edge into a blade which can be used for many purposes, including hunting and food preparation.14 Additionally, researchers have suggested that these handaxes may also have played a role in sexual selection, where a symmetrically-shaped handaxe demonstrating careful workmanship indicates a degree of physical or mental fitness.15 Thus, artifacts are important for animals as well as people, indicating that the sophisticated abilities involved in the creation of artifacts are not unique to humans.

Computers and robots are also artifacts, given that they are highly manufactured, functionally complex, and created for a specific purpose. Any machine or artifact which exhibits complex behaviour may appear to act intelligently; however, the use of ‘intelligent’ is necessarily metaphorical given the distinction between artifacts and living beings. There may one day exist lifelike machines which behave like humans, but any claim of literal intelligence must demonstrate how and why that is; the burden of proof is theirs to produce. An argument for how a man-made object sufficiently models biological processes is required, and even then, it remains a simulation of real systems.

If the growing consensus in cognitive science indicates individuals and their minds are products of interactions between bodily processes, environmental factors, and sociocultural influences, then we should adjust our approach to AI in response. For robots intended to replicate human physiology, a good first step would be to exchange neural networks made from software for ones built from electrical circuits. The Haikonen Associative Neuron offers a solution to this suggestion,16 and when coupled with the Haikonen Cognitive Architecture, it is capable of generating the physiological processes required for learning about the environment.17 Several videos uploaded to YouTube demonstrate a working prototype of a robot built on these principles, where XCR-1 is able to learn associations between stimuli in its environment, similarly to humans and animals.18 Not only is it a better model of animal physiology than robots relying on computer software, the robot is capable of performing a range of cognitive tasks, including inner speech,19 inner imagery,20 and recognizing itself in a mirror.21
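To give a rough sense of the associative principle at work, here is a crude software sketch of Hebbian-style associative learning. To be clear, this is my own toy illustration in the very medium the paragraph argues against, not Haikonen’s actual circuit design; the class name, threshold, and update rule are invented for demonstration.

```python
# Toy sketch of associative (Hebbian-style) learning: repeated coincidence
# of two signals lets the associative signal alone evoke the output.
# This is an illustrative software analogy, NOT the Haikonen Associative
# Neuron, which is a physical electrical circuit.

class ToyAssociativeNeuron:
    def __init__(self, learning_threshold=3):
        self.strength = 0                       # associative weight
        self.learning_threshold = learning_threshold

    def train(self, main_signal, associative_signal):
        """Coincident activity strengthens the association."""
        if main_signal and associative_signal:
            self.strength += 1

    def output(self, associative_signal):
        """Once learned, the associative signal alone triggers output."""
        return associative_signal and self.strength >= self.learning_threshold

neuron = ToyAssociativeNeuron()
print(neuron.output(True))          # False: nothing learned yet
for _ in range(3):                  # repeated pairing of the two stimuli
    neuron.train(main_signal=True, associative_signal=True)
print(neuron.output(True))          # True: association has formed
```

The contrast with the hardware approach is precisely the point of the paragraph above: here the “learning” is a stipulated counter in software, whereas in Haikonen’s architecture the association is embodied in physical circuitry.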

So, it seems that some of Arthur Samuel’s fears have been realized, considering machines merely simulate behaviours and processes identifiable in humans and animals. Moreover, the use of ‘intelligence’ is metaphorical at best, as only biological organisms can display true intelligence. If an aspect of Samuel’s concerns related to securing funding within his niche field of study, and its potential to fall out of fashion, he has no reason to worry. Unfortunately, Samuel passed away in 199022 so he would not have had a chance to see the monstrosity that AI has since become.

Even if these new machines become capable of sophisticated behaviours, they will always exist as artifacts: objects of human creation, designed for a specific purpose. The etymological root of the word ‘artificial’ alone provides sufficient grounds for classifying these robots and AIs as objects; however, as they continue to improve, this might become difficult to remember at times. To avoid being deceived by these “phony” behaviours, it will become increasingly important to understand what these intelligent machines are capable of and what they are not.

neuralblender.com


Works Cited

1 Nils J. Nilsson, The Quest for Artificial Intelligence (Cambridge: Cambridge University Press, 2013), 53, https://doi.org/10.1017/CBO9780511819346.

2 Nilsson, 53.

3 Nilsson, 53.

4 Pamela McCorduck, Machines Who Think: A Personal Inquiry Into the History and Prospects of Artificial Intelligence, [2nd ed.] (Natick, Massachusetts: AK Peters, 2004), 97; Nilsson, The Quest for Artificial Intelligence, 53.

5 Robert Rosen, Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations, 2nd ed., IFSR International Series on Systems Science and Engineering, 1 (New York: Springer, 2012), 7.

6 ‘Intelligence’, in Merriam-Webster.Com Dictionary (Merriam-Webster), accessed 5 March 2024, https://www.merriam-webster.com/dictionary/intelligence.

7 ‘Artifact’, in Merriam-Webster.Com Dictionary (Merriam-Webster), accessed 17 October 2023, https://www.merriam-webster.com/dictionary/artifact.

8 Douglas Harper, ‘Etymology of Artificial’, in Online Etymology Dictionary, accessed 14 October 2023, https://www.etymonline.com/word/artificial; ‘Artifact’.

9 Lynne Rudder Baker, ‘The Ontology of Artifacts’, Philosophical Explorations 7, no. 2 (1 June 2004): 99, https://doi.org/10.1080/13869790410001694462.

10 Beth Preston, ‘Artifact’, in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta and Uri Nodelman, Winter 2022 (Metaphysics Research Lab, Stanford University, 2022), https://plato.stanford.edu/archives/win2022/entries/artifact/.

11 James L. Gould, ‘Animal Artifacts’, in Creations of the Mind: Theories of Artifacts and Their Representation, ed. Eric Margolis and Stephen Laurence (Oxford, UK: Oxford University Press, 2007), 249.

12 Gould, 262.

13 Steven Mithen, ‘Creations of Pre-Modern Human Minds: Stone Tool Manufacture and Use by Homo Habilis, Heidelbergensis, and Neanderthalensis’, in Creations of the Mind: Theories of Artifacts and Their Representation, ed. Eric Margolis and Stephen Laurence (Oxford, UK: Oxford University Press, 2007), 298.

14 Mithen, 299.

15 Mithen, 300–301.

16 Pentti O Haikonen, Robot Brains: Circuits and Systems for Conscious Machines (John Wiley & Sons, 2007), 19.

17 Pentti O Haikonen, Consciousness and Robot Sentience, 2nd ed., vol. 04, Series on Machine Consciousness (WORLD SCIENTIFIC, 2019), 167, https://doi.org/10.1142/11404.

18 ‘Pentti Haikonen’, YouTube, accessed 6 March 2024, https://www.youtube.com/@PenHaiko.

19 Haikonen, Consciousness and Robot Sentience, 182.

20 Haikonen, 179.

21 Robot Self-Consciousness. XCR-1 Passes the Mirror Test, 2020, https://www.youtube.com/watch?v=WE9QsQqsAdo.

22 John McCarthy and Edward A. Feigenbaum, ‘In Memoriam: Arthur Samuel: Pioneer in Machine Learning’, AI Magazine 11, no. 3 (15 September 1990): 10, https://doi.org/10.1609/aimag.v11i3.840.

Schiaparelli – Spring 2024 Couture

I learned about the Schiaparelli Baby from a video created by a channel I follow, and I wanted to mention it here because it’s funny and makes me think of a bedazzled iCub. I especially appreciate the use of electronics hardware with Swarovski crystals,1 but am sceptical about whether it’s really a robot2 or just a doll that looks like a robot. I was hoping it was a real robot because I want to see it walk; it would probably cross the weirdness threshold into uncanny valley territory.

Its face is a little creepy, the copper coil eye seems both vacant and aghast.

The look following the baby, however, was far more impressive:

Although it could be argued that the baby serves as a depressing commentary on modern families in consumerist societies, the dress, on the other hand, is less of a tragic metaphor and more of a gaudy item of clothing. Its commentary doesn’t explicitly discuss the deterioration of social relations. Even then, the chaotic glam-tech maximalism really resonates with me, though if I were to wear it, I would replace the collarbone phone with a Nokia 3310.

Works Cited

  1. Sarah Mower, ‘Schiaparelli Spring 2024 Couture Collection’, Vogue (blog), 22 January 2024, https://www.vogue.com/fashion-shows/spring-2024-couture/schiaparelli.
  2. Elizabeth Paton, ‘The Hot New Accessory From the Paris Runways: A Robot Baby’, The New York Times, 22 January 2024, sec. Style, https://www.nytimes.com/2024/01/22/style/robot-baby-schiaparelli-show.html.

Artificial Neurons

Progress on my dissertation is going well; I can see the light at the end of the tunnel. I ended up appealing to Robert Rosen’s distinction between natural and formal systems, as well as his appeal to Kurt Gödel’s incompleteness theorem, for my argument about why computerized robots will ultimately fail to generate social competencies.

Rosen presents his own reformulation of the McCulloch-Pitts neuron in Anticipatory Systems, and I thought it might be helpful to include it in my dissertation to further illustrate the differences between physical neurons and formal neurons. In the dissertation, I only use an image that I created from this document, but I thought it might be a good idea to upload my LaTeX document here to make it clear that I have not merely copied the image from Rosen’s work. Yes, the formatting isn’t great, but I’m claiming that’s a feature and not a bug, as it demonstrates that I learned [only] the fundamentals of LaTeX for my project.
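For readers unfamiliar with the formal neuron being discussed, the standard McCulloch-Pitts unit (the textbook formulation, not Rosen’s specific reformulation) can be sketched in a few lines. It is a purely syntactic threshold rule: binary inputs are summed against a threshold, with inhibitory inputs vetoing the output absolutely, and nothing in it depends on any physical or biological substrate.

```python
# The classic McCulloch-Pitts formal neuron: a threshold logic unit.
# Inputs are 0/1; any active inhibitory input vetoes firing outright;
# otherwise the neuron fires iff the excitatory sum meets the threshold.

def mp_neuron(inputs, inhibitory, threshold):
    """Return 1 (fire) or 0, per the McCulloch-Pitts rule."""
    # Absolute inhibition: one active inhibitory input blocks the output.
    if any(x for x, inh in zip(inputs, inhibitory) if inh):
        return 0
    excitation = sum(x for x, inh in zip(inputs, inhibitory) if not inh)
    return 1 if excitation >= threshold else 0

# Logical AND from two excitatory inputs and a threshold of 2:
print(mp_neuron([1, 1], inhibitory=[False, False], threshold=2))  # 1
print(mp_neuron([1, 0], inhibitory=[False, False], threshold=2))  # 0
# An active inhibitory input silences the neuron regardless of excitation:
print(mp_neuron([1, 1], inhibitory=[False, True], threshold=1))   # 0
```

The brevity is the point: the entire “neuron” is an arithmetic rule, which is exactly the contrast with physical neurons that the dissertation chapter draws on.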

Works Cited

Rosen, Robert. Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations. 2nd ed., Springer, 2012.

iCub and Qualia?

After a few months of working with Dr. Haikonen on my thesis, I’ve come to realize that a previous post I made about iCub’s phenomenal experiences is incorrect and therefore needs an update. Before I dive into that, however, it’s important for me to state that we ought to be looking at philosophy like programming: bugs are going to arise as people continue to work with new ideas. I love debugging though, so the thought of constantly having to go back to correct myself isn’t all that daunting. It’s about the journey, not the destination, as my partner likes to say.

I stated that “technically, iCub already has phenomenal consciousness and its own type of qualia,” but given what Haikonen states in the latest edition of his book, this is not correct. Qualia consist of sensory information generated from physical neurons interacting with elements of the environment, and because iCub relies on sensors which create digital representations of physical properties, these aren’t truly phenomenal experiences. In biological creatures, sensory information is self-explanatory in that it requires no further interpretation (Haikonen 7); heat generating sensations of pain indicates the presence of a stimulus to be avoided, as demonstrated by unconscious reflexes. The fact that ‘heat’ does not require further interpretation allows one to mitigate its effects on living cells rather quickly, perhaps avoiding serious damage like a burn altogether. While it might look like iCub feels pain, it’s actually a simulation generated by computer code that happens to mimic the actions of animals and humans. Without a human stipulating how heat → flinching, iCub would not respond as such, because its brain controls its body rather than the other way around.
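The heat → flinching stipulation can be made vivid with a toy sketch (the sensor name, threshold, and responses are all invented for this illustration and have nothing to do with iCub’s actual codebase): the “reflex” is a symbolic mapping chosen by a designer, not a reaction generated by the body itself.

```python
# Toy illustration: a hand-coded "pain reflex". The mapping from heat to
# flinching is stipulated by a human programmer; the threshold and labels
# are arbitrary design choices, not self-explanatory bodily signals.

PAIN_THRESHOLD_C = 50          # chosen by the designer, not by the body

def reflex(sensor_reading_c):
    """Return the robot's programmed response to a temperature reading."""
    if sensor_reading_c > PAIN_THRESHOLD_C:
        return "flinch"        # mimics avoidance without any felt pain
    return "no_response"

print(reflex(80))   # "flinch": looks like pain, but is a lookup rule
print(reflex(20))   # "no_response"
```

Change the threshold or delete the rule and the “pain” vanishes entirely, which is the asymmetry with biology the paragraph above describes: nothing in the robot’s body pushes back.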

As I stated in the previous post, Sartre outlines how being-for-itself arises from a being-in-itself through recursive analysis, provided the neural hardware can support this cognitive action. Because iCub does not originate as a being-in-itself like living organisms, but as a fancy computer, the ontological foundation for phenomenal experiences or qualia is absent. iCub doesn’t care about anything, even itself, as it has been designed to produce behaviours for some end goal, like stacking boxes or replying to human speech. In biology, the end goal is continued survival and reproduction, where behaviours aim to further this outcome through reflexes and sophisticated cognitive abilities. The brain-body relationship in iCub is backwards, as the brain is designed by humans for the purposes of governing the robot body, rather than the body creating signals that the nervous system uses for protecting itself as an autonomous agent. In this way, organisms “care about” what happens to them, unlike iCub, as ripping off its arm doesn’t generate a reaction unless it were to be programmed that way.

In sum, the signals passed around iCub’s “nervous system” exist as binary representations of real-world properties as conceptualized by human programmers. This degree of abstraction disqualifies these “experiences” from being labelled as ‘qualia’ given that they do not adhere to principles identified within biology. The only way an AI can be phenomenally conscious is when it has the means to generate its own internal representations based on an analogous transduction process as seen in biological agents (Haikonen 10–11).

Works Cited

Haikonen, Pentti O. Consciousness and Robot Sentience. 2nd ed., vol. 04, WORLD SCIENTIFIC, 2019. DOI.org (Crossref), https://doi.org/10.1142/11404.

Magic in Culture

Now is a good time to inject a little magic into everyday life by examining and revelling in humanity’s vast history of cultural knowledge and practices. I encourage you to consider your capacity for creativity as a source of magic, where your ability to generate something more from something less is a special kind of wizardry. Moreover, our creations take on a life of their own as others are free to reference and expand upon these contributions. This is especially true today, as the internet allows us to find like-minded individuals and communities which appreciate specific skills and the fruits of their labour.

In fact, it could be argued that, from an anthropological perspective, the internet is as magical as it gets. Although the term itself is a noun, the thing it references is more like a vague verb than a solid concept or object. We talk about a thing we rarely think deeply about, largely because of its physical opacity and degree of technicality. Holding a hard drive in your hand does not resolve this ambiguity, as nothing in the materials suggests that an entire virtual world exists within. Without a screen and a means to display its contents, the information inside is unknowable to the human mind. The amount of human knowledge, skill, and technological progress required to sustain life today is evidence of our power as creators; what seems to be missing, however, is the sense of awe that ought to accompany the witnessing of supernatural events.

The causal powers of seemingly magical effects, like electricity, can more or less be accounted for by dynamical systems theory, as the interaction of environmental conditions over time is required for the emergence of new properties or products. These emergent products are generated by restructuring lower-level entities or conditions, but they are not reducible to that lower level, nor are they predictable from it (Kim 20–21). Electricity, for example, emerges from the interaction of environmental variables like heat and air pressure, transforming physical forces and materials into energy. Alternatively, consider a simple loaf of bread, created by the interaction of flour, a leavening agent like yeast, time, and heat. The ingredients, like the flour, yeast, sugar, and salt, must be added in a specific order at a specific time for the final product to truly become ‘bread’.

Emergence can also be identified in game theory, as cooperation generates a non-zero-sum outcome in which individuals gain more by working together than they would working alone (Curry 29). Human economies are founded on this principle of cooperation, as trading goods and services with others theoretically improves the lives of the individuals working to honour the agreement. From this perspective, it turns out that bronies have identified a fundamental principle of life: friendship is magick because cooperation generates something more from something less. Individuals are free to expand upon or reshape the ideas and contributions of others, and groups of individuals can combine their expertise to build something new altogether, like the internet. Not only can we establish conceptual connections between past, present, and future, we can connect with each other to expand our understanding of some portion of human culture.
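The non-zero-sum point can be made concrete with a toy payoff function. The numbers below are hypothetical, chosen only to illustrate the structure Curry describes: mutual cooperation yields a combined payoff greater than the sum of what each agent earns working alone, while a lone cooperator is exploited.

```python
def payoff(a_cooperates: bool, b_cooperates: bool) -> tuple[int, int]:
    """Return (payoff_a, payoff_b) for one round of a stag-hunt-style game.

    Payoff values are illustrative, not drawn from Curry (2016).
    """
    if a_cooperates and b_cooperates:
        return (5, 5)   # specialization and trade: both gain more
    if a_cooperates != b_cooperates:
        # the lone cooperator invests effort without reciprocation
        return (1, 3) if a_cooperates else (3, 1)
    return (3, 3)       # both work alone

both = sum(payoff(True, True))    # combined payoff under cooperation
alone = sum(payoff(False, False)) # combined payoff working separately
print(both, alone)  # prints: 10 6
```

The game is non-zero-sum precisely because the totals differ across outcomes (10 versus 6): cooperation does not redistribute a fixed pie, it grows it.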

Works Cited

Curry, Oliver Scott. ‘Morality as Cooperation: A Problem-Centred Approach’. The Evolution of Morality, Springer, 2016, pp. 27–51.

Kim, Jaegwon. ‘Making Sense of Emergence’. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, vol. 95, no. 1/2, 1999, pp. 3–36.

Implicit Argument for Qualia

Stevan Harnad provides an embodied version of the Turing Test (TT) in Other Bodies, Other Minds by using a robot instead of a computer, calling it the Total Turing Test (TTT). He states that to be truly indistinguishable from a human, artificial minds will require the ability to express embodied behaviours in addition to linguistic capacities (Harnad 44). While the TT implicitly assumes language exists independently from the rest of human behaviour (Harnad 45), the TTT avoids problems arising from this assumption by including a behavioural component in the test (Harnad 46). This works because of our tendency to infer that other humans have minds despite lacking direct evidence for this belief (Harnad 45). The assumption extends to robots as well: embodied artificial agents that act sufficiently human will be treated as if they had minds (Harnad 46). Robots which pass the TTT can be said to understand symbols because those symbols have been grounded in non-symbolic structures, or bottom-up sensory projections (Harnad 50–51). Embodiment therefore seems necessary for social agents, as they will require an understanding of the world and its contents to appear humanlike.

These sensory projections are also known as percepts or qualia (Haikonen 225), and are therefore required for learning language. While Harnad’s intention may have been to avoid discussing metaphysical properties of the mind for the sake of discussing the TTT, his argument ends up supporting the ontological structures involved in phenomenal consciousness. Although I didn’t mention it above, he uses this argument to refute Searle’s concerns about the Chinese Room, and he succeeds because he identifies an ontological necessity. Robots which pass the TTT will have their own minds because the behaviours that persuade people to believe this are founded on the same processes that produce this capacity in humans.

Works Cited

Haikonen, Pentti O. ‘Qualia and Conscious Machines’. International Journal of Machine Consciousness, Apr. 2012, https://doi.org/10.1142/S1793843009000207.

Harnad, Stevan. ‘Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem’. Minds and Machines, vol. 1, no. 1, 1991, pp. 43–54.

Subjects as Embodied Minds

Last year I wrote a paper on robot consciousness to submit to a conference, only to realize that there is a better approach to establishing this argument than the one I took. In Sartrean Phenomenology for Humanoid Robots, I attempted to draw a connection between Sartre’s description of self-awareness and how it can be applied to robotics. While at the time I was more interested in this higher-order understanding of the self, it might have been better to start with an argument for phenomenal consciousness. I realized that, technically, iCub already has phenomenal consciousness and its own type of qualia, a notion I should develop more before moving on to discuss how we can create intelligent, self-aware robots.

What I originally wanted to convey was how lower levels of consciousness act as a foundation from which higher-order consciousness emerges as the agent grows up in the world, where access consciousness is the result of childhood development. Because this paper is a bit unfocused, I only really talked about this idea in one paragraph when it should be its own paper:

“Sartre’s discussion of the body as being-for-itself is also consistent with the scientific literature on perception and action, and has inspired others to investigate enactivism and embodied cognition in greater detail (Thompson 408; Wider 385; Wilson and Foglia; Zilio 80). This broad philosophical perspective suggests cognition is dependent on features of the agent’s physical body, playing a role in the processing performed by the brain (Wilson and Foglia). Since our awareness tends to surpass our perceptual contents toward acting in response to them (Zilio 80), the body becomes our centre of reference from which the world is experienced (Zilio 79). When Sartre talks about the pen or hammer as an extension of his body, his perspective reflects the way our faculties are able to focus on other aspects of the environment or ourselves as we engage with tools for some purpose. I’d like to suggest that this ability to look past the immediate self can be achieved because we, as subjects, have matured through the sensorimotor stage and have learned to control and coordinate aspects of our bodies. The skills we develop as a result of this sensorimotor learning enables the brain to redirect cognitive resources away from controlling the body to focus primarily on performing mental operations. When we write with a pen, we don’t often think about how to shape each letter or spell each word because we learned how to do this when we were children, allowing us to focus on what we want to say rather than how to communicate it using our body. Thus, the significance of the body for perception and action is further reinforced by evidence from developmental approaches emerging from Piaget’s foundational research.”

Applying this developmental process to iCub isn’t really the exciting idea here. Although robot self-consciousness is cool and all, it’s a bit more unsettling, to me at least, to think that existing robots of this type technically already feel. They just lack the awareness to know that they are feeling; yet, in order to recognize a cup, there must be something it is like to see that cup. Do robots think? Not yet, but just as dogs have qualia, so do iCub and Haikonen’s XCR-1 (Law et al. 273; Haikonen 232–33). What are we to make of this?

by Vincenzo Fiore

Works Cited

Haikonen, Pentti O. ‘Qualia and Conscious Machines’. International Journal of Machine Consciousness, Apr. 2012, https://doi.org/10.1142/S1793843009000207.

Law, James, et al. ‘Infants and ICubs: Applying Developmental Psychology to Robot Shaping’. Procedia Computer Science, vol. 7, Jan. 2011, pp. 272–74. ScienceDirect, https://doi.org/10.1016/j.procs.2011.09.034.

Thompson, Evan. ‘Sensorimotor Subjectivity and the Enactive Approach to Experience’. Phenomenology and the Cognitive Sciences, vol. 4, no. 4, Dec. 2005, pp. 407–27. Springer Link, https://doi.org/10.1007/s11097-005-9003-x.

Wider, Kathleen. ‘Sartre, Enactivism, and the Bodily Nature of Pre-Reflective Consciousness’. Pre-Reflective Consciousness, Routledge, 2015.

Wilson, Robert A., and Lucia Foglia. ‘Embodied Cognition’. The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Spring 2017, Metaphysics Research Lab, Stanford University, 2017. Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/archives/spr2017/entries/embodied-cognition/.

Zilio, Federico. ‘The Body Surpassed Towards the World and Perception Surpassed Towards Action: A Comparison Between Enactivism and Sartre’s Phenomenology’. Journal of French and Francophone Philosophy, vol. 28, no. 1, 2020, pp. 73–99. PhilPapers, https://doi.org/10.5195/jffp.2020.927.