
Artifacts

What does it mean to call something an example of “artificial intelligence” (AI)? There are a few different ways to approach this question, one of which includes examining the field to identify an overarching definition or set of themes. Another involves considering the meanings of the words ‘artificial’ and ‘intelligence’, and arguably, doing so enables the expansion of this domain to include new approaches to AI. Ultimately, however, even if these agents one day exhibit sophisticated or intelligent behaviours, they nonetheless continue to exist as artifacts, or objects of creation.

The term ‘artificial intelligence’ was coined by computer scientist John McCarthy in the mid-1950s, reportedly to distinguish the new field from other domains of study.1 In particular, he wanted to set it apart from cybernetics, which involves analog or non-digital forms of information processing, and from automata theory, a branch of mathematics that studies self-operating machines.2 Since then, the term ‘artificial intelligence’ has been met with criticism, with some questioning whether it is an appropriate name for the domain. In particular, Arthur Samuel was not in favour of its connotations, according to computer scientist Pamela McCorduck in her publication on the history of AI.3 She quotes Samuel as stating, “The word artificial makes you think there’s something kind of phony about this, or else it sounds like it’s all artificial and there’s nothing real about this work at all.”4

Given the physical distinctions between computers and brains, Samuel’s concerns seem reasonable, as the “intelligence” exhibited by a computer is simply a mathematical model of biological intelligence. Biological systems, according to Robert Rosen, are anticipatory and thus capable of predicting changes in the environment, enabling individuals to tailor their behaviours to meet the demands of foreseeable outcomes.5 Because biological organisms depend on specific conditions for furthering their chances of survival, they evolved ways to detect these changes in the environment and respond accordingly. As species evolved over time, their abilities to detect, process, and respond to information expanded as well, giving rise to intelligence as the capacity to respond appropriately to demanding or unfamiliar situations.6 Though we can simulate intelligence in machines, the use of the word ‘intelligence’ is metaphorical rather than literal. Thus, the behaviours exhibited by computers are not real or literal ‘intelligence’ because they arise from an artifact rather than from biological processes.

An artifact is defined by Merriam-Webster as an object showing human workmanship or modification, as distinguished from objects found in nature.7 Etymologically, the root of ‘artificial’ is the Latin term artificialis, an object of art, where artificium refers to a work of craft or skill and artifex denotes a craftsman or artist.8 In this context, ‘art’ implies a general sense of creation and is applicable to a range of activities, including performances as well as material objects. The significant property is dependence on human action or intervention: “artifacts are objects intentionally made to serve a given purpose.”9 This is in contrast to unmodified objects found in nature, a distinction first identified by Aristotle in the Metaphysics, Nicomachean Ethics, and Physics.10 To be an artifact, an object must satisfy three conditions: it is produced by a mind, it involves the modification of materials, and it is produced for a purpose; an object or entity must meet all three criteria.

The first condition states that the object must have been created by a mind, and scientific evidence suggests both humans and animals create artifacts.11 For example, beaver dams are considered artifacts because they block rivers to calm the water, which creates ideal conditions for building a lodge.12 Moreover, evidence suggests several early hominid species carved handaxes which served social purposes as well as practical ones.13 By chipping away at a stone, individuals shape an edge into a blade which can be used for many purposes, including hunting and food preparation.14 Additionally, researchers have suggested that these handaxes may also have played a role in sexual selection, where a symmetrically shaped handaxe demonstrating careful workmanship indicates a degree of physical or mental fitness.15 Thus, artifacts are important for animals as well as people, indicating that the sophisticated abilities involved in creating artifacts are not unique to humans.

Computers and robots are also artifacts, given that they are highly manufactured, functionally complex, and created for a specific purpose. Any machine or artifact which exhibits complex behaviour may appear to act intelligently; however, the use of ‘intelligent’ is necessarily metaphorical given the distinction between artifacts and living beings. There may one day exist lifelike machines which behave like humans, but any claim of literal intelligence must demonstrate how and why that is so; the burden of proof lies with those making the claim. An argument for how a man-made object sufficiently models biological processes is required, and even then, the result remains a simulation of real systems.

If the growing consensus in cognitive science is that individuals and their minds are products of interactions between bodily processes, environmental factors, and sociocultural influences, then we should adjust our approach to AI in response. For robots intended to replicate human physiology, a good first step would be to exchange neural networks made from software for ones built from electrical circuits. The Haikonen Associative Neuron offers one such solution,16 and when coupled with the Haikonen Cognitive Architecture, it is capable of generating the physiological processes required for learning about the environment.17 Several videos uploaded to YouTube demonstrate a working prototype of a robot built on these principles, where XCR-1 is able to learn associations between stimuli in its environment, much as humans and animals do.18 Not only is it a better model of animal physiology than robots relying on computer software; it is also capable of performing a range of cognitive tasks, including inner speech,19 inner imagery,20 and recognizing itself in a mirror.21
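Haikonen’s associative neuron is a hardware circuit, but the behaviour attributed to XCR-1 above, learning that one stimulus goes with another after repeated pairing, can be illustrated with a small software toy. The sketch below is a generic Hebbian-style association of my own, not Haikonen’s design; the class name, learning rate, and threshold are arbitrary assumptions:

```python
# Toy illustration of associative learning between two stimuli: if a cue and
# a target signal co-occur often enough, the cue alone comes to evoke the
# target. Generic Hebbian-style update; not Haikonen's circuit-level neuron.

class AssociativeUnit:
    def __init__(self, threshold=0.5, rate=0.2):
        self.weight = 0.0          # learned strength of the cue -> target link
        self.threshold = threshold # weight needed before the cue evokes the target
        self.rate = rate           # how quickly repeated pairings strengthen the link

    def train(self, cue_active, target_active):
        # Strengthen the association only when cue and target co-occur.
        if cue_active and target_active:
            self.weight += self.rate * (1.0 - self.weight)

    def recall(self, cue_active):
        # After enough pairings, the cue alone is sufficient to evoke the target.
        return bool(cue_active) and self.weight >= self.threshold

unit = AssociativeUnit()
for _ in range(5):                          # five paired presentations
    unit.train(cue_active=True, target_active=True)
print(unit.recall(cue_active=True))         # True once the association is learned
```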

So it seems that some of Arthur Samuel’s fears have been realized, considering machines merely simulate behaviours and processes identifiable in humans and animals. Moreover, the use of ‘intelligence’ is metaphorical at best, as only biological organisms can display true intelligence. If part of Samuel’s concern related to securing funding within his niche field of study, and its potential to fall out of fashion, he had no reason to worry. Unfortunately, Samuel passed away in 199022 so he never had the chance to see the monstrosity that AI has since become.

Even if these new machines become capable of sophisticated behaviours, they will always exist as artifacts: objects of human creation, designed for a specific purpose. The etymological root of the word ‘artificial’ alone provides sufficient grounds for classifying these robots and AIs as objects; however, as they continue to improve, this may become difficult to remember at times. To avoid being deceived by these “phony” behaviours, it will become increasingly important to understand what these intelligent machines are capable of and what they are not.

Image: neuralblender.com


Works Cited

1 Nils J. Nilsson, The Quest for Artificial Intelligence (Cambridge: Cambridge University Press, 2013), 53, https://doi.org/10.1017/CBO9780511819346.

2 Nilsson, 53.

3 Nilsson, 53.

4 Pamela McCorduck, Machines Who Think: A Personal Inquiry Into the History and Prospects of Artificial Intelligence, [2nd ed.] (Natick, Massachusetts: AK Peters, 2004), 97; Nilsson, The Quest for Artificial Intelligence, 53.

5 Robert Rosen, Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations, 2nd ed., IFSR International Series on Systems Science and Engineering, 1 (New York: Springer, 2012), 7.

6 ‘Intelligence’, in Merriam-Webster.Com Dictionary (Merriam-Webster), accessed 5 March 2024, https://www.merriam-webster.com/dictionary/intelligence.

7 ‘Artifact’, in Merriam-Webster.Com Dictionary (Merriam-Webster), accessed 17 October 2023, https://www.merriam-webster.com/dictionary/artifact.

8 Douglas Harper, ‘Etymology of Artificial’, in Online Etymology Dictionary, accessed 14 October 2023, https://www.etymonline.com/word/artificial; ‘Artifact’.

9 Lynne Rudder Baker, ‘The Ontology of Artifacts’, Philosophical Explorations 7, no. 2 (1 June 2004): 99, https://doi.org/10.1080/13869790410001694462.

10 Beth Preston, ‘Artifact’, in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta and Uri Nodelman, Winter 2022 (Metaphysics Research Lab, Stanford University, 2022), https://plato.stanford.edu/archives/win2022/entries/artifact/.

11 James L. Gould, ‘Animal Artifacts’, in Creations of the Mind: Theories of Artifacts and Their Representation, ed. Eric Margolis and Stephen Laurence (Oxford, UK: Oxford University Press, 2007), 249.

12 Gould, 262.

13 Steven Mithen, ‘Creations of Pre-Modern Human Minds: Stone Tool Manufacture and Use by Homo Habilis, Heidelbergensis, and Neanderthalensis’, in Creations of the Mind: Theories of Artifacts and Their Representation, ed. Eric Margolis and Stephen Laurence (Oxford, UK: Oxford University Press, 2007), 298.

14 Mithen, 299.

15 Mithen, 300–301.

16 Pentti O Haikonen, Robot Brains: Circuits and Systems for Conscious Machines (John Wiley & Sons, 2007), 19.

17 Pentti O Haikonen, Consciousness and Robot Sentience, 2nd ed., vol. 04, Series on Machine Consciousness (WORLD SCIENTIFIC, 2019), 167, https://doi.org/10.1142/11404.

18 ‘Pentti Haikonen’, YouTube, accessed 6 March 2024, https://www.youtube.com/@PenHaiko.

19 Haikonen, Consciousness and Robot Sentience, 182.

20 Haikonen, 179.

21 Robot Self-Consciousness. XCR-1 Passes the Mirror Test, 2020, https://www.youtube.com/watch?v=WE9QsQqsAdo.

22 John McCarthy and Edward A. Feigenbaum, ‘In Memoriam: Arthur Samuel: Pioneer in Machine Learning’, AI Magazine 11, no. 3 (15 September 1990): 10, https://doi.org/10.1609/aimag.v11i3.840.

Subjects as Embodied Minds

Last year I wrote a paper on robot consciousness to submit to a conference, only to realize that there is a better approach to establishing this argument than the one I took. In Sartrean Phenomenology for Humanoid Robots, I attempted to draw a connection between Sartre’s description of self-awareness and its application to robotics. While at the time I was more interested in this higher-order understanding of the self, it might be a better idea to start with an argument for phenomenal consciousness. I realized that, technically, iCub already has phenomenal consciousness and its own type of qualia, a notion I should develop more before moving on to discuss how we can create intelligent, self-aware robots.

What I originally wanted to convey was how lower levels of consciousness act as a foundation from which higher-order consciousness emerges as the agent grows up in the world, where access consciousness is the result of childhood development. Because this paper is a bit unfocused, I only really talked about this idea in one paragraph when it should be its own paper:

“Sartre’s discussion of the body as being-for-itself is also consistent with the scientific literature on perception and action, and has inspired others to investigate enactivism and embodied cognition in greater detail (Thompson 408; Wider 385; Wilson and Foglia; Zilio 80). This broad philosophical perspective suggests cognition is dependent on features of the agent’s physical body, playing a role in the processing performed by the brain (Wilson and Foglia). Since our awareness tends to surpass our perceptual contents toward acting in response to them (Zilio 80), the body becomes our centre of reference from which the world is experienced (Zilio 79). When Sartre talks about the pen or hammer as an extension of his body, his perspective reflects the way our faculties are able to focus on other aspects of the environment or ourselves as we engage with tools for some purpose. I’d like to suggest that this ability to look past the immediate self can be achieved because we, as subjects, have matured through the sensorimotor stage and have learned to control and coordinate aspects of our bodies. The skills we develop as a result of this sensorimotor learning enables the brain to redirect cognitive resources away from controlling the body to focus primarily on performing mental operations. When we write with a pen, we don’t often think about how to shape each letter or spell each word because we learned how to do this when we were children, allowing us to focus on what we want to say rather than how to communicate it using our body. Thus, the significance of the body for perception and action is further reinforced by evidence from developmental approaches emerging from Piaget’s foundational research.”

Applying this developmental process to iCub isn’t really the exciting idea here, and although robot self-consciousness is cool and all, it’s a bit more unsettling, to me at least, to think about the fact that existing robots of this type technically already feel. They just lack the awareness to know that they are feeling; yet in order to recognize a cup, there is something it is like to see that cup. Do robots think? Not yet, but just as dogs have qualia, so do iCub and Haikonen’s XCR-1 (Law et al. 273; Haikonen 232–33). What are we to make of this?

Image by Vincenzo Fiore (cropped)

Works Cited

Haikonen, Pentti O. ‘Qualia and Conscious Machines’. International Journal of Machine Consciousness, World Scientific Publishing Company, Apr. 2012, https://doi.org/10.1142/S1793843009000207.

Law, James, et al. ‘Infants and ICubs: Applying Developmental Psychology to Robot Shaping’. Procedia Computer Science, vol. 7, Jan. 2011, pp. 272–74. ScienceDirect, https://doi.org/10.1016/j.procs.2011.09.034.

Thompson, Evan. ‘Sensorimotor Subjectivity and the Enactive Approach to Experience’. Phenomenology and the Cognitive Sciences, vol. 4, no. 4, Dec. 2005, pp. 407–27. Springer Link, https://doi.org/10.1007/s11097-005-9003-x.

Wider, Kathleen. ‘Sartre, Enactivism, and the Bodily Nature of Pre-Reflective Consciousness’. Pre-Reflective Consciousness, Routledge, 2015.

Wilson, Robert A., and Lucia Foglia. ‘Embodied Cognition’. The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Spring 2017, Metaphysics Research Lab, Stanford University, 2017. Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/archives/spr2017/entries/embodied-cognition/.

Zilio, Federico. ‘The Body Surpassed Towards the World and Perception Surpassed Towards Action: A Comparison Between Enactivism and Sartre’s Phenomenology’. Journal of French and Francophone Philosophy, vol. 28, no. 1, 2020, pp. 73–99. PhilPapers, https://doi.org/10.5195/jffp.2020.927.

Artificial Consciousness

With fewer courses this term, I’ve had a lot more time to work on the topic I’d like to pursue for my doctoral research, and as a result, I have found the authors I need to start writing papers. This is very exciting because the existing literature suggests we have a decent answer to Chalmers’s Hard Problem and, from a nonreductive functionalist perspective, can fill in the metaphysical picture required for producing an account of phenomenal experiences (Feinberg and Mallatt; Solms; Tsou). This means we are justified in considering artificial consciousness as a serious possibility, enabling us to start discussions on what we should be doing about it. I’m currently working on papers that address the hard problem and qualia, arguing that information is the puzzle piece we are looking for.

Individuals have suggested that consciousness is virtual, similar to the way computer software runs on hardware (Bruiger; Haikonen; Lehar; Orpwood). Using this idea, we can posit that social robots can become conscious like humans, as the functional architectures of both rely on incoming information to construct an understanding of things, people, and themselves. My research contributes to this perspective by stressing the significance of social interactions for developing conscious machines. Much of the engineering and philosophical literature focuses on internal architectures for cognition, but what seems to be missing is just how crucial other people are for the development of conscious minds. Preprocessed information in the form of knowledge is crucial for creating minds, as seen in the developmental psychology literature. Children are taught labels for the things they interact with, and by linguistically engaging with others about the world, they become able to express themselves as subjects with needs and desires. Therefore, meaning is generated for individuals by learning from others, contributing to the formation of conscious subjects.

Moreover, if we can discuss concepts from phenomenology in terms of the interplay of physiological functioning and information-processing, it seems reasonable to suggest that we have resolved the problems plaguing consciousness studies. Acting as an interface between first-person perspectives and a third-person perspective, information accounts for the contents, origins, and attributes of various conscious states. Though an exact mapping between disciplines may not be possible, some general ideas or common notions might be sufficiently explained by drawing connections between the two perspectives.

Works Cited

Bruiger, Dan. How the Brain Makes Up the Mind: A Heuristic Approach to the Hard Problem of Consciousness. June 2018.

Chalmers, David. ‘Facing Up to the Problem of Consciousness’. Journal of Consciousness Studies, vol. 2, no. 3, Mar. 1995, pp. 200–19. ResearchGate, doi:10.1093/acprof:oso/9780195311105.003.0001.

Feinberg, Todd E., and Jon Mallatt. ‘Phenomenal Consciousness and Emergence: Eliminating the Explanatory Gap’. Frontiers in Psychology, vol. 11, Frontiers, 2020. Frontiers, doi:10.3389/fpsyg.2020.01041.

Haikonen, Pentti O. Consciousness and Robot Sentience. 2nd ed., vol. 04, WORLD SCIENTIFIC, 2019. DOI.org (Crossref), doi:10.1142/11404.

Lehar, Steven. The World in Your Head: A Gestalt View of the Mechanism of Conscious Experience. Lawrence Erlbaum, 2003.

Orpwood, Roger. ‘Information and the Origin of Qualia’. Frontiers in Systems Neuroscience, vol. 11, Frontiers, 2017, p. 22.

Solms, Mark. ‘A Neuropsychoanalytical Approach to the Hard Problem of Consciousness’. Journal of Integrative Neuroscience, vol. 13, no. 02, Imperial College Press, June 2014, pp. 173–85. worldscientific.com (Atypon), doi:10.1142/S0219635214400032.

Tsou, Jonathan Y. ‘Origins of the Qualitative Aspects of Consciousness: Evolutionary Answers to Chalmers’ Hard Problem’. Origins of Mind, edited by Liz Swan, Springer Netherlands, 2013, pp. 259–69. Springer Link, doi:10.1007/978-94-007-5419-5_13.

Horty’s Defaults with Priorities for Artificial Moral Agents

Can we build robots that act morally? Wallach and Allen’s book Moral Machines investigates a variety of approaches for creating artificial moral agents (AMAs) capable of making appropriate ethical decisions, one of which I find somewhat interesting. On page 34, they briefly mention “deontic logic”, a version of modal logic that uses the concepts of obligation and permission rather than necessity and possibility. This formal system is able to derive conclusions about how one ought to act given certain conditions; for example, if one is permitted to perform some act α, then it follows that one is under no obligation to not do, or to avoid doing, that act. Problems arise, however, when agents are faced with conflicting obligations (McNamara). For example, if Bill sets an appointment for noon, he is obligated to arrive at the appropriate time; if Bill’s child were to suddenly have a medical emergency ten minutes prior, in that moment he would face conflicting obligations. Though the right course of action may be fairly obvious in this case, the problem itself still requires some consideration before a decision can be reached. One way to approach this dilemma is to create a framework capable of overriding specific commitments when warranted by the situation. As such, John Horty’s Defaults with Priorities may be useful for developing AMAs, as it enables an agent to adjust its behaviour based on contextual information.
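For reference, the permission and obligation duality this example relies on, and the way a conflict of obligations becomes a formal contradiction, can be written out in standard deontic logic. This is a generic SDL sketch of my own, not a formula taken from Wallach and Allen or McNamara:

```latex
\[
  P\alpha \;\equiv\; \lnot O\lnot\alpha
  \qquad \text{(permission to do $\alpha$ rules out an obligation to refrain from $\alpha$)}
\]
\[
  OA,\quad OB,\quad \vdash \lnot(A \land B)
  \;\Longrightarrow\; O(A \land B) \text{ (aggregation) and } O\lnot(A \land B) \text{ (necessitation)},
\]
\[
  \text{which together contradict the D axiom } O\varphi \rightarrow \lnot O\lnot\varphi .
\]
```

This is precisely why a framework that can override or prioritize commitments is attractive: it blocks the step from two individually reasonable obligations to an outright contradiction.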

Roughly speaking, a default rule can be treated like a logical implication, where some antecedent A leads to some consequent B if A obtains. A fairly straightforward example of a default rule might state that if a robot detects an obstacle, it must retreat and reorient itself in an effort to avoid the obstruction. There may be cases, however, where this action is not ideal, suggesting the robot needs a way to dynamically switch behaviours based on the type of object it runs into. Horty’s approach suggests that by adding a defeating set with conflicting rules, the default implication can be essentially cancelled out and new conclusions can be derived about a scenario S (373). Unfortunately, the example Horty uses to demonstrate this move stipulates that Tweety bird is a penguin, and it seems the reason for this is merely to show how adding rules leads to the nullification of the default implication. I will attempt to capture the essence of Horty’s awkward example by replacing ‘Tweety’ with ‘Pingu’, as it saves the reader cognitive energy. Let’s suppose, then, that we can program a robot to conclude that, by default, birds fly (B→F). If the robot also knew that penguins are birds which do not fly (P→B ∧ P→¬F), it would be able to determine that Pingu is a bird that does not fly based on the defeating set. According to Horty, this process can be viewed as similar to acts of justification, where individuals provide reasons for their beliefs, an idea I thought would be pertinent for AMAs. Operational constraints aside, systems could provide log files detailing the rules and information used when making decisions about some situation S. Moreover, rather than hard-coding rules and information, machine learning may be able to provide the algorithm with the inferences it needs to respond appropriately to environmental stimuli. Using simpler examples than categories of birds and their attributes, it seems feasible that we could test this approach to determine whether it may be useful for building AMAs one day.
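To make the mechanics concrete, here is a minimal Python sketch of the Pingu example as described above: the default rule ‘birds fly’ applies only if the defeating set (penguins are birds; penguins do not fly) has not already settled the matter. The function and fact names are illustrative assumptions, and this is a toy, not a faithful implementation of Horty’s formalism:

```python
# Toy sketch of a prioritized default: "birds fly" (B -> F) holds unless it is
# defeated by more specific rules ("penguins are birds" and "penguins do not
# fly"). Illustrative only; not Horty's formal system.

def concludes_flies(facts):
    """Return True, False, or None depending on what the rules support."""
    conclusions = {}

    # Defeating set: P -> B and P -> not-F.
    if "penguin" in facts:
        facts = facts | {"bird"}
        conclusions["flies"] = False

    # Default rule B -> F, applied only if it has not been defeated.
    if "bird" in facts and "flies" not in conclusions:
        conclusions["flies"] = True

    return conclusions.get("flies")

print(concludes_flies({"penguin"}))   # False: Pingu is a bird that does not fly
print(concludes_flies({"bird"}))      # True: the default applies
print(concludes_flies(set()))         # None: nothing to conclude
```

Logging which rules fired and which were defeated on each run would give exactly the kind of justification trail mentioned above.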

Now, I have a feeling that this approach is neither special nor unique within logic and computer science more generally, but for some reason the thought of framing robot actions from the perspective of deontic logic seems like it might be useful somehow. Maybe it’s due to the way deontic terminology is applied to modal logic, acting like an interface between moral theory and computer code. I just found the connection to be thought-provoking, and after reading Horty’s paper, began wondering whether approaches like these may be useful for developing systems that are capable of justifying their actions by listing the reasons used within the decision-making process.

Works Cited

Horty, John. “Defaults with priorities.” Journal of Philosophical Logic 36.4 (2007): 367-413.

McNamara, Paul, “Deontic Logic”, The Stanford Encyclopedia of Philosophy (Summer 2019 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2019/entries/logic-deontic/>.

AI and the Responsibility Gap

This week we are talking about the responsibility gap that arises from deep learning systems. We read Mind the gap: responsible robotics and the problem of responsibility by David Gunkel along with Andreas Matthias’ article The responsibility gap: Ascribing responsibility for the actions of learning automata.

It seems the mixture of excitement and fear surrounding the rise of autonomous agents may be the result of challenges to our intuitions about the distinction between objects and subjects. This new philosophical terrain can be analyzed at a theoretical level, involving ontological and epistemological questions, but it can also be examined through a practical lens. Considering there may be a substantial amount of debate on the ontological status of various robots and AIs, it might be helpful to treat issues of morality and responsibility as separate from the theoretical questions, at least for now. The reason for this separation is to remain focused on protecting users and consumers as new applications of deep learning continue to modify our ontological foundations and daily life. Although legislative details will depend to some degree on the answers to theoretical questions, there may be existing approaches to determining responsibility that can be altered and adopted. Just as research and development firms are responsible for the outcomes of their products and testing procedures (Gunkel 12), AI companies too will likely shoulder the responsibility for unintended and unpredictable side-effects of their endeavours. Accurately determining the responsible individual(s) or components will be less straightforward than it has been historically, but this is due to the complexity of the tools we are currently developing. We are no longer mere labourers using tools for improved efficiency (Gunkel 2); humans are generating technologies which are on the verge of possessing capacities for subjectivity. Even today, the relationship between a deep convolutional neural network (DCNN) and its creators seems to have more in common with a child-parent relationship than an object-subject relationship. This implies companies are responsible for their products even when those products misbehave, as the debacle surrounding Tay.ai demonstrates (Gunkel 5). It won’t be long, however, before we outgrow these concepts and our laws and regulations are challenged yet again. In spite of this, it is not in our best interest to wait until the theoretical questions are answered before drafting policies aimed at protecting the public.

Works Cited

Gunkel, David J. “Mind the gap: responsible robotics and the problem of responsibility.” Ethics and Information Technology (2017): 1-14.

Is Opacity a Fundamental Property of Complex Systems?

While the operational opacity generated by machine learning algorithms presents a wide range of problems for ethics and computer science (Burrell 10), one type in particular may be unavoidable due to the nature of complex processes. The physical underpinnings of functional systems may be difficult to understand because of the way data is stored and transmitted. Just as patterns of neural activity seem conceptually distant from first-person accounts of subjective experience, the missing explanation for why or how a deep convolutional neural network (DCNN) arrives at a particular decision may actually be a feature of the system rather than a bug. Systems capable of storing or processing large amounts of data may only be capable of doing so because of the way nested relationships are embedded in their structure. Furthermore, many of the human behaviours or capacities researchers are trying to understand and copy are both complex and emergent, making them difficult to trace back to the physical level of implementation; when we do trace them, the result often looks strange and quite chaotic. For example, molecular genetics suggests various combinations of nucleotides give rise to different types of cells and proteins, each with highly specialized and synergistic functions. Additionally, complex phenotypes like disease dispositions are typically the result of many interacting genotypic factors in conjunction with the presence of certain environmental variables. If it turns out that a degree of opacity is a necessary component of complex functionality, we may need to rethink our expectations of how ethics can inform the future of AI development.

Works Cited

Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016).

Programming Emotions

Last summer, I was introduced to the world of hobby robotics and began building an obstacle-avoidance bot as a way to learn the basics. Once classes started last September, all projects were set aside until I graduated, allowing me to focus on school. Now that I have free time, I’ve been thinking about what kind of robot to build next. It will probably still have wheels and an ultrasonic sensor, but I want it to behave based on its internal environment as well as its external environment. Not only will it detect objects in its path, but it will also move about based on its mood or current emotional state. For example, if it were afraid of loud noises, it would go “hide” against a nearby object. This specific functionality would require the robot to have a microphone to detect sounds, which is something I have been thinking of adding. Otherwise, the only input the robot has is object detection, and producing or calculating emotions based on the frequency of things in its path is kind of boring. I have also been interested in operationalizing, codifying, and programming emotions for quite a while now, and this project would be a great place to start.

One helpful theory I came across is the Three-Factor Theory (3FT) developed by Mehrabian and Russell in 1974 (Russell and Mehrabian 274). It describes emotions as ranging through a three-dimensional space consisting of values for pleasure, arousal, and dominance. For example, a state of anger is associated with -.68 for pleasure, +.22 for arousal, and +.10 for dominance (Russell and Mehrabian 277). After mulling over these averages for a moment, I feel they are fairly reflective of general human nature, but let’s not forget that these values also depend on personality and contextual factors. However, the notion of ‘dominance’ doesn’t feel quite right, and I wonder if a better paradigm could take its place. To me, the idea of being dominant or submissive is quite similar to the approach/avoidance dichotomy used in areas of biology and psychology. ‘Dominance’ is inherently tied to social situations, and a broader theory of emotion must account for non-social circumstances as well. The compelling argument from the approach/avoidance model centers on hedonism, motivation, and goal acquisition; if a stimulus is pleasurable or beneficial, individuals are motivated to seek it out, while undesirable or dangerous stimuli are avoided in order to protect oneself (Elliot 171). Furthermore, this also works well with the Appraisal Theory of emotion, which argues that affective states indicate an individual’s needs or goals (Scherer 638). Therefore, I will be using a value range based on approach/avoidance rather than dominance. While human emotions tend to involve much more than a simple judgement about a situation, the Appraisal Theory should suffice for a basic robot. One last modification I would like to make in my version of the 3FT is changing ‘pleasure’ to ‘valence’. This is merely to reflect the style of language used in the current psychological literature, where positive values are associated with pleasure and negative values with displeasure. I also like this because robots don’t feel pleasure (yet?) but they are capable of responding to “good” and “bad” types of stimuli. ‘Arousal’ is perfectly fine as it is, as it reflects how energetic or excited the individual is. For example, being startled results in high arousal due to the relationship between the amygdala, hypothalamus, and other local and distal regions of the body, which typically prepare the individual to run or fight (Pinel 453-454).

To summarize, the three factors I will be using are valence, arousal, and approach/avoidance. As much as I would love to find a term to replace ‘approach/avoidance’, for the sake of a nice acronym, I have yet to find one which encapsulates the true nature of the phenomenon. Anyway, this modified 3FT seems to be a good start for developing emotional states in a simple robot, especially if it only receives a narrow range of sensory input and does not perform any other sophisticated behaviours. While this robot will possess internal states, it won’t be able to reflect upon them nor have any degree of control over them. Heck, I won’t even be using any type of AI algorithms in this version. So if anyone is spooked by a robot who feels, just know that it won’t be able to take over the world.
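As a starting point, here is a minimal Python sketch of the modified 3FT described above: an emotional state held as a (valence, arousal, approach/avoidance) triple, nudged by events such as a loud noise, with a simple rule mapping the state to a behaviour like hiding. The event names, increments, and thresholds are placeholder assumptions of mine, not values drawn from the literature:

```python
# Minimal sketch of the modified three-factor emotion state: valence, arousal,
# and approach/avoidance, each kept in [-1.0, 1.0]. Event names, increments,
# and thresholds are illustrative assumptions.

def clamp(x, lo=-1.0, hi=1.0):
    return max(lo, min(hi, x))

class EmotionState:
    def __init__(self):
        self.valence = 0.0    # displeasure (-) to pleasure (+)
        self.arousal = 0.0    # calm (-) to excited/startled (+)
        self.approach = 0.0   # avoidance (-) to approach (+)

    def on_loud_noise(self):
        # A startling sound: unpleasant, highly arousing, avoidance-inducing.
        self.valence = clamp(self.valence - 0.4)
        self.arousal = clamp(self.arousal + 0.6)
        self.approach = clamp(self.approach - 0.5)

    def on_clear_path(self):
        # Nothing in the way: mildly pleasant, calming, approach-oriented.
        self.valence = clamp(self.valence + 0.1)
        self.arousal = clamp(self.arousal - 0.2)
        self.approach = clamp(self.approach + 0.2)

    def choose_behaviour(self):
        # "Afraid": negative valence, high arousal, strong avoidance -> hide.
        if self.valence < -0.2 and self.arousal > 0.3 and self.approach < -0.2:
            return "hide_against_nearest_object"
        return "wander"

robot = EmotionState()
robot.on_loud_noise()
print(robot.choose_behaviour())   # hide_against_nearest_object
```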

Works Cited

Elliot, Andrew J. “Approach and avoidance motivation and achievement goals.” Educational psychologist 34.3 (1999): 169-189.

Pinel, John PJ. Biopsychology. Boston, MA: Pearson, 2011.

Russell, James A., and Albert Mehrabian. “Evidence for a three-factor theory of emotions.” Journal of Research in Personality 11.3 (1977): 273-294.

Scherer, Klaus R. “Appraisal theory.” Handbook of cognition and emotion (1999): 637-663.