Papers

Winter 2024
Abstract
Currently, approaches to AI rely on computer code, even in developmental robots like iCub. As a result, iCub implements a model of a physical nervous system that consists of an abstraction of biological processes, which limits the kinds of behaviours it can perform. To demonstrate this, I explore an argument developed by theoretical biologist Robert Rosen which appeals to Kurt Gödel’s Incompleteness Theorem. Rosen describes why meaning and semantics cannot be fully expressed by entailment structures or syntax, as there will always be semantic information that cannot be sufficiently represented by syntactic structures. This is because the biological functions responsible for producing semantic meaning cannot be fully recreated in formal systems like computer code. This places limitations on the kinds of behaviours expressed by various AI systems, including iCub. While some behaviours may be simulable, others cannot be replicated in computerized agents given their inability to interpret the meanings humans use to communicate and socialize. Empathy requires one to adopt the perspective of another, and computerized robots cannot accomplish this because they lack semantic understanding. There is no way for the robot to ascertain what a human could be experiencing because it does not have access to the semantic information used by humans. Although iCub may be able to express a simulacrum of emotions like sadness, the formal models it uses cannot fully represent semantic information. This presentation will explain Rosen’s distinction between natural systems and formal systems, illustrating the physical limitations of computerized robots like iCub and establishing why they are not capable of genuine empathy.


Autumn 2021
Abstract
Daniel Dennett provides many compelling reasons to question the existence of phenomenal experiences in his paper titled Quining Qualia; however, from the perspective of the individual, qualia appear to be an inherent feature of consciousness. The act of reflecting on one’s experiences suggests that subjective feelings and sensations are a necessary element of human life, as personal opinions on various artistic works are apt to demonstrate. This paper argues that by considering subjective experiences from a naturalized functionalist perspective, a comprehensive explanation for qualia can be provided given their origins in evolutionary biology. As information passing through the nervous system, qualia serve to guide the behaviour of individuals and ultimately facilitate survival. Specifically, qualia are representations of environmental features, existing as informational messages supported by neural physiology and encoded in electrochemical formats. In addition to addressing the Hard Problem of Consciousness and clarifying the four properties Dennett associates with qualia, this theoretical foundation enables further metaphysical discussion on the nature of consciousness more generally. Although many outstanding questions on the contents of subjective experiences are apt to linger given the explanatory gap, a robust theory for the existence of qualia can be developed through the integration of ideas and concepts from a variety of domains.


Autumn 2020
Abstract
In a particular branch of systems engineering called developmental robotics, engineers aim to recreate human physiology for the purpose of creating social robots. These architectures learn to generate humanlike behaviours by interacting with people and objects within their environments. As embodied agents, these robots experience emotions based on inherent motivations associated with childhood, such as curiosity and a need for social interaction. With sufficient experience, the robot’s artificial nervous system will generate an internal representation of the world, providing a foundation for subsequent language acquisition. Researchers suggest that further development will eventually enable these robots to become self-aware to some degree, demonstrating an understanding of themselves as social, embodied agents. Given the current trajectory of developmental robotics, we are about to face a new series of moral dilemmas: should technological objects that present behaviours associated with sentience be granted moral rights? I conclude by briefly introducing Peter Singer’s ideas on suffering and morality to outline potential situations involving humans and robots. Arguably, since companies and organizations will hold property rights over these robots by law, a robot may be exposed to maltreatment to some extent as a result of its social needs. How should legal systems consider human-robot relations, especially when expressed robot desires conflict with organizational interests? This challenging dilemma is likely to benefit from ample discourse, while also calling for a variety of perspectives from differing roles within human societies. Since this issue is rather complicated given the conflicting interests involved, discussions should begin soon.


Winter 2020
Abstract
An ongoing debate within the philosophy of language concerns how meaning is derived from linguistic expressions. Some believe semantics alone is insufficient for understanding linguistic expressions and that pragmatic features of language must be considered as well. As contextualists, these individuals hold that pragmatics serves a fundamental role in determining the meanings of sentences and phrases. While semanticists do not deny the significance of pragmatics, they generally aim to defend the role semantics serves within linguistic communication. Semantic minimalism states that some terms will always have semantic content, regardless of the situational features present during communicative acts. In this paper, I attempt to demonstrate how radical contextualism cannot be true of language in general by appealing to infant learning. A brief discussion of scientific literature from neuroscience and developmental psychology aims to demonstrate the necessity of semantic minimalism for the generation of language and knowledge. I conclude by suggesting that pragmatic components of language can only be learned and applied once individuals have a semantic foundation they can use to communicate basic ideas. Thus, a degree of semantic minimalism is crucial for language, indicating that the meanings of words present within expressions cannot always be context-dependent.


Spring 2018
Abstract
In this paper, I offer an alternative approach to GOFAI for producing artificial general intelligence, suggesting that knowledge and concepts must be constructed from the computer’s point of view rather than through explicitly preprogrammed behaviours. Many branches of science, including neuroscience and biology, can be drawn upon to direct future work on artificial intelligence, especially the literature on childhood psychology. I agree with Hubert Dreyfus that until we can recreate the development of the human mind from a bottom-up perspective, efforts to create artificial general intelligence will be limited.