Papers

Summer 2021
Abstract
The phenomenological ontology presented by Jean-Paul Sartre in Being and Nothingness has been monumental for the study of human experience, and some of his ideas may be useful for developing a conceptual foundation for robot phenomenology. Engineers interested in creating social, humanoid robots have been modelling machine architectures on childhood development, allowing robots to learn from experience by interacting with elements in their environment. Some believe that, with time, these robots will likely achieve self-awareness and a capacity for self-reflection. How might robot experiences differ from human experiences? While our biological heritage may make it difficult for robots to relate to human feelings, some experiences may be shared between robots and humans. Visual, auditory, and kinaesthetic experiences may share similarities if robot architecture sufficiently models human physiology; however, our understanding will remain inherently limited until robots can tell us what it is like to be a robot.

Summer 2021
Abstract
The paper by Daniel Dennett titled Quining Qualia provides many compelling reasons to believe that traditional notions surrounding qualia or phenomenal experiences are worth reconsidering; however, those who disagree have been turning to other disciplines for new ideas. While qualia may appear confusing and nonsensical at times, by investigating domains which study or make use of phenomenal experiences, such as art and music, a framework for further discussion begins to emerge. This paper argues that by considering subjective experiences from a naturalistic perspective, partial answers or explanations for lingering questions about qualia and consciousness can be identified. Studying the evolutionary origins of the human brain suggests that qualia exist as information representing some aspect of the environment, generated by sensory processing for the purpose of governing behaviour. With this theoretical foundation, it can be suggested that a reduced degree of ineffability is possible for intelligent species like humans, given our ability to use language and other forms of depiction to express subjective experiences. Although many outstanding questions are apt to linger given the explanatory gap, it seems we now have a framework which can begin to clarify the mystery surrounding phenomenal experiences.

Autumn 2020
Abstract
In a particular branch of systems engineering called developmental robotics, engineers aim to recreate human physiology for the purpose of creating social robots. These architectures learn to generate humanlike behaviours by interacting with other people and objects within their environment. As embodied agents, these robots experience emotions based on inherent motivations associated with childhood, such as curiosity and a need for social interaction. With sufficient experience, the robot’s artificial nervous system will generate an internal representation of the world, providing a foundation for subsequent language acquisition. Researchers suggest that further development will eventually enable these robots to become self-aware to some degree, demonstrating an understanding of themselves as social, embodied agents. Given the current trajectory of developmental robotics, we are about to face a new series of moral dilemmas: should technological objects that present behaviours associated with sentience be granted moral rights? I conclude by briefly introducing Peter Singer’s ideas on suffering and morality to outline potential situations involving humans and robots. Arguably, since companies and organizations will possess property rights by law, a robot may be exposed to maltreatment to some extent as a result of its social needs. How should legal systems consider human-robot relations, especially when expressed robot desires conflict with organizational interests? This challenging dilemma is likely to benefit from ample discourse, calling for a variety of perspectives from differing roles within human societies. Since this issue is complicated by conflicting interests, discussions should begin soon.

Winter 2020
Abstract
An ongoing debate within philosophy of language indicates there is disagreement about how meaning is derived from linguistic expressions. Some believe semantics alone are insufficient for understanding linguistic expressions and that pragmatic features of language must be considered as well. As contextualists, these individuals hold that pragmatics serves a fundamental role in determining the meanings of sentences and phrases. While semanticists do not deny the significance of pragmatics, they generally aim to defend the role semantics serves within linguistic communication. Semantic minimalism states that some terms will always have semantic content, regardless of the situational features present during communicative acts. In this paper, I attempt to demonstrate how radical contextualism cannot be true of language in general by appealing to infant learning. A brief discussion of scientific literature from neuroscience and developmental psychology aims to demonstrate the necessity of semantic minimalism for the generation of language and knowledge. I conclude by suggesting that pragmatic components of language can only be learned and applied once individuals have a semantic foundation from which they can communicate basic ideas. Thus, a degree of semantic minimalism is crucial for language, indicating that the meanings of words within expressions cannot always be context-dependent.

Spring 2018
Abstract
In this paper, I offer an alternative approach to GOFAI for producing artificial general intelligence, suggesting knowledge and concepts must be constructed from a computer’s point of view rather than through explicit preprogrammed behaviours. Many branches of science, including neuroscience and biology, can be drawn upon to direct future work on artificial intelligence, with literature from childhood psychology being especially relevant. I agree with Hubert Dreyfus that until we can recreate the development of the human mind from a bottom-up perspective, efforts to create artificial general intelligence will be limited.