In a branch of systems engineering called developmental robotics, engineers aim to recreate aspects of human physiology for the purpose of creating social robots. These architectures learn to generate humanlike behaviours by interacting with other people and objects within their environments. As embodied agents, such robots experience emotions based on inherent motivations associated with childhood, such as curiosity and a need for social interaction. With sufficient experience, the robot’s artificial nervous system will generate an internal representation of the world, providing a foundation for subsequent language acquisition. Researchers suggest that further development will eventually enable these robots to become self-aware to some degree, demonstrating an understanding of themselves as social, embodied agents. Given the current trajectory of developmental robotics, we are about to face a new series of moral dilemmas: should technological objects that exhibit behaviours associated with sentience be granted moral rights? I conclude by briefly introducing Peter Singer’s ideas on suffering and morality to outline potential moral situations involving humans and robots. Arguably, since companies and organizations will hold property rights over such robots by law, a robot may be exposed to maltreatment as a result of its social needs. How should legal systems treat human-robot relations, especially when a robot’s expressed desires conflict with organizational interests? This challenging dilemma will benefit from ample discourse and calls for a variety of perspectives from differing roles within human societies. Given the conflicting interests involved, these discussions should begin soon.
♦ ♦ ♦
An ongoing debate within the philosophy of language concerns how meaning is derived from linguistic expressions. Some hold that semantics alone is insufficient for understanding linguistic expressions and that pragmatic features of language must be considered as well. These contextualists grant pragmatics a fundamental role in determining the meanings of sentences and phrases. While semanticists do not deny the significance of pragmatics, they generally aim to defend the role semantics plays within linguistic communication. Semantic minimalism states that some terms will always have semantic content, regardless of the situational features present during communicative acts. In this paper, I attempt to demonstrate, by appealing to infant learning, that radical contextualism cannot be true of language in general. A brief discussion of scientific literature from neuroscience and developmental psychology aims to show that semantic minimalism is necessary for the generation of language and knowledge. I conclude by suggesting that the pragmatic components of language can only be learned and applied once individuals have a semantic foundation from which they can communicate basic ideas. Thus, a degree of semantic minimalism is crucial for language, indicating that the meanings of words within expressions cannot always be context-dependent.
♦ ♦ ♦
In this paper, I offer an alternative approach to GOFAI for producing artificial general intelligence, suggesting that knowledge and concepts must be constructed from the computer’s own point of view rather than through explicitly preprogrammed behaviours. Many branches of science, including neuroscience and biology, can be drawn upon to direct future work on artificial intelligence, especially literature from child psychology. I agree with Hubert Dreyfus that until we can recreate the development of the human mind from a bottom-up perspective, efforts to create artificial general intelligence will be limited.