Category: Development

AI and the Responsibility Gap

This week we are talking about the responsibility gap that arises from deep learning systems. We read “Mind the gap: responsible robotics and the problem of responsibility” by David Gunkel, along with Andreas Matthias’ article “The responsibility gap: Ascribing responsibility for the actions of learning automata.”

It seems the mixture of excitement and fear surrounding the rise of autonomous agents stems from challenges to our intuitions about the distinction between objects and subjects. This new philosophical territory can be analyzed at a theoretical level, involving ontological and epistemological questions, but it can also be examined through a practical lens. Given the substantial debate over the ontological status of various robots and AIs, it may be helpful to treat questions of morality and responsibility as separate from the theoretical ones, at least for now. The reason for this separation is to stay focused on protecting users and consumers as new applications of deep learning continue to reshape our ontological foundations and daily life.

Although legislative details will depend on the answers to theoretical questions to some degree, there may be existing approaches to determining responsibility that can be adapted. Just as research and development firms are responsible for the outcomes of their products and testing procedures (Gunkel 12), AI companies will likely shoulder the responsibility for unintended and unpredictable side-effects of their endeavours. The degree to which an organization can accurately identify the responsible individuals or components will be less straightforward than it has been historically, but that is a consequence of the complexity of the tools we are now developing. We are no longer mere labourers using tools for improved efficiency (Gunkel 2); humans are generating technologies on the verge of possessing capacities for subjectivity. Even today, the relationship between a deep convolutional neural network (DCNN) and its creators seems to have more in common with a child-parent relationship than an object-subject one. This implies that companies are responsible for their products even when those products misbehave, as the debacle surrounding Tay.ai demonstrates (Gunkel 5). It won’t be long, however, before we outgrow these concepts and our laws and regulations are challenged yet again. Even so, it is not in our best interest to wait until the theoretical questions are answered before drafting policies aimed at protecting the public.

Works Cited

Gunkel, David J. “Mind the gap: responsible robotics and the problem of responsibility.” Ethics and Information Technology (2017): 1-14.

Matthias, Andreas. “The responsibility gap: Ascribing responsibility for the actions of learning automata.” Ethics and Information Technology 6.3 (2004): 175-183.

Is Opacity a Fundamental Property of Complex Systems?

While the operational opacity generated by machine learning algorithms presents a wide range of problems for ethics and computer science (Burrell 10), one type in particular may be unavoidable due to the nature of complex processes. The physical underpinnings of functional systems may be difficult to understand because of the way data is stored and transmitted. Just as patterns of neural activity seem conceptually distant from first-person accounts of subjective experience, the missing explanation for why or how a DCNN arrives at a particular decision may actually be a feature of the system rather than a bug. Systems capable of storing or processing large amounts of data may only be able to do so because of the way nested relationships are embedded in their structure. Furthermore, many of the human behaviours and capacities researchers are trying to understand and reproduce are both complex and emergent, making them difficult to trace all the way back to the physical level of implementation. When we do manage to trace them, the picture often looks strange and quite chaotic. For example, molecular genetics suggests that various combinations of nucleotides give rise to different types of cells and proteins, each with highly specialized and synergistic functions. Additionally, complex phenotypes like disease dispositions are typically the result of many interacting genotypic factors in conjunction with certain environmental variables. If it turns out that a degree of opacity is a necessary component of complex functionality, we may need to rethink our expectations of how ethics can inform the future of AI development.

Works Cited

Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016).

Programming Emotions

Last summer, I was introduced to the world of hobby robotics and began building an obstacle-avoidance bot as a way to learn the basics. Once classes started last September, I set all projects aside so I could focus on school, and they stayed shelved until I graduated. Now that I have free time, I’ve been thinking about what kind of robot to build next. It will probably still have wheels and an ultrasonic sensor, but I want it to behave based on its internal environment as well as its external environment. Not only will it detect objects in its path, but it will also move about based on its mood or current emotional state. For example, if it were afraid of loud noises, it would “hide” against a nearby object. This functionality would require the robot to have a microphone to detect sounds, which is something I have been thinking of adding. Otherwise, the only input the robot has is object detection, and producing or calculating emotions based solely on the frequency of objects in its path is kind of boring. I have also been interested in operationalizing, codifying, and programming emotions for quite a while now, and this project would be a great place to start.

One helpful theory I came across is the Three-Factor Theory (3FT) developed by Mehrabian and Russell in 1974 (Russell and Mehrabian 274). It describes emotions as ranging over a three-dimensional space consisting of values for pleasure, arousal, and dominance. For example, a state of anger is associated with -.68 for pleasure, +.22 for arousal, and +.10 for dominance (Russell and Mehrabian 277). After mulling these averages over, I feel they are fairly reflective of general human nature, though it’s worth remembering that they also depend on personality and contextual factors. However, the notion of ‘dominance’ doesn’t feel quite right, and I wonder if a better paradigm could take its place. To me, the idea of being dominant or submissive is quite similar to the approach/avoidance dichotomy used in areas of biology and psychology. ‘Dominance’ is inherently tied to social situations, and a broader theory of emotion must account for non-social circumstances as well. The compelling argument from the approach/avoidance model centers on hedonism, motivation, and goal acquisition: if a stimulus is pleasurable or beneficial, individuals are motivated to seek it out, while undesirable or dangerous stimuli are avoided in order to protect oneself (Elliot 171). This also fits well with the Appraisal Theory of emotion, which argues that affective states indicate an individual’s needs or goals (Scherer 638). Therefore, I will be using a value range based on approach/avoidance rather than dominance. While human emotions tend to involve much more than a simple judgement about a situation, the Appraisal Theory should suffice for a basic robot.

One last modification I would like to make in my version of the 3FT is changing ‘pleasure’ to ‘valence’. This is merely to reflect the style of language used in current psychological literature, where positive values are associated with pleasure and negative values with displeasure. I also like this change because robots don’t feel pleasure (yet?), but they are capable of responding to “good” and “bad” types of stimuli. ‘Arousal’ is perfectly fine as it is, since it reflects how energetic or excited the individual is. For example, being startled results in high arousal due to the relationship between the amygdala, hypothalamus, and other local and distal regions in the body, which typically prepare the individual to run or fight (Pinel 453-454).
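To make the idea of a three-dimensional emotion space more concrete, here is a minimal Python sketch that treats each emotion as a point with pleasure, arousal, and dominance coordinates and labels an arbitrary state by its nearest reference emotion. Only the anger coordinates come from the article cited above; the other reference points, and the names PAD, EMOTIONS, and nearest_emotion, are illustrative placeholders of my own, not values or terminology from the paper.

```python
# A minimal sketch of the 3FT idea: emotions as points in a 3D space.
# Only the anger coordinates come from the cited article; the rest are
# placeholders for illustration.
from dataclasses import dataclass
import math

@dataclass
class PAD:
    pleasure: float   # -1.0 (displeasure) .. +1.0 (pleasure)
    arousal: float    # -1.0 (calm)        .. +1.0 (excited)
    dominance: float  # -1.0 (submissive)  .. +1.0 (dominant)

# Reference points in PAD space.
EMOTIONS = {
    "anger":   PAD(-0.68, 0.22, 0.10),   # values cited in the post
    "content": PAD(0.60, -0.30, 0.20),   # placeholder
    "fear":    PAD(-0.60, 0.60, -0.40),  # placeholder
}

def nearest_emotion(state: PAD) -> str:
    """Label an arbitrary PAD state with the closest reference emotion."""
    def dist(a: PAD, b: PAD) -> float:
        return math.dist(
            (a.pleasure, a.arousal, a.dominance),
            (b.pleasure, b.arousal, b.dominance),
        )
    return min(EMOTIONS, key=lambda name: dist(state, EMOTIONS[name]))

print(nearest_emotion(PAD(-0.5, 0.3, 0.0)))  # -> "anger"
```

Nothing here is meant to be psychologically accurate; it just shows that once emotions are coordinates, “which emotion is the robot in?” reduces to a distance calculation.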

To summarize, the three factors I will be using are valence, arousal, and approach/avoidance. As much as I would love to find a single term to replace ‘approach/avoidance’ for the sake of a nicer acronym, I have yet to find one that encapsulates the true nature of the phenomenon. Anyway, this modified 3FT seems like a good start for developing emotional states in a simple robot, especially one that receives only a narrow range of sensory input and performs no other sophisticated behaviours. While this robot will possess internal states, it won’t be able to reflect upon them, nor will it have any degree of control over them. Heck, I won’t even be using any type of AI algorithm in this version. So if anyone is spooked by a robot who feels, just know that it won’t be able to take over the world.
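As a rough sketch of how the modified factors might drive behaviour, here is some illustrative Python. The event names, weights, thresholds, and functions (update_state, choose_behaviour) are all assumptions of mine for the sake of the example, not a finished design; on the actual robot, something like this would run inside the microcontroller loop.

```python
# A rough sketch of the modified 3FT (valence, arousal, approach/avoidance)
# driving a simple behaviour choice. Event names, weights, and thresholds
# are illustrative assumptions, not a finished design.
from dataclasses import dataclass

@dataclass
class EmotionState:
    valence: float = 0.0   # -1.0 (bad)   .. +1.0 (good)
    arousal: float = 0.0   # -1.0 (calm)  .. +1.0 (excited)
    approach: float = 0.0  # -1.0 (avoid) .. +1.0 (approach)

def clamp(x: float) -> float:
    return max(-1.0, min(1.0, x))

def update_state(state: EmotionState, event: str) -> EmotionState:
    """Nudge the emotional state in response to a sensor event."""
    if event == "loud_noise":           # startling: bad, exciting, avoid
        deltas = (-0.4, 0.6, -0.5)
    elif event == "obstacle_detected":  # mildly bad, mildly arousing
        deltas = (-0.1, 0.2, -0.1)
    else:                               # "clear_path": calm down, explore
        deltas = (0.1, -0.2, 0.2)
    return EmotionState(
        clamp(state.valence + deltas[0]),
        clamp(state.arousal + deltas[1]),
        clamp(state.approach + deltas[2]),
    )

def choose_behaviour(state: EmotionState) -> str:
    """Map the current state onto one of the robot's simple behaviours."""
    if state.arousal > 0.5 and state.approach < -0.3:
        return "hide"      # e.g. back up against the nearest object
    if state.approach > 0.3:
        return "explore"
    return "wander"

state = EmotionState()
for event in ["obstacle_detected", "loud_noise"]:
    state = update_state(state, event)
print(state, choose_behaviour(state))  # high arousal + avoidance -> "hide"
```

The point of the sketch is the structure, not the numbers: sensor events nudge a three-dimensional internal state, and a handful of thresholds on that state decide whether the robot wanders, explores, or hides.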

Works Cited

Elliot, Andrew J. “Approach and avoidance motivation and achievement goals.” Educational Psychologist 34.3 (1999): 169-189.

Pinel, John P. J. Biopsychology. Boston, MA: Pearson, 2011.

Russell, James A., and Albert Mehrabian. “Evidence for a three-factor theory of emotions.” Journal of Research in Personality 11.3 (1977): 273-294.

Scherer, Klaus R. “Appraisal theory.” Handbook of cognition and emotion (1999): 637-663.