Category: Technology

Algorithmic Transparency and Social Power

This term I’m taking a course called Science and Ethics, and this week we read Langdon Winner’s 1980 article “Do Artifacts Have Politics?” along with a 2016 paper by Brent Daniel Mittelstadt and colleagues titled “The Ethics of Algorithms: Mapping the Debate.” We are encouraged to write weekly responses, and given the concerning nature of what these articles discuss, I thought mine should be posted here as well. There is definitely a lot that could be expanded upon, which I might do at a later time.

Overall, the two articles suggest that the risk of discriminatory outcomes is an aspect of technological advancement, especially when power imbalances are present or inherent. “The Ethics of Algorithms: Mapping the Debate” focuses particularly on algorithmic design and its current lack of transparency (Mittelstadt 6). The authors describe this as an epistemic concern, since even developers are unable to determine how a decision is reached, and that concern in turn leads to normative problems: algorithmic outcomes can generate discriminatory practices that generalize over and mistreat entire groups of people (Mittelstadt 5). Given the elusive epistemic nature of current algorithmic design, individuals throughout an entire organization can truthfully claim ignorance of their own business practices, and some will take advantage of that fact. Today, corporations that manage to integrate their software into the daily lives of many millions of users have little incentive to change, owing to shareholder expectations of financial growth. As long as the system implicitly tells companies they can simply pay a fee to act unethically, in the form of out-of-court legal settlements, this problem is likely to keep manifesting. None of this inspires confidence in the future of AI as we hand over our personal information to companies and governments (Mittelstadt 6).

Langdon Winner’s article on whether artifacts have politics provides a compelling argument for the inherently political nature of our technological objects. Although the paper was published in 1980, its wisdom and relevance apply readily to contemporary contexts. Internet memes even pick up on this parallel; one example poses as a message from Microsoft declaring that those who write open-source software are communists. While roles of leadership are required for many projects and organizations (Winner 130), inherently political technologies have a hierarchy of social functioning as part of their conceptual foundations, according to Winner (133). The point the author stresses concerns technological effects that impede social functioning (Winner 131), a direction we have yet to move away from, considering the events leading up to and following the 2016 American presidential election. If we don’t strive for better epistemic and normative transparency, we risk authoritarian outcomes. As neural networks continue to creep into sectors such as law, healthcare, and education, the protection of individual rights remains at risk.

Works Cited

Mittelstadt, Brent Daniel, et al. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society 3.2 (2016): 1-21.

Winner, Langdon. “Do Artifacts Have Politics?” Daedalus 109.1 (1980): 121-136.

Programming Emotions

Last summer, I was introduced to the world of hobby robotics and began building an obstacle-avoidance bot as a way to learn the basics. Once classes started last September, I set all projects aside until graduation so I could focus on school. Now that I have free time, I’ve been thinking about what kind of robot to build next. It will probably still have wheels and an ultrasonic sensor, but I want it to behave based on its internal environment as well as its external one. Not only will it detect objects in its path, but it will also move about based on its mood or current emotional state. For example, if it were afraid of loud noises, it would go “hide” against a nearby object. This functionality would require the robot to have a microphone for detecting sounds, which is something I have been thinking of adding. Otherwise, the robot’s only input is object detection, and producing or calculating emotions based on the frequency of things in its path is kind of boring. I have also been interested in operationalizing, codifying, and programming emotions for quite a while now, and this project would be a great place to start.
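Before getting into the emotions themselves, the sensing side could stay fairly simple. Here is a minimal Python sketch of the two inputs I have in mind; read_distance_cm() and read_sound_level() are hypothetical placeholders for whatever ultrasonic and microphone modules I end up using, and the thresholds are guesses I would tune on the actual hardware.

```python
import random


def read_distance_cm() -> float:
    """Placeholder for an ultrasonic sensor read (distance to the nearest object)."""
    return random.uniform(2.0, 200.0)


def read_sound_level() -> float:
    """Placeholder for a microphone read, normalized to [0, 1]."""
    return random.random()


# Thresholds are guesses I would tune on the real robot.
OBSTACLE_CM = 15.0
LOUD_LEVEL = 0.8


def sense() -> dict:
    """One pass over the robot's two inputs, reduced to simple events."""
    return {
        "obstacle_ahead": read_distance_cm() < OBSTACLE_CM,
        "loud_noise": read_sound_level() > LOUD_LEVEL,
    }


print(sense())
```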

One helpful theory I came across is the Three-Factor Theory (3FT) developed by Mehrabian and Russell in 1974 (Russell and Mehrabian 274). It describes emotions as ranging over a three-dimensional space consisting of values for pleasure, arousal, and dominance. For example, a state of anger is associated with -.68 for pleasure, +.22 for arousal, and +.10 for dominance (Russell and Mehrabian 277). After mulling over these averages, I feel they are fairly reflective of general human nature, though we shouldn’t forget that the values also depend on personality and contextual factors.

However, the notion of ‘dominance’ doesn’t feel quite right, and I wonder if a better paradigm could take its place. To me, the idea of being dominant or submissive is quite similar to the approach/avoidance dichotomy used in areas of biology and psychology. ‘Dominance’ is inherently tied to social situations, whereas a broader theory of emotion must account for non-social circumstances as well. The compelling argument from the approach/avoidance model centres on hedonism, motivation, and goal acquisition: if a stimulus is pleasurable or beneficial, individuals are motivated to seek it out, while undesirable or dangerous stimuli are avoided in order to protect oneself (Elliot 171). This also fits well with the Appraisal Theory of emotion, which argues that affective states indicate an individual’s needs or goals (Scherer 638). Therefore, I will be using a value range based on approach/avoidance rather than dominance. While human emotions tend to involve much more than a simple judgement about a situation, the Appraisal Theory should suffice for a basic robot.

One last modification I would like to make to my version of the 3FT is changing ‘pleasure’ to ‘valence’. This is merely to reflect the language used in current psychological literature, where positive values are associated with pleasure and negative values with displeasure. I also like this change because robots don’t feel pleasure (yet?), but they are capable of responding to “good” and “bad” types of stimuli. ‘Arousal’ is fine as it is, since it reflects how energetic or excited the individual is. For example, being startled results in high arousal due to the relationship between the amygdala, hypothalamus, and other local and distal regions in the body, which typically prepare the individual to run or fight (Pinel 453-454).
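To make the original three factors concrete, here is a minimal Python sketch of an emotional state as a point in that space. The anger coordinates are the Russell and Mehrabian averages quoted above; the “calm” prototype and the function names are my own illustrative placeholders, not values from the paper.

```python
import math
from dataclasses import dataclass


@dataclass
class EmotionState:
    """A point in the three-factor space; each axis runs roughly -1 to +1."""
    pleasure: float   # displeasure (-1) to pleasure (+1)
    arousal: float    # calm (-1) to excited (+1)
    dominance: float  # submissive (-1) to dominant (+1)


# Prototype emotions. "anger" uses the Russell and Mehrabian averages cited
# above; "calm" is a made-up placeholder purely for illustration.
PROTOTYPES = {
    "anger": EmotionState(-0.68, 0.22, 0.10),
    "calm": EmotionState(0.40, -0.40, 0.20),  # illustrative only
}


def nearest_emotion(state: EmotionState) -> str:
    """Label a state by its closest prototype (Euclidean distance)."""
    def dist(a: EmotionState, b: EmotionState) -> float:
        return math.sqrt(
            (a.pleasure - b.pleasure) ** 2
            + (a.arousal - b.arousal) ** 2
            + (a.dominance - b.dominance) ** 2
        )
    return min(PROTOTYPES, key=lambda name: dist(state, PROTOTYPES[name]))


print(nearest_emotion(EmotionState(-0.5, 0.3, 0.0)))  # -> anger
```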

To summarize, the three factors I will be using are valence, arousal, and approach/avoidance. As much as I would love a single term to replace ‘approach/avoidance’ for the sake of a nicer acronym, I have yet to find one that encapsulates the true nature of the phenomenon. Anyway, this modified 3FT seems like a good starting point for developing emotional states in a simple robot, especially one that receives only a narrow range of sensory input and performs no other sophisticated behaviours. While this robot will possess internal states, it won’t be able to reflect upon them, nor will it have any degree of control over them. Heck, I won’t even be using any AI algorithms in this version. So if anyone is spooked by a robot that feels, just know it won’t be able to take over the world.
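As a rough sketch of how this modified 3FT might drive behaviour, the Python below nudges a (valence, arousal, approach) state in response to appraised stimuli, lets it decay back toward neutral, and maps the result onto an action. The appraisal numbers, decay rate, and thresholds are all assumptions I made up for illustration, not anything from the cited literature.

```python
from dataclasses import dataclass


def clamp(x: float) -> float:
    """Keep each axis within [-1, 1]."""
    return max(-1.0, min(1.0, x))


@dataclass
class MoodState:
    """Modified three-factor state: valence, arousal, approach/avoidance."""
    valence: float = 0.0   # "bad" (-1) to "good" (+1)
    arousal: float = 0.0   # sluggish (-1) to energized (+1)
    approach: float = 0.0  # avoid (-1) to approach (+1)

    def appraise(self, d_val: float, d_aro: float, d_app: float) -> None:
        """Nudge the state in response to a stimulus appraisal."""
        self.valence = clamp(self.valence + d_val)
        self.arousal = clamp(self.arousal + d_aro)
        self.approach = clamp(self.approach + d_app)

    def decay(self, rate: float = 0.05) -> None:
        """Drift back toward neutral each tick so moods fade over time."""
        self.valence *= 1.0 - rate
        self.arousal *= 1.0 - rate
        self.approach *= 1.0 - rate


# Illustrative appraisals: (delta valence, delta arousal, delta approach).
APPRAISALS = {
    "loud_noise": (-0.4, 0.6, -0.5),  # startling and unpleasant -> avoid
    "open_space": (0.2, 0.1, 0.3),    # mildly pleasant -> worth approaching
}


def choose_action(mood: MoodState) -> str:
    """Map the current state onto one of the robot's simple behaviours."""
    if mood.arousal > 0.5 and mood.valence < 0.0 and mood.approach < 0.0:
        return "hide"     # "afraid": back up against the nearest object
    if mood.approach > 0.2:
        return "explore"
    return "wander"


mood = MoodState()
mood.appraise(*APPRAISALS["loud_noise"])  # a sudden bang
print(choose_action(mood))                # -> hide
```

In this toy version a single loud noise is enough to push the state into the “afraid” region; on the real robot the deltas would presumably be much smaller and accumulate across sensor ticks, with decay pulling the mood back to neutral when nothing is happening.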

Works Cited

Elliot, Andrew J. “Approach and Avoidance Motivation and Achievement Goals.” Educational Psychologist 34.3 (1999): 169-189.

Pinel, John P. J. Biopsychology. Boston, MA: Pearson, 2011.

Russell, James A., and Albert Mehrabian. “Evidence for a Three-Factor Theory of Emotions.” Journal of Research in Personality 11.3 (1977): 273-294.

Scherer, Klaus R. “Appraisal Theory.” Handbook of Cognition and Emotion (1999): 637-663.