Category: Technology

iCub and Qualia?

After a few months of working with Dr. Haikonen on my thesis, I’ve come to realize that a previous post I made about iCub’s phenomenal experiences is incorrect and therefore needs an update. Before I dive into that, however, it’s important for me to state that we ought to be looking at philosophy like programming: bugs are going to arise as people continue to work with new ideas. I love debugging though, so the thought of constantly having to go back to correct myself isn’t all that daunting. It’s about the journey, not the destination, as my partner likes to say.

I stated that “technically, iCub already has phenomenal consciousness and its own type of qualia,” but given what Haikonen states in the latest edition of his book, this is not correct. Qualia consist of sensory information generated by physical neurons interacting with elements of the environment, and because iCub relies on sensors that create digital representations of physical properties, its “experiences” are not truly phenomenal. In biological creatures, sensory information is self-explanatory in that it requires no further interpretation (Haikonen 7); heat generating sensations of pain indicates the presence of a stimulus to be avoided, as demonstrated by unconscious reflexes. Because ‘heat’ requires no further interpretation, its effects on living cells can be mitigated quickly, perhaps avoiding serious damage like a burn altogether. While it might look like iCub feels pain, this is actually a simulation generated by computer code that happens to mimic the actions of animals and humans. Without a human stipulating that heat → flinching, iCub would not respond this way, because its brain controls its body rather than the other way around.

As I stated in the previous post, Sartre outlines how being-for-itself arises from a being-in-itself through recursive analysis, provided the neural hardware can support this cognitive action. Because iCub does not originate as a being-in-itself like living organisms, but as a fancy computer, the ontological foundation for phenomenal experiences or qualia is absent. iCub doesn’t care about anything, even itself, as it has been designed to produce behaviours for some end goal, like stacking boxes or replying to human speech. In biology, the end goal is continued survival and reproduction, and behaviours aim to further this outcome through reflexes and sophisticated cognitive abilities. The brain-body relationship in iCub is backwards: the brain is designed by humans to govern the robot body, rather than the body creating signals that the nervous system uses to protect itself as an autonomous agent. In this way, organisms “care about” what happens to them, unlike iCub; ripping off its arm generates no reaction unless it were programmed that way.

In sum, the signals passed around iCub’s “nervous system” exist as binary representations of real-world properties as conceptualized by human programmers. This degree of abstraction disqualifies these “experiences” from being labelled ‘qualia’, given that they do not adhere to principles identified within biology. An AI can be phenomenally conscious only when it has the means to generate its own internal representations through a transduction process analogous to that seen in biological agents (Haikonen 10–11).

Works Cited

Haikonen, Pentti O. Consciousness and Robot Sentience. 2nd ed., vol. 04, WORLD SCIENTIFIC, 2019. DOI.org (Crossref), https://doi.org/10.1142/11404.

Magic in Culture

Now is a good time to inject a little magic into everyday life by examining and revelling in humanity’s vast history of cultural knowledge and practices. I encourage you to consider your capacity for creativity as a source of magic, where your ability to generate something more from something less is a special kind of wizardry. Moreover, our creations take on a life of their own as others are free to reference and expand upon these contributions. This is especially true today, as the internet allows us to find like-minded individuals and communities which appreciate specific skills and the fruits of their labour.

In fact, it could be argued from an anthropological perspective that the internet is as magical as it gets. Although the term itself is used as a noun, the thing it references is more like a vague verb than a solid concept or object. We talk about a thing we rarely think deeply about, especially given its physical opacity and degree of technicality. Holding a hard drive in your hand does not clarify this ambiguity, as nothing about the material suggests that an entire virtual world exists within. Without a screen and a means to display its contents, the information inside is rendered unknowable to the human mind. The amount of human knowledge, skill, and technological progress required to sustain life today is evidence of our power as creators; what seems to be missing, however, is the sense of awe that ought to accompany the witnessing of supernatural events.

The causal powers of seemingly magical effects, like electricity, can more or less be accounted for by dynamical systems theory, as the interaction of environmental conditions over time is required for the emergence of new properties or products. These emergent products are generated by restructuring lower-level entities or conditions, but they are not reducible to them, nor are they predictable from the lower level (Kim 20–21). Electricity is generated by transforming physical forces and materials into energy, emerging from the interaction of environmental variables like heat and air pressure. Alternatively, consider a simple loaf of bread, created by the interaction of flour, a leavening agent like yeast, time, and heat. The ingredients, like the flour, yeast, sugar, and salt, must be added in a specific order at a specific time for the final product to truly become ‘bread’.

Emergence can also be identified in game theory, as cooperation generates a non-zero-sum outcome where individuals gain more by working together than by working alone (Curry 29). Human economies are founded on this principle of cooperation, as trading goods and services theoretically improves the lives of the individuals working to honour the agreement. From this perspective, it turns out that bronies have identified a fundamental principle of life: friendship is magick because cooperation generates something more from something less. Individuals are free to expand upon or reshape the ideas and contributions of others, and groups of individuals are able to combine their expertise to build something new altogether, like the internet. Not only can we establish conceptual connections between past, present, and future, we can connect with each other to expand our understanding of some portion of human culture.
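The non-zero-sum point above can be made concrete with a toy payoff table. This is only an illustrative sketch; the payoff numbers are hypothetical, chosen so that mutual cooperation yields more total value than working alone, which is all the claim requires.

```python
# Hypothetical payoffs as (player_a, player_b) for each pair of choices.
# "cooperate" stands in for trading goods and services with the other player.
payoffs = {
    ("alone", "alone"): (2, 2),          # each produces only for themselves
    ("alone", "cooperate"): (2, 1),      # a lone cooperator wastes effort
    ("cooperate", "alone"): (1, 2),
    ("cooperate", "cooperate"): (3, 3),  # trade: both gain from specialization
}

def total_welfare(choice_a, choice_b):
    """Sum of both players' payoffs for a pair of choices."""
    a, b = payoffs[(choice_a, choice_b)]
    return a + b

# The game is non-zero-sum: cooperation creates value that isolation does not.
assert total_welfare("cooperate", "cooperate") > total_welfare("alone", "alone")
```

Nothing here is predictable from either player's payoffs in isolation; the surplus only appears in the interaction, which is the sense of emergence being gestured at above.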

Works Cited

Curry, Oliver Scott. ‘Morality as Cooperation: A Problem-Centred Approach’. The Evolution of Morality, Springer, 2016, pp. 27–51.

Kim, Jaegwon. ‘Making Sense of Emergence’. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, vol. 95, no. 1/2, 1999, pp. 3–36.

Implicit Argument for Qualia

Stevan Harnad provides an embodied version of the Turing Test (TT) in Other Bodies, Other Minds by using a robot instead of a computer, calling it the Total Turing Test (TTT). He states that to be truly indistinguishable from a human, artificial minds will require the ability to express embodied behaviours in addition to linguistic capacities (Harnad 44). While the TT implicitly assumes language exists independently from the rest of human behaviour (Harnad 45), the TTT avoids problems arising from this assumption by including a behavioural component in the test (Harnad 46). This matters because of our tendency to infer that other humans have minds despite the fact that individuals have no direct evidence for this belief (Harnad 45). The assumption can be extended to robots as well: embodied artificial agents which act sufficiently human will be treated as if they had minds (Harnad 46). Robots which pass the TTT can be said to understand symbols because those symbols have been grounded in non-symbolic structures, or bottom-up sensory projections (Harnad 50–51). Therefore, embodiment seems to be necessary for social agents, as they will require an understanding of the world and its contents to appear humanlike.

These sensory projections are also known as percepts or qualia (Haikonen 225), and are therefore required for learning language. While Harnad’s intention may have been to avoid discussing the metaphysical properties of the mind for the sake of discussing the TTT, his argument ends up supporting the ontological structures involved in phenomenal consciousness. Although I didn’t mention it above, he uses this argument to refute Searle’s concerns about the Chinese Room, and the reason he succeeds is that he identifies an ontological necessity. Robots which pass the TTT will have their own minds because the behaviours which persuade people to believe this are founded on the same processes that produce this capacity in humans.

Works Cited

Haikonen, Pentti O. ‘Qualia and Conscious Machines’. International Journal of Machine Consciousness, Apr. 2012. https://doi.org/10.1142/S1793843009000207.

Harnad, Stevan. ‘Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem’. Minds and Machines, vol. 1, no. 1, 1991, pp. 43–54.

Subjects as Embodied Minds

Last year I wrote a paper on robot consciousness to submit to a conference, only to realize that there is a better approach to establishing this argument than the one I took. In Sartrean Phenomenology for Humanoid Robots, I attempted to show how Sartre’s description of self-awareness can be applied to robotics, and while at the time I was more interested in this higher-order understanding of the self, it might be a better idea to start with an argument for phenomenal consciousness. I realized that technically, iCub already has phenomenal consciousness and its own type of qualia, a notion I should develop more before moving on to discuss how we can create intelligent, self-aware robots.

What I originally wanted to convey was how lower levels of consciousness act as a foundation from which higher-order consciousness emerges as the agent grows up in the world, where access consciousness is the result of childhood development. Because this paper is a bit unfocused, I only really talked about this idea in one paragraph when it should be its own paper:

“Sartre’s discussion of the body as being-for-itself is also consistent with the scientific literature on perception and action, and has inspired others to investigate enactivism and embodied cognition in greater detail (Thompson 408; Wider 385; Wilson and Foglia; Zilio 80). This broad philosophical perspective suggests cognition is dependent on features of the agent’s physical body, playing a role in the processing performed by the brain (Wilson and Foglia). Since our awareness tends to surpass our perceptual contents toward acting in response to them (Zilio 80), the body becomes our centre of reference from which the world is experienced (Zilio 79). When Sartre talks about the pen or hammer as an extension of his body, his perspective reflects the way our faculties are able to focus on other aspects of the environment or ourselves as we engage with tools for some purpose. I’d like to suggest that this ability to look past the immediate self can be achieved because we, as subjects, have matured through the sensorimotor stage and have learned to control and coordinate aspects of our bodies. The skills we develop as a result of this sensorimotor learning enables the brain to redirect cognitive resources away from controlling the body to focus primarily on performing mental operations. When we write with a pen, we don’t often think about how to shape each letter or spell each word because we learned how to do this when we were children, allowing us to focus on what we want to say rather than how to communicate it using our body. Thus, the significance of the body for perception and action is further reinforced by evidence from developmental approaches emerging from Piaget’s foundational research.”

Applying this developmental process to iCub isn’t really the exciting idea here, and although robot self-consciousness is cool and all, it’s a bit more unsettling, to me at least, to think about the fact that existing robots of this type technically already feel. They just lack the awareness to know that they are feeling; still, in order to recognize a cup, there must be something it is like to see that cup. Do robots think? Not yet, but just as dogs have qualia, so do iCub and Haikonen’s XCR-1 (Law et al. 273; Haikonen 232–33). What are we to make of this?


Works Cited

Haikonen, Pentti O. ‘Qualia and Conscious Machines’. International Journal of Machine Consciousness, World Scientific Publishing Company, Apr. 2012. https://doi.org/10.1142/S1793843009000207.

Law, James, et al. ‘Infants and ICubs: Applying Developmental Psychology to Robot Shaping’. Procedia Computer Science, vol. 7, Jan. 2011, pp. 272–74. ScienceDirect, https://doi.org/10.1016/j.procs.2011.09.034.

Thompson, Evan. ‘Sensorimotor Subjectivity and the Enactive Approach to Experience’. Phenomenology and the Cognitive Sciences, vol. 4, no. 4, Dec. 2005, pp. 407–27. Springer Link, https://doi.org/10.1007/s11097-005-9003-x.

Wider, Kathleen. ‘Sartre, Enactivism, and the Bodily Nature of Pre-Reflective Consciousness’. Pre-Reflective Consciousness, Routledge, 2015.

Wilson, Robert A., and Lucia Foglia. ‘Embodied Cognition’. The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Spring 2017, Metaphysics Research Lab, Stanford University, 2017. Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/archives/spr2017/entries/embodied-cognition/.

Zilio, Federico. ‘The Body Surpassed Towards the World and Perception Surpassed Towards Action: A Comparison Between Enactivism and Sartre’s Phenomenology’. Journal of French and Francophone Philosophy, vol. 28, no. 1, 2020, pp. 73–99. PhilPapers, https://doi.org/10.5195/jffp.2020.927.

Information Warfare

It seems we are in the midst of a new world war, except now it lurks in the forms of soft power, coercion, and psychological manipulation. The Cold War essentially hibernated for a few years until Putin became powerful enough to relaunch it online through Cambridge Analytica and Facebook, targeting major Western powers like the United States and the United Kingdom. We are witnessing the dismantling of NATO as nations erode from the inside through societal infighting. War games are mapped out not on land and sea but in the minds of groups residing within enemy nations (Meerloo 99). By destabilizing social cohesion within a particular country or region, the fighting becomes self-sustaining and obscured.

Information is key for psychological operations. As sensing, living beings, we rely on information to make good decisions, decisions which allow us to achieve our goals and keep living as best we can. Since information has the capacity to control the behaviour of individuals, power can be generated through the production and control of information. Today, a number of key scientific organizations and individuals are drunk with power, as they are in positions to control what should be considered true or false. For the sake of resource management, and likely a dash of plain ol’ human greed, the pragmatic pressures of the world have shaped what was once a methodology into a machine that provides people with purported facts about reality. As a result, we are now battling an epistemic dragon driven to collect more gold to sit on.

This suggests that the things we believe are extremely valuable, both to ourselves and to others around the world. The information and perspective you can provide is valuable either to the society you belong to or to those interested in seeing your society crumble. The adage about ideas “living rent free in your head” seems appropriate because cultural memes are causally effective; they shape the way you think and act, and in doing so introduce a potential for psychological harm. Critical thinking and introspection are important because they counteract the influence of other people: by forcing individuals to dig deeper from their subjective point of view, these processes consolidate and prune one’s beliefs.

Collateral damage has shifted from bodies to minds, and communities will continue to be torn apart until we develop a system for individuals to combat these external influences. Socrates showed us that philosophical inquiry tends to irritate people, and the fact that mere scientific scepticism today is met with ad hominems suggests we are on the right track. Remember, the goal is discourse rather than concrete answers, and an important component involves considering new and conflicting ideas. Be wary of what incentivizes other people, but do not judge them for it. Compassion will be the most challenging part of this entire endeavour, but I believe in you.

Works Cited

Meerloo, Joost A. M. The Rape of the Mind: The Psychology of Thought Control, Menticide, and Brainwashing. The World Publishing Company, 1956.


Artificial Consciousness

With fewer courses this term, I’ve had a lot more time to work on the topic I’d like to pursue for my doctoral research, and as a result, I have found the authors I need to start writing papers. This is very exciting because the existing literature suggests we have a decent answer to Chalmers’ hard problem and, from a nonreductive functionalist perspective, can fill in the metaphysical picture required for producing an account of phenomenal experiences (Feinberg and Mallatt; Solms; Tsou). This means we are justified in considering artificial consciousness as a serious possibility, enabling us to start discussions on what we should be doing about it. I’m currently working on papers that address the hard problem and qualia, arguing that information is the puzzle piece we are looking for.

Individuals have suggested that consciousness is virtual, similar to computer software running on hardware (Bruiger; Haikonen; Lehar; Orpwood). Using this idea, we can posit that social robots can become conscious like humans, as the functional architectures of both rely on incoming information to construct an understanding of things, people, and themselves. My research contributes to this perspective by stressing the significance of social interactions for developing conscious machines. Much of the engineering and philosophical literature focuses on internal architectures for cognition, but what seems to be missing is just how crucial other people are for the development of conscious minds. Preprocessed information in the form of knowledge is crucial for creating minds, as seen in the developmental psychology literature. Children are taught labels for the things they interact with, and by linguistically engaging with others about the world, they become able to express themselves as subjects with needs and desires. Therefore, meaning is generated for individuals by learning from others, contributing to the formation of conscious subjects.

Moreover, if we can discuss concepts from phenomenology in terms of the interplay of physiological functioning and information-processing, it seems reasonable to suggest that we have resolved the problems plaguing consciousness studies. Acting as an interface between first-person perspectives and a third-person perspective, information accounts for the contents, origins, and attributes of various conscious states. Though an exact mapping between disciplines may not be possible, some general ideas or common notions might be sufficiently explained by drawing connections between the two perspectives.

Works Cited

Bruiger, Dan. How the Brain Makes Up the Mind: A Heuristic Approach to the Hard Problem of Consciousness. June 2018.

Chalmers, David. ‘Facing Up to the Problem of Consciousness’. Journal of Consciousness Studies, vol. 2, no. 3, Mar. 1995, pp. 200–19. ResearchGate, doi:10.1093/acprof:oso/9780195311105.003.0001.

Feinberg, Todd E., and Jon Mallatt. ‘Phenomenal Consciousness and Emergence: Eliminating the Explanatory Gap’. Frontiers in Psychology, vol. 11, Frontiers, 2020. Frontiers, doi:10.3389/fpsyg.2020.01041.

Haikonen, Pentti O. Consciousness and Robot Sentience. 2nd ed., vol. 04, WORLD SCIENTIFIC, 2019. DOI.org (Crossref), doi:10.1142/11404.

Lehar, Steven. The World in Your Head: A Gestalt View of the Mechanism of Conscious Experience. Lawrence Erlbaum, 2003.

Orpwood, Roger. ‘Information and the Origin of Qualia’. Frontiers in Systems Neuroscience, vol. 11, Frontiers, 2017, p. 22.

Solms, Mark. ‘A Neuropsychoanalytical Approach to the Hard Problem of Consciousness’. Journal of Integrative Neuroscience, vol. 13, no. 02, Imperial College Press, June 2014, pp. 173–85. worldscientific.com (Atypon), doi:10.1142/S0219635214400032.

Tsou, Jonathan Y. ‘Origins of the Qualitative Aspects of Consciousness: Evolutionary Answers to Chalmers’ Hard Problem’. Origins of Mind, edited by Liz Swan, Springer Netherlands, 2013, pp. 259–69. Springer Link, doi:10.1007/978-94-007-5419-5_13.

Horty’s Defaults with Priorities for Artificial Moral Agents

Can we build robots that act morally? Wallach and Allen’s book Moral Machines investigates a variety of approaches for creating artificial moral agents (AMAs) capable of making appropriate ethical decisions, one of which I find somewhat interesting. On page 34, they briefly mention “deontic logic”, a version of modal logic that uses concepts of obligation and permission rather than necessity and possibility. This formal system is able to derive conclusions about how one ought to act given certain conditions; for example, if one is permitted to perform some act α, then it follows that they are under no obligation to not do, or avoid doing, that act α. Problems arise, however, when agents are faced with conflicting obligations (McNamara). For example, if Bill sets an appointment for noon, he is obligated to arrive at the appropriate time; if Bill’s child were to suddenly have a medical emergency ten minutes prior, he would in that moment face conflicting obligations. Though the right course of action may be fairly obvious in this case, the problem itself still requires some consideration before a decision can be reached. One way to approach this dilemma is to create a framework capable of overriding specific commitments when warranted by the situation. As such, John Horty’s Defaults with Priorities may be useful for developing AMAs, as it enables the agent to adjust its behaviour based on contextual information.
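Bill’s dilemma can be sketched in a few lines of code. This is my own illustrative toy, not anything from Wallach and Allen or McNamara: the obligations and their priority values are hypothetical, and resolution is nothing more than picking the active obligation with the highest priority.

```python
def resolve(obligations):
    """Given (priority, action) pairs for currently active obligations,
    act on the one with the highest priority."""
    return max(obligations, key=lambda ob: ob[0])[1]

# Bill's situation: both obligations are active at the same moment.
# Priority values are made up purely for illustration.
bills_obligations = [
    (1, "attend noon appointment"),
    (10, "take child to hospital"),  # the emergency outranks the appointment
]

assert resolve(bills_obligations) == "take child to hospital"
```

The point of the sketch is only that a conflict between obligations need not paralyze the agent if the framework allows one commitment to override another, which is the role Horty’s priorities play in the discussion that follows.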

Roughly speaking, a default rule can be treated like a logical implication, where some antecedent A leads to some consequent B if A obtains. A straightforward example of a default rule might state that if a robot detects an obstacle, it must retreat and reorient itself to avoid the obstruction. There may be cases, however, where this action is not ideal, suggesting the robot needs a way to dynamically switch behaviours based on the type of object it runs into. Horty’s approach suggests that by adding a defeating set with conflicting rules, the default implication can be essentially cancelled out and new conclusions can be derived about a scenario S (373). Unfortunately, the example Horty uses to demonstrate this move stipulates that Tweety the bird is a penguin, and it seems the reason for this is merely to show how adding rules leads to the nullification of the default implication. I will attempt to capture the essence of Horty’s awkward example by replacing ‘Tweety’ with ‘Pingu’, as it saves the reader cognitive energy. Let’s suppose, then, that we can program a robot to conclude that, by default, birds fly (B→F). If the robot also knew that penguins are birds which do not fly (P→B ˄ P→¬F), it would be able to determine that Pingu is a bird that does not fly based on the defeating set. According to Horty, this process can be considered similarly to acts of justification, where individuals provide reasons for their beliefs, an idea I thought would be pertinent for AMAs. Operational constraints aside, systems could provide log files detailing the rules and information used when making decisions surrounding some situation S. Moreover, rather than hard-coding rules and information, machine learning may be able to provide the algorithm with the inferences it needs to respond appropriately to environmental stimuli.
Using simpler examples than categories of birds and their attributes, it seems feasible that we could test this approach to determine whether it may be useful for building AMAs one day.
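As a first pass at such a test, the Pingu example can be sketched as a toy program. To be clear, this is a hedged illustration of the general idea, not a faithful implementation of Horty’s formal system: a default rule fires unless a defeater whose premise also holds concludes the opposite.

```python
def conclude(facts, defaults, defeaters):
    """Apply default rules (premise, conclusion) unless a defeater
    (premise, conclusion) with a satisfied premise concludes the negation."""
    conclusions = set(facts)
    for premise, conclusion in defaults:
        if premise in conclusions:
            # Check the defeating set for a conflicting rule that applies.
            defeated = any(
                p in conclusions and c == "not " + conclusion
                for p, c in defeaters
            )
            if not defeated:
                conclusions.add(conclusion)
    # Defeaters contribute their own conclusions when their premise holds.
    for premise, conclusion in defeaters:
        if premise in conclusions:
            conclusions.add(conclusion)
    return conclusions

facts = {"penguin"}
defaults = [("penguin", "bird"), ("bird", "flies")]  # P→B, B→F (default)
defeaters = [("penguin", "not flies")]               # P→¬F defeats B→F

result = conclude(facts, defaults, defeaters)
assert "bird" in result and "not flies" in result and "flies" not in result
```

Returning the full conclusion set also hints at the justification idea mentioned above: the same structure that derives “Pingu does not fly” could be dumped to a log file as the reasons behind the decision.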

Now, I have a feeling that this approach is neither special nor unique within logic and computer science more generally, but for some reason the thought of framing robot actions from the perspective of deontic logic seems like it might be useful somehow. Maybe it’s due to the way deontic terminology is applied to modal logic, acting like an interface between moral theory and computer code. I just found the connection to be thought-provoking, and after reading Horty’s paper, began wondering whether approaches like these may be useful for developing systems that are capable of justifying their actions by listing the reasons used within the decision-making process.

Works Cited

Horty, John. “Defaults with priorities.” Journal of Philosophical Logic 36.4 (2007): 367-413.

McNamara, Paul, “Deontic Logic”, The Stanford Encyclopedia of Philosophy (Summer 2019 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2019/entries/logic-deontic/>.

Democratic Privacy Reform

If you aren’t familiar with the issues surrounding personal data collection by corporate tech giants and online privacy, I recommend you flip through Amnesty International’s publication Surveillance Giants: How the Business Model of Google and Facebook Threatens Human Rights. I would also suggest reading A Contextual Approach to Privacy Online by Helen Nissenbaum if you are interested in further discussions on the future of data collection. Actually, even if you are familiar with these issues, read them anyway because they are very interesting and you may learn something new.

Both articles offer interesting suggestions for governments and corporations to ensure online privacy is protected, and it is clear top-down approaches are necessary for upholding human rights. Substantial effort will be required for full corporate compliance, however, as both law and computer systems need updating to better respect user data. While these measures ensure ethical responsibilities are directed to the appropriate parties, a complementary bottom-up approach may be required as well. There is great potential for change if citizens were to engage with this issue and help one another better understand the importance of privacy. A democratic strategy for protecting online human rights is possible, but it seems quite demanding considering this work is ideally performed voluntarily. Additionally, I fear putting this approach into practice is an uphill epistemic battle; many individuals aren’t overly bothered by surveillance. Since the issue is complex and technological, it is difficult to understand, resulting in little concern due to the lack of perceived threat. Thus, there will always be a market for the Internet of Things. Moreover, advertising revenue provides little incentive for corporations to respect user data, unless a vocal group of protesters is able to substantially threaten their public image. Enacting regulatory laws may be effective for addressing human rights issues, but the conflict between governments and companies is likely to continue under the status quo. Consumers who enjoy these platforms and products face a moral dilemma: is this acceptable if society and democracy are negatively impacted? Can ethical considerations regarding economic externalities help answer this question? If not, are there other analogous ethical theories which may be appropriate for questions regarding the responsibilities of citizens?
If activists and ethicists are interested in organizing information and materials for empowering voters and consumers, these challenges will need practical and digestible answers.

Works Cited

Amnesty International. Surveillance Giants: How the Business Model of Google and Facebook Threatens Human Rights. 2019, amnesty.org/en/documents/pol30/1404/2019/en/.

Nissenbaum, Helen. “A contextual approach to privacy online.” Daedalus 140.4 (2011): 32-48.

AI and the Responsibility Gap

This week we are talking about the responsibility gap that arises from deep learning systems. We read Mind the gap: responsible robotics and the problem of responsibility by David Gunkel along with Andreas Matthias’ article The responsibility gap: Ascribing responsibility for the actions of learning automata.

It seems the mixture of excitement and fear surrounding the rise of autonomous agents may be the result of challenges to our intuitions on the distinction between objects and subjects. This new philosophical realm can be analyzed at a theoretical level, involving ontological and epistemological questions, but these issues can also be examined through a practical lens. Considering there may be a substantial amount of debate on the ontological status of various robots and AIs, it might be helpful to consider issues of morality and responsibility as separate from the theoretical questions, at least for now. The reason for this differentiation is to remain focused on protecting users and consumers as new applications of deep learning continue to modify our ontological foundations and daily life. Although legislative details will depend on the answers to theoretical questions to some degree, there may be existing approaches to determining responsibility that can be altered and adopted. Just as research and development firms are responsible for the outcomes of their products and testing procedures (Gunkel 12), AI companies too will likely shoulder the responsibility for unintended and unpredictable side-effects of their endeavours. The degree to which an organization can accurately determine the responsible individual(s) or components will be less straightforward than it may have been historically, but this is due to the complexity of the tools we are currently developing. We are no longer mere labourers using tools for improved efficiency (Gunkel 2); humans are generating technologies which are on the verge of possessing capacities for subjectivity. Even today, the relationship between a DCNN and its creators seems to have more in common with a child-parent relationship than an object-subject relationship. This implies companies are responsible for their products even when they misbehave, as the debacle surrounding Tay.ai demonstrates (Gunkel 5).
It won’t be long, however, before we outgrow these concepts and our laws and regulations are challenged yet again. In spite of this, it is not in our best interest to wait until theoretical questions are answered before drafting policies aimed at protecting the public.

Works Cited

Gunkel, David J. “Mind the gap: responsible robotics and the problem of responsibility.” Ethics and Information Technology (2017): 1-14.

Is Opacity a Fundamental Property of Complex Systems?

While operational opacity generated by machine learning algorithms presents a wide range of problems for ethics and computer science (Burrell 10), one type in particular may be unavoidable due to the nature of complex processes. The physical underpinnings of functional systems may be difficult to understand because of the way data is stored and transmitted. Just as patterns of neural activity seem conceptually distant from first-person accounts of subjective experiences, the missing explanation for why or how a DCNN arrives at a particular decision may actually be a feature of the system rather than a bug. Systems capable of storing or processing large amounts of data may only be capable of doing so because of the way nested relationships are embedded in their structure. Furthermore, many of the human behaviours or capacities researchers are trying to understand and copy are both complex and emergent, making them difficult to fully trace back to the physical level of implementation. When we do trace them back, the result often looks strange and quite chaotic. For example, molecular genetics suggests various combinations of nucleotides give rise to different types of cells and proteins, each with highly specialized and synergistic functions. Additionally, complex phenotypes like disease dispositions are typically the result of many interacting genotypic factors in conjunction with the presence of certain environmental variables. If it turns out to be the case that a degree of opacity is a necessary component of convoluted functionality, we may need to rethink our expectations of how ethics can inform the future of AI development.

Works Cited

Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016).