Category: Philosophy

Horty’s Defaults with Priorities for Artificial Moral Agents

Can we build robots that act morally? Wallach and Allen’s book Moral Machines investigates a variety of approaches for creating artificial moral agents (AMAs) capable of making appropriate ethical decisions, one of which I find somewhat interesting. On page 34, they briefly mention “deontic logic,” a version of modal logic that uses the concepts of obligation and permission rather than necessity and possibility. This formal system is able to derive conclusions about how one ought to act given certain conditions; for example, if one is permitted to perform some act α, then it follows that one is under no obligation to refrain from α. Problems arise, however, when agents are faced with conflicting obligations (McNamara). For example, if Bill sets an appointment for noon, he is obligated to arrive on time; but if Bill’s child were to suddenly have a medical emergency ten minutes prior, in that moment he would face conflicting obligations. Though the right course of action may be fairly obvious in this case, the problem itself still requires some consideration before a decision can be reached. One way to approach this dilemma is to create a framework capable of overriding specific commitments when the situation warrants it. As such, John Horty’s Defaults with Priorities may be useful for developing AMAs, as it enables an agent to adjust its behaviour based on contextual information.
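To put those two claims in symbols (my own shorthand using the standard deontic operators O for ‘obligatory’ and P for ‘permitted’, not notation quoted from McNamara):

Pα ≡ ¬O¬α — being permitted to do α just is having no obligation to refrain from α
O(arrive at noon) ∧ O(attend to the child) — two obligations that cannot both be fulfilled

The second line is exactly the sort of conflict a standard deontic system handles poorly, since nothing in the logic itself says which obligation should give way.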

Roughly speaking, a default rule works much like a logical implication, where some antecedent A leads to some consequent B if A obtains. A fairly straightforward default rule might state that if a robot detects an obstacle, it must retreat and reorient itself in an effort to avoid the obstruction. There may be cases, however, where this action is not ideal, suggesting the robot needs a way to dynamically switch behaviours based on the type of object it runs into. Horty’s approach suggests that by adding a defeating set containing conflicting rules, the default implication can essentially be cancelled out and new conclusions can be derived about a scenario S (373). Unfortunately, the example Horty uses to demonstrate this move stipulates that Tweety the bird is a penguin, seemingly just to show how adding rules nullifies the default implication. I will attempt to capture the essence of Horty’s awkward example by replacing ‘Tweety’ with ‘Pingu’, as it saves the reader cognitive energy. Let’s suppose, then, that we can program a robot to conclude that, by default, birds fly (B→F). If the robot also knew that penguins are birds which do not fly (P→B ∧ P→¬F), it would be able to determine that Pingu is a bird that does not fly based on the defeating set. According to Horty, this process can be thought of along the lines of justification, where individuals provide reasons for their beliefs, an idea I thought would be pertinent for AMAs. Operational constraints aside, systems could provide log files detailing the rules and information used when making decisions surrounding some situation S. Moreover, rather than hard-coding rules and information, machine learning may be able to provide the algorithm with the inferences it needs to respond appropriately to environmental stimuli. Using examples simpler than categories of birds and their attributes, it seems feasible that we could test this approach to determine whether it may be useful for building AMAs one day.
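To make this concrete, here is a minimal sketch of how prioritized defaults might be implemented in Python. It is a toy of my own construction rather than Horty’s formal machinery: the Default class, the rule names, and the simple ‘a stronger triggered rule for the opposite conclusion defeats a weaker one’ policy are all illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Default:
    name: str
    premise: str      # antecedent that must already be established
    conclusion: str   # literal the default supports, e.g. "flies" or "not flies"
    priority: int     # higher number = more specific, stronger reason

def negate(literal: str) -> str:
    return literal[4:] if literal.startswith("not ") else "not " + literal

def extend(facts: set[str], defaults: list[Default]) -> set[str]:
    """Apply triggered defaults unless a stronger triggered default supports
    the opposite conclusion (a crude stand-in for Horty's defeat relation)."""
    conclusions = set(facts)
    changed = True
    while changed:
        changed = False
        for d in defaults:
            if d.premise not in conclusions or d.conclusion in conclusions:
                continue
            defeated = any(r.premise in conclusions
                           and r.conclusion == negate(d.conclusion)
                           and r.priority > d.priority
                           for r in defaults)
            if not defeated and negate(d.conclusion) not in conclusions:
                conclusions.add(d.conclusion)
                changed = True
    return conclusions

rules = [
    Default("birds fly", "bird", "flies", priority=1),
    Default("penguins are birds", "penguin", "bird", priority=2),
    Default("penguins do not fly", "penguin", "not flies", priority=2),
]

print(extend({"penguin"}, rules))
# -> {'penguin', 'bird', 'not flies'}; the more specific rule defeats "birds fly"

Logging which rules fired and which were defeated on each pass would provide exactly the kind of justification trail described above.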

Now, I have a feeling that this approach is neither special nor unique within logic and computer science more generally, but for some reason the thought of framing robot actions from the perspective of deontic logic seems like it might be useful somehow. Maybe it’s due to the way deontic terminology is applied to modal logic, acting like an interface between moral theory and computer code. I just found the connection to be thought-provoking, and after reading Horty’s paper, began wondering whether approaches like these may be useful for developing systems that are capable of justifying their actions by listing the reasons used within the decision-making process.

Works Cited

Horty, John. “Defaults with priorities.” Journal of Philosophical Logic 36.4 (2007): 367-413.

McNamara, Paul. “Deontic Logic.” The Stanford Encyclopedia of Philosophy (Summer 2019 Edition), edited by Edward N. Zalta, https://plato.stanford.edu/archives/sum2019/entries/logic-deontic/.

Rescuing Qualia

In “Quining Qualia,” Dennett states that “conscious experience has no properties that are special in any of the ways qualia have been supposed to be special,” where qualia are considered “special properties, in some hard-to-define way.” His appeals to intuition aim to defend these ideas; however, the examples he provides may fail to convince the reader, as objections can be drawn from an understanding of nervous system functioning and from examining human behaviour. Here, I’m interested in providing an explanation of qualia which does not rely on some intrinsic property of the mind, but instead treats them as a product of culture which influences, and is influenced by, individual humans and their subjective experiences.

To be facetious for a moment, if qualia did not exist, how could one explain why humans feel compelled to spend energy, time, and money on creating, sharing, and experiencing art? Dennett might appeal to the nature of subjective experiences or perhaps to our motivation for seeking pleasure; however, there is much more to subjective experience than one’s feelings or the mental representations evoked by some stimulus. Knowledge surrounding a particular stimulus may shape the way it feels or appears from a first-person perspective; consider, for example, mistaking a benign object for a threat of some kind. A coat and hat hanging on a wall hook inside a dark room may be mistaken for a person, perhaps causing one to feel threatened or startled by the apparent intruder, only to discover the truth after turning on the lights. The subjective experience prompted by the sight of the coat and hat is different from what it would have been had the figure indeed been an unexpected guest, primarily due to the relief one is likely to feel upon discovering the reality of the situation. In the case of experiencing art, subjective experiences may change over time or with repeated exposure, but our minds are also influenced by the minds of others. The ability to communicate our feelings to others introduces additional perspectives surrounding a particular stimulus, potentially altering one’s own perception and subsequent experiences. These shared ideas or experiences are then represented through cultural artifacts, practices, or beliefs, which aim to depict associations between sensations and perceptions. In this way, qualia are features of the natural world insofar as they are a result of evolution and human intelligence, becoming “real” as they shape the ways individuals experience and interact with various stimuli.

Not all subjective experiences become qualia, though, as some perceptions are more difficult to articulate than others. How does one articulate one’s visual experience of red? It may remind you of something, but it doesn’t necessarily feel like much to merely look at a red object. I can infer that you probably see the colour red the way I do when I consider your behaviour around colourful objects. If someone were to indicate their inability to distinguish colours in the same way that I do, I might perform a quick test to verify the experiential discrepancy. Regardless of individual perception, however, there is still “something it is like” to see the colour red as most of us do, and we are able to create representations appealing to this visual quality. Articulating the nature of ‘red’ on its own is rather tough because its qualities aren’t a composite of other visual qualities per se, at least not in the way that ‘orange’ is. From this perspective, qualia emerge through the act of communicating our experiences to others and through identifying the various phenomenological aspects they contain. Qualia feel real to humans because we use them to engage with artistic practices, almost like Dawkins’ memes but saturated in visceral associations with various sensations and perceptions.

If qualia aren’t real, then why does a collection of piano chords remind Debussy and other listeners of clouds? Language enables us to describe our subjective experiences using similes, where one environmental feature reminds us of something else. These associations are likely to follow certain regularities given the laws and constraints of our universe and our physiology, resulting in similarities between subjective and shared experiences. I doubt any listener will associate Debussy’s pieces with the eruption of Krakatoa, but it seems reasonable to assume some individuals may think of water rather than the sky when listening to Nuages. Thus, it could be suggested that a stimulus evokes a potential set of qualia that humans may refer to when considering their own subjective experiences. Exactly which qualia are included and excluded is roughly determined by how the stimulus affects individuals as a result of their physiological functioning.

Qualia are products of human culture, not biology. The evolution of primates, along with their tendency to socialize and enjoy participating in shared activities, gave rise to shared experiences and various ways to depict or describe them. Human cultures create classifications, distinctions, and ontological categories as a way to explain natural phenomena and to share knowledge. This collective understanding of how our subjective experiences appear to others facilitates bonding as humans learn they are able to relate to the private experiences of others.

Works Cited

Dennett, Daniel C. “Quining Qualia.” Consciousness in Contemporary Science, Oxford University Press, 1988.

Coin Toss in an Alternate Universe

I came across this reddit post a couple of years ago and thought it was quite funny. I can see Randall Munroe of xkcd comics drawing up a really good depiction of this imaginary phenomenon too.

“According to the multi-world theory, there is a universe where every flipped coin has landed on heads, completely by chance. Imagine rooms full of machines, just flipping coins with scientists baffled as to why it happens”

According to OP in the comments of the reddit post, this world “would have identical physics, [where] this just happens by chance” and “physics aren’t different in this universe, the incident with coins only landing on heads is pure probability, not a law.” I like to imagine there would be individuals dedicating their entire research careers to this phenomenon, maybe pulling out their hair as no solid evidence emerges to explain why it keeps happening.
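For a sense of just how much luck that universe is running on, here is a quick back-of-the-envelope sketch in Python; the flip counts are arbitrary choices of mine, not numbers from the post.

# The chance that n fair, independent flips all land heads is (1/2)**n.
for n in (10, 100, 1000):
    print(f"{n} flips, all heads: about {0.5 ** n:.3e}")

Even a thousand machine-flips put the odds somewhere around 10^-302, so the baffled scientists can be forgiven.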

If you, dear reader, feel inspired to fulfill my dream of illustrating this scene, I would excitedly add it to the bottom of this post with full credit to you! Wilfred is tired and would like to retire; in this universe he studied the coin toss phenomenon in his free time.

Philosophy of Humour

When learning about theatre back in high school, my drama teacher mentioned comedy arises from two basic principles:
1. It’s funny because it’s not me
2. It’s funny because it’s true

This has probably been said at one point, but I would like to offer a third principle for consideration:
3. It’s funny because it’s me

Why is this different from the second principle? While there may be some overlap, we often think of ourselves as separate from the typical procedures by which we determine truth values. Sure, we are able to run through a list of propositions about ourselves and evaluate them like any other, but there is something more at play here.

Sometimes our feelings hint at things we aren’t ready to confront. Are you able to look yourself in the mirror and say “it is true that I am ___”? Maybe for certain characteristics this is easy, but others may be more difficult to admit. Our laughter, however, suggests we have understood some property of the world and may be able to relate it to other things, perhaps to ourselves and others, in ways that are less explicit or unarticulated. We may feel amused for several reasons, one of which may involve a certain level of meta-analysis. Perhaps deep down we are aware of a character trait we are not proud of but are able to recognize it in a moment of leisure. This openness to information may allow us to acknowledge aspects of our life or personality which we typically tend to hide or try to fix. Humour, especially reflexive humour, which turns the examination process back on oneself, can be therapeutic insofar as it allows us to understand ourselves without feeling pressure to do anything about it. The first step to change is the recognition that something exists or must be better understood, and in this way humour cracks open the door to aspects of ourselves we wish to turn away from. The pleasure which accompanies laughter and humour allows us to relax and see through feelings of embarrassment or defensiveness.

Internet memes provide us with a way to laugh at ourselves and share our vulnerabilities with others. They serve as a reminder that we are human with troubles, flaws, and fears, but they also remind us that we are not alone. It’s easy to get wrapped up in our work, goals, and expectations as we compare ourselves with others and their accomplishments. As much as these aspects of life are important to some degree, we must always remember that the image others present to us is just a segment of their reality. Humour, especially when shared with others, reminds us to breathe; life is more than a to-do list of tasks.

There is a rich body of philosophical literature on humour that I have not yet had the pleasure of reading, but one day I will. As much as I would like to add more to my Philosophy of Memes page, it’s a slow process because I should be focusing on school work! Until then, these considerations will be relatively uninformed and personal, and I look forward to rereading and laughing at my ramblings 20 years from now.

Why Science Needs Philosophy

My peers within the department often joke about life after university, considering the whole world seems to scoff at those interested in pursuing arts and humanities (A&H) degrees. This opinion piece by Laplane and colleagues, however, is an important reminder of the value of our discipline, regardless of how much money we end up making in the future. As institutional funding is reallocated to support students pursuing more profitable degrees like computer science and engineering, A&H departments are likely to suffer, unable to hire new faculty and forced to limit course selection, for example. Unless philosophers can market their skills to assist with projects from a variety of sectors, I don’t see how society will continue to support our endeavours, perspectives, and concerns. Although notions of “anti-elitism” seem to continue to grow in the United States, perhaps Canada will challenge my pessimistic attitude on this subject and find innovative ways to support its A&H graduates, but we will see. All of this suggests philosophers may need to do their own advocacy, demonstrating the financial value of creativity and scepticism, especially within business, science, and technology. Consider this entry my early attempt at convincing you, dear reader, that philosophy is much more than writing about central figures such as Kant, Aristotle, or Frege.

Although Laplane discusses many important points throughout, the end of the article is quite interesting as it suggests ways to foster the relationship between science and philosophy. Now, I’m not quite sure who said this to me, but they presented the idea that philosophy and science are able to discuss the same topic in different ways. While science may prefer ‘what’ questions, philosophy tends to ask ‘why’, and perhaps even ‘how’, concepts, principles, or processes emerge. Though this generalization may oversimplify the relationship between the two, I merely wanted to point out their approximate differences. Laplane herself states that “…we see philosophy and science as located on a continuum” (3950), which suggests both an overlap and a distinction in the questions each discipline asks. It is important to remember the common ground, in addition to the diversity in perspectives, between science and philosophy as we consider new ways to unite these two fields of inquiry.

While I agree with all six recommendations on page 3951, the fourth and fifth stood out to me as the most important, especially when it comes to developing this program in the future. The marriage of science and philosophy can only be as good as its thinkers, and education serves a central role if this relationship is to be harmonious and fruitful. From primary school to post-secondary, it will become increasingly important to teach both arts and sciences of various types to foster the integration of the two. I say ‘arts’ rather than ‘philosophy’ because developing a love for the arts may inspire individuals in ways philosophy cannot. Artistic expression, regardless of medium, allows one to improve their sense of self, and when combined with educational goals, is likely to facilitate personal and professional growth more effectively than either alone. Whether it is sculpting, poetry, or dance, artistic expression provides mechanisms for new approaches within the sciences as one remains in touch with their creative side. Although it might be difficult to understand how theatre may inspire work in civil engineering, the human brain is quite powerful in its ability to “fill in the blanks” and synthesize concepts, if the opportunity arises. Most exciting of all is how access to information via the internet and online relationships can further assist individuals in these efforts.

Returning to philosophy, though, Laplane makes an important point about why philosophical inquiry is so appropriate for science. On page 3950, after the excerpt mentioned above, she states:

“Philosophy and science share the tools of logic, conceptual analysis, and rigorous argumentation. Yet philosophers can operate these tools with degrees of thoroughness, freedom, and theoretical abstraction that practicing researchers often cannot afford in their daily activities.”

It is exactly this freedom which inspired me to move away from studying psychology to studying philosophy of mind. Of course, too much of a good thing can lead one astray, which is why empirical evidence and the methodologies which produce it must never be overlooked by philosophers. The ability to defer to experts is a powerful bidirectional tool which carries so much potential for the future, and maybe one day those interested in A&H subjects will find their niche within capitalistic economies.

Works Cited

Laplane, Lucie, et al. “Opinion: Why science needs philosophy.” Proceedings of the National Academy of Sciences 116.10 (2019): 3948-3952.

Epistemic Responsibility Today

Section 6 of Miller and Record’s “Justified Belief in a Digital Age” provides suggestions for responsible belief formation given the role and influence algorithms possess in today’s society. The notions they present, however, are vague and appear shortsighted. They suggest “subjects can use existing competencies for gaining information from traditional media such as newspapers to supplement internet-filtered information and therefore at least partly satisfy the responsibility to determine whether it is biased or incomplete” (130), except the nature of ‘traditional media’ (TM) has shifted. Since the widespread adoption of social media platforms and online news streaming, TM has seen an increase in competition as small and independent news websites are also shared between users. Importantly, expectations for endless novel content have pressured TM to keep up by increasingly producing editorials, commentary, and speculation. Pundits receive as much airtime as journalists due to the nature of consumer demand, subsequently influencing belief formation. The notion of political bias in TM is also a large concern, where journalistic integrity and credibility range drastically between companies. Additionally, TM is now more likely to be subsumed under an umbrella corporation with an agenda of its own, whether political, financial, or religious. Deference to TM has always been associated with epistemic risks, and reasons to be sceptical of stories and information are growing as technology modifies our consumption habits.

Further down on page 130, it is recommended that one explore beyond one’s personalized feed by investigating others’ posting history: “Instead he can casually visit their Facebook profiles and see whether they have posted an interesting story that the automatically generated news feed missed.” While this does improve the chances of being exposed to diverse content, it is most effective when one reads the feeds of contrasting personalities. Close friends and family members may hold similar attitudes, values, or perspectives which do not adequately challenge one’s suspicions or beliefs. Opposing views, however, may not be justified or well-formed, and ‘opposing’ is open to interpretation. On page 131 the authors state: “… suggests, internet sites, such as political blogs, may refer their readers to alternative views, for example, by linking to opposing sites, out of a commitment to pluralism.” If this program were followed, it would suggest religious individuals with dogmatic beliefs are epistemically irresponsible. This may be an unexciting verdict to a philosopher, but it is difficult to determine whether this normative approach to belief formation is suitable for all humans.

Epistemic justification is complicated in the digital age, and it is unclear how much research is required to fulfill one’s epistemic responsibilities. If one stumbles across a scientific claim, it seems reasonable that one ought to determine whether the news headline matches the outcome of the study. But considering the replication crisis has further complicated this process, how much scientific scrutiny is required at this point? If a reader has an understanding of scientific methodology and access to the article, is it irresponsible not to examine the methods section? As ideal as epistemic responsibility seems, it might be unattainable due to the nature of the internet and human emotion. Our ability to access such a wealth of knowledge, even when curtailed by algorithms, generates an infinite regress of duties and uncertainty, a fact unlikely to sit well with the average voter.

Works Cited

Miller, Boaz, and Isaac Record. “Justified belief in a digital age: On the epistemic implications of secret Internet technologies.” Episteme 10.2 (2013): 117-134.

AI and the Responsibility Gap

This week we are talking about the responsibility gap that arises from deep learning systems. We read “Mind the gap: responsible robotics and the problem of responsibility” by David Gunkel along with Andreas Matthias’ article “The responsibility gap: Ascribing responsibility for the actions of learning automata.”

It seems the mixture of excitement and fear surrounding the rise of autonomous agents may be the result of challenges to our intuitions about the distinction between objects and subjects. This new philosophical realm can be analyzed at a theoretical level, involving ontological and epistemological questions, but these issues can also be examined through a practical lens. Considering there may be a substantial amount of debate on the ontological status of various robots and AIs, it might be helpful to treat issues of morality and responsibility as separate from the theoretical questions, at least for now. The reason for this differentiation is to remain focused on protecting users and consumers as new applications of deep learning continue to modify our ontological foundations and daily life. Although legislative details will depend on the answers to theoretical questions to some degree, there may be existing approaches to determining responsibility that can be adapted and adopted. Just as research and development firms are responsible for the outcomes of their products and testing procedures (Gunkel 12), AI companies too will likely shoulder the responsibility for unintended and unpredictable side-effects of their endeavours. The degree to which an organization can accurately determine the responsible individual(s) or components will be less straightforward than it may have been historically, but this is due to the complexity of the tools we are currently developing. We are no longer mere labourers using tools for improved efficiency (Gunkel 2); humans are generating technologies which are on the verge of possessing capacities for subjectivity. Even today, the relationship between a deep convolutional neural network (DCNN) and its creators seems to have more in common with a child-parent relationship than an object-subject relationship. This implies companies are responsible for their products even when those products misbehave, as the debacle surrounding Tay.ai demonstrates (Gunkel 5). It won’t be long, however, before we outgrow these concepts and our laws and regulations are challenged yet again. In spite of this, it is not in our best interest to wait until the theoretical questions are answered before drafting policies aimed at protecting the public.

Works Cited

Gunkel, David J. “Mind the gap: responsible robotics and the problem of responsibility.” Ethics and Information Technology (2017): 1-14.

Is Opacity a Fundamental Property of Complex Systems?

While the operational opacity generated by machine learning algorithms presents a wide range of problems for ethics and computer science (Burrell 10), one type in particular may be unavoidable due to the nature of complex processes. The physical underpinnings of functional systems may be difficult to understand because of the way data is stored and transmitted. Just as patterns of neural activity seem conceptually distant from first-person accounts of subjective experience, the missing explanation for why or how a DCNN arrives at a particular decision may actually be a feature of the system rather than a bug. Systems capable of storing or processing large amounts of data may only be capable of doing so because of the way nested relationships are embedded in their structure. Furthermore, many of the human behaviours or capacities researchers are trying to understand and reproduce are both complex and emergent, making them difficult to trace back to the physical level of implementation; when we do manage the trace, the result often looks strange and quite chaotic. For example, molecular genetics suggests various combinations of nucleotides give rise to different types of cells and proteins, each with highly specialized and synergistic functions. Additionally, complex phenotypes like disease dispositions are typically the result of many interacting genotypic factors in conjunction with the presence of certain environmental variables. If it turns out that a degree of opacity is a necessary component of convoluted functionality, we may need to rethink our expectations of how ethics can inform the future of AI development.
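As a toy illustration of the point (my own example, not one drawn from Burrell): the tiny network below can learn the XOR rule perfectly, yet nothing in its learned weight matrices reads like ‘exclusive or’.

import numpy as np

# A 2-4-1 network trained on XOR with plain gradient descent; with this seed
# and enough steps it usually converges to predictions near [0, 1, 1, 0].
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = np.tanh(X @ W1 + b1)                    # hidden activations
    out = sigmoid(h @ W2 + b2)                  # network predictions
    delta = (out - y) * out * (1 - out)         # output-layer error signal
    dhid = (delta @ W2.T) * (1 - h ** 2)        # backpropagated hidden error
    for p, g in ((W2, h.T @ delta), (b2, delta.sum(0)),
                 (W1, X.T @ dhid), (b1, dhid.sum(0))):
        p -= 0.5 * g                            # gradient step

print("predictions:", out.round(3).ravel())
print("W1:\n", W1.round(2))                     # nothing here says 'exclusive or'
print("W2:\n", W2.round(2))

If two small weight matrices for a four-row truth table are already this unreadable, the millions of parameters inside a DCNN hardly stand a chance.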

Works Cited

Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016).

Addiction by Design: Candy Crush et al.

For class this week, we read the first four chapters of Natasha Schüll’s book Addiction by Design. I think the goal was to consider the similarities and differences between slot machines and gaming applications on handheld devices.

While the two addictions are comparable despite their differences in gameplay format, apps like Candy Crush have found profitable solutions to their unique problems. Developers expect players to “leave their seats,” as cellphone use generally orbits around other aspects of daily life. While “time on device” (58) is surely an important part of app design, creating incentives for users to return is also significant. Though this may be accomplished in a number of ways, a common strategy is to generate frequent notifications to both remind and seduce users back to their flow state (49). Overall, the approach may seem less inviting than sounds and lights, but its ability to display explicit directions may be effective: text can specify rewards if the user opens the app right then and there. A pay structure involving variable wait times may also push users to pay for the ability to return to “the zone” (2). This may take the form of watching an advertisement or being locked out of play for intervals from an hour to a day, frustrating users enough that they pay to continue playing. Much as embedding ATMs in slot machines did (72), app stores with saved credit card information allow developers to seamlessly lead users to the ‘purchase’ button, quickly increasing revenue. Financial transactions thinly disguised as part of the game offer a new way to siphon money from vulnerable individuals, especially parents of children with access to connected devices. Additionally, gaming apps are only weakly associated with physical money like bills and coins, unlike slot machines from the mid-20th century (62), perhaps making it easier for consumers to pay without drawing their attention to the movement of money. This brief analysis suggests the nature of gambling is evolving by modifying existing modes of persuasion and adapting to new technological environments.

One large concern, however, arises from where this money goes; while governmental agencies oversee regulations (91) and collect revenue (5) to fund programs and projects, private companies simply accumulate capital. This carries severe implications for individuals, communities, and economies as this alternative stream of income dries up. Therefore, it could be suggested that state and provincial legislators should consider addressing this issue sooner rather than later.

Works Cited

Schüll, Natasha Dow. Addiction by design: Machine gambling in Las Vegas. Princeton University Press, 2014.

Algorithmic Transparency and Social Power

This term I’m taking the course Science and Ethics, and this week we read Langdon Winner’s 1980 article “Do Artifacts Have Politics?” along with a 2016 paper by Brent Daniel Mittelstadt and colleagues titled “The ethics of algorithms: Mapping the debate.” We are encouraged to write weekly responses, and considering the concerning nature of what these articles discuss, I thought mine should be presented here. There is definitely a lot that could be expanded upon, which I might consider doing at a later time.

Overall, the two articles suggest that risks of discriminatory outcomes are an aspect of technological advancement, especially when power imbalances are present or inherent. “The ethics of algorithms: Mapping the debate” focuses particularly on algorithmic design and its current lack of transparency (Mittelstadt 6). The authors note that this is an epistemic concern, as developers are unable to determine how a decision is reached, and that it leads to normative problems: algorithmic outcomes potentially generate discriminatory practices which may overgeneralize and treat groups of people erroneously (Mittelstadt 5). Thus, given the elusive epistemic nature of current algorithmic design, individuals throughout an entire organization can truthfully claim ignorance of their own business practices, and some may take advantage of this fact. Today, corporations that manage to successfully integrate their software into the daily lives of many millions of users have little incentive to change, due to shareholder desires for financial growth. Until we change a system which implicitly suggests companies can simply pay a fee, in the form of legal settlements outside of court, to act unethically, this problem is likely to continue to manifest. This does not inspire confidence in the future of AI as we hand over our personal information to companies and governments (Mittelstadt 6).

Langdon Winner’s article on whether artifacts have politics provides a compelling argument for the inherently political nature of our technological objects. While the paper was published in 1980, its wisdom and relevance can be readily applied to contemporary contexts. Internet memes even pick up on this parallel; one example poses as a message from Microsoft stating that those who program open-source software are communists. While roles of leadership are required for many projects or organizations (Winner 130), inherently political technologies have the hierarchy of social functioning as part of their conceptual foundations, according to Winner (133). The point the author aims to stress concerns technological effects which impede social functioning (Winner 131), a direction we have yet to move away from, considering the events leading up to and following the 2016 American presidential election. If we don’t strive for better epistemic and normative transparency, we risk being met with authoritarian outcomes. As neural networks continue to creep into various sectors of society, such as law, healthcare, and education, the protection of individual rights remains at risk.

Works Cited

Mittelstadt, Brent Daniel, et al. “The ethics of algorithms: Mapping the debate.” Big Data & Society 3.2 (2016): 1-21.

Winner, Langdon. “Do artifacts have politics?” Daedalus 109.1 (1980): 121-136.