Category: Life

Epistemic Responsibility Today

Section 6 of Miller and Record’s “Justified Belief in a Digital Age” provides suggestions for responsible belief formation given the role and influence algorithms possess in today’s society. The notions they present, however, are vague and appear shortsighted. They suggest “subjects can use existing competencies for gaining information from traditional media such as newspapers to supplement internet-filtered information and therefore at least partly satisfy the responsibility to determine whether it is biased or incomplete” (130), except that the nature of ‘traditional media’ (TM) has shifted. Since the widespread adoption of social media platforms and online news streaming, TM has seen an increase in competition as small and independent news websites are also shared between users. Importantly, expectations for endless novel content have pressured TM to keep up by increasingly producing editorials, commentary, and speculation. Pundits receive as much airtime as journalists due to the nature of consumer demand, subsequently influencing belief formation. Political bias in TM is also a large concern, as journalistic integrity and credibility vary drastically between companies. Additionally, TM is now more likely to be subsumed under an umbrella corporation with an agenda of its own, whether political, financial, or religious. Deference to TM has always carried epistemic risks, and reasons to be sceptical of stories and information are growing as technology modifies our consumption habits.

Further down on page 130, it is recommended that one explore outside one’s personalized feed by investigating others’ posting history: “Instead he can casually visit their Facebook profiles and see whether they have posted an interesting story that the automatically generated news feed missed.” While this does improve the chances of being exposed to diverse content, it is most effective when one reads the feeds of contrasting personalities. Close friends and family members may hold similar attitudes, values, or perspectives which do not adequately challenge one’s suspicions or beliefs. Opposing views, however, may not be justified or well formed, and ‘opposing’ is open to interpretation. On page 131 the authors state: “… suggests, internet sites, such as political blogs, may refer their readers to alternative views, for example, by linking to opposing sites, out of a commitment to pluralism.” If this program were followed, it would suggest that religious individuals with dogmatic beliefs are epistemically irresponsible. This may be an unexciting verdict to a philosopher, but it is difficult to determine whether this normative approach to belief formation is suitable for all humans.

Epistemic justification is complicated in the digital age, and it is unclear how much research is required to fulfill one’s epistemic responsibilities. If one stumbles across a scientific claim, it seems reasonable that one ought to determine whether the news headline matches the outcome of the study. Considering that the replication crisis has further complicated this process, how much scientific scrutiny is required at this point? If a reader has an understanding of scientific methodology and access to the article, is it irresponsible not to examine the methods section? As appealing as epistemic responsibility seems, it might be unattainable given the nature of the internet and human emotion. Our ability to access such a wealth of knowledge, even when curtailed by algorithms, generates an infinite regress of duties and uncertainty, a fact unlikely to sit well with the average voter.

Works Cited

Miller, Boaz, and Isaac Record. “Justified belief in a digital age: On the epistemic implications of secret Internet technologies.” Episteme 10.2 (2013): 117-134.

Addiction by Design: Candy Crush et al.

For class this week, we read the first four chapters of Natasha Schüll’s book Addiction by Design. I think the goal was to consider the similarities and differences between slot machines and gaming applications on handheld devices.

While the two addictions are comparable despite their differences in gameplay format, apps like Candy Crush have found profitable solutions to their unique problems. Developers expect players to “leave their seats,” as cellphone use generally orbits around other aspects of daily life. While “time on device” (58) is surely an important part of app design, creating incentives for users to return is also significant. Though this may be accomplished in a number of ways, a common strategy is to generate frequent notifications that both remind and seduce users back to their flow state (49). This approach may seem less inviting than sounds and lights, but its ability to display explicit directions may be effective: text can promise specific rewards if the user opens the app right then and there. A pay structure involving varying wait times may also push users to pay for the ability to return to “the zone” (2). This may take the form of watching an advertisement or being locked out of play for an hour to a day, frustrating users enough that they pay to continue. Much as ATMs were embedded in slot machines (72), app stores with saved credit card information allow developers to seamlessly lead users to the ‘purchase’ button, quickly increasing revenue. Financial transactions thinly disguised as part of the game offer a new way to siphon money from vulnerable individuals, especially parents of children with access to connected devices. Additionally, gaming apps are only weakly associated with physical money like bills and coins, unlike mid-twentieth-century slot machines (62), perhaps making it easier for consumers to pay without drawing their attention to the movement of money. This brief analysis suggests the nature of gambling is evolving by modifying existing modes of persuasion and adapting to new technological environments.

One large concern, however, arises from where this money goes; while governmental agencies oversee regulations (91) and collect revenue (5) to fund programs and projects, private companies simply accumulate capital. This carries severe implications for individuals, communities, and economies as this alternative stream of income dries up. State and provincial legislators should therefore consider addressing this issue sooner rather than later.

Works Cited

Schüll, Natasha Dow. Addiction by design: Machine gambling in Las Vegas. Princeton University Press, 2014.

Algorithmic Transparency and Social Power

This term I’m taking the course Science and Ethics, and this week we read Langdon Winner’s 1980 article “Do Artifacts Have Politics?” along with a 2016 paper by Brent Daniel Mittelstadt and colleagues titled “The ethics of algorithms: Mapping the debate.” We are encouraged to write weekly responses, and considering the concerning nature of what these articles discuss, I thought mine should be presented here. There is definitely a lot that could be expanded upon, which I might consider doing at a later time.

Overall, the two articles suggest that risks of discriminatory outcomes are an aspect of technological advancement, especially when power imbalances are present or inherent. “The ethics of algorithms: Mapping the debate” focuses particularly on algorithmic design and its current lack of transparency (Mittelstadt 6). The authors note that this is an epistemic concern, as developers are unable to determine how a decision is reached, which in turn leads to normative problems. Algorithmic outcomes can generate discriminatory practices which overgeneralize and treat groups of people erroneously (Mittelstadt 5). Thus, given the elusive epistemic nature of current algorithmic design, individuals throughout an entire organization can truthfully claim ignorance of their own business practices, and some may take advantage of this fact. Today, corporations that successfully integrate their software into the daily lives of many millions of users have little incentive to change, due to shareholder desires for financial growth. Until we change a system that implicitly lets companies pay a fee to act unethically, in the form of legal settlements outside of court, this problem is likely to continue to manifest. This does not inspire confidence in the future of AI as we hand over our personal information to companies and governments (Mittelstadt 6).

Langdon Winner’s essay on whether artifacts have politics provides a compelling argument for the inherently political nature of our technological objects. Though published in 1980, its wisdom and relevance can be readily applied to contemporary contexts. Internet memes even pick up on this parallel; one example poses as a message from Microsoft declaring that those who program open-source software are communists. While roles of leadership are required for many projects or organizations (Winner 130), inherently political technologies have the hierarchy of social functioning built into their conceptual foundations, according to Winner (133). The point the author aims to stress concerns technological effects that impede social functioning (Winner 131), a direction we have yet to move away from, considering the events leading up to and following the 2016 American presidential election. If we don’t strive for better epistemic and normative transparency, we risk authoritarian outcomes. As neural networks continue to creep into sectors of society like law, healthcare, and education, the protection of individual rights remains at risk.

Works Cited

Mittelstadt, Brent Daniel, et al. “The ethics of algorithms: Mapping the debate.” Big Data & Society 3.2 (2016): 1-21.

Winner, Langdon. “Do artifacts have politics?.” Daedalus 109.1 (1980): 121-36.

Update: Phil of Bio

The University of Guelph has a Philosophy of Biology course, and it was everything I was hoping it would be. Jointly taught by Dr. Stefan Linquist and Dr. Ryan Gregory, its focus on arguments surrounding epigenetics led many of us to agree that there isn’t really a lot of new information there. The book Extended Heredity: A New Understanding of Inheritance and Evolution turned out to be hilariously contradictory, as many of the concepts it presents can be easily explained by existing biological theories. I had an opportunity to receive feedback on ideas I have about Chalmers’ “bridging principles” and how biological processes produce subjective feelings. As I suspected, an incredible amount of work needs to be done to pull these ideas together, but I have a direction now. The project is being placed on the back burner, though, and so is my attempt to work on consciousness at school. I’m not too worried; I’ll get to it later.

For now, I’m going to work on an argument for an upcoming need to reconsider our conception of robots and our relationships with them, particularly as they begin to resemble subjects rather than objects. There is a growing demand for robotic solutions within healthcare, suggesting certain functionality must be incorporated to achieve particular outcomes. Information processing related to social cues and contexts, such as emotional expression, will be important for upholding patient dignity and fostering well-being. Investigating Kismet’s architecture suggests cognition and emotion operate in tandem to orient agents toward goals and methods for obtaining them. The result of this functional setup, however, is that it requires humans to treat Kismet like a biological organism, implying a weak sense of subjectivity. I’m also interested in considering objections to the subjectivity argument and reasons why our relationships with robots will remain relatively unchanged.

My original post on the philosophy of biology cited the entry from the Stanford Encyclopedia of Philosophy authored by Paul Griffiths. I learned earlier this term that Dr. Linquist studied under Dr. Griffiths, a fact that should not be surprising but is still quite exciting.

I’m looking forward to working on this project and to the feedback and learning it will bring, but I expect to get knocked down many levels over the next six months or so. I mean, that’s why I am here.

Works Cited

Bonduriansky, Russell, and Troy Day. Extended heredity: a new understanding of inheritance and evolution. Princeton University Press, 2018.

First Impressions

Hello, and thanks for checking out my website!

It’s been quite a while since I last owned mollygraham.net and it’s pretty strange to think about all that has happened between then and now. Back in 2010, I was in Vancouver earnestly trying to develop my skills as a seamstress and clothing designer, and this site used to have images of my first attempts at creating my own brand. Oh how the times have changed…

This post will be a little more personal than the others, just because it’s my first and I want to set the stage for both who I am and the ideas I intend to pursue. For the most part, future posts will be centered around philosophy and ideas about artificial intelligence and human cognition.

However, please allow me to briefly introduce myself.

Fashion design (actually the arts in general) is ruthlessly cut-throat, and at some point I realized it would require more dedication than I was willing to provide. It was not for lack of energy that I relegated this skill back to a hobby; I just had a feeling there might be something better suited to my abilities that I could pursue instead. Actually, it was the need for a website that led me to learn about HTML and CSS, which eventually introduced me to programming in general. Since I tend to be somewhat pragmatic, and knew this field was secure and paid well, I decided to take courses on software development at BCIT. I was extremely lucky and managed to land two awesome developer jobs after a year of part-time courses. I worked as a junior programmer for almost a year and a half, and learned a lot about computer science and technology, as well as about working at medium and large-scale companies. However, Vancouver’s dreary weather was starting to gnaw at my psyche, and I had always been interested in living in Toronto, so off I went. It was while pursuing programming jobs here that I decided to apply to the University of Toronto, but this time for psychology.

While working at Western Union Business Solutions, I was introduced to the idea of artificial intelligence. We actually had an office book club (I hope it still lives to this day) which would meet for an hour every Friday to discuss a book relevant to software development. After we finished one on object-oriented programming, it came time for a vote, and Hofstadter’s brilliant Gödel, Escher, Bach was chosen. Not only did it blow my mind, but it led our conversations to AI. The notion captivated me immediately and I had to know more. This had to be the single coolest idea I had ever heard. Moreover, one of my colleagues (… Ari? I have been struggling for many years to remember the name of the person who said this) mentioned that consciousness might be a recursive function. BOOM. Since I already had a decent understanding of psychology from taking it in high school, I could kind of see how this was possible. I had to pursue this.

So when I enrolled at U of T, I figured I’d major in psychology and minor in computer science. The faculty of computer science didn’t offer the practical, hands-on approach the polytechnic did, and, feeling disillusioned, I switched to philosophy to round out my theoretical understanding of both the mind and computer science. It turned out that I actually love philosophy, and had been quietly philosophizing for most of my life without even realizing it. I just love reading new perspectives and ideas, so I felt right at home writing essays about abstract topics.

It was in third year when the idea hit me. I envisioned a rough outline of an account of how the phenomenon of consciousness came into being, ontologically speaking. Furthermore, these ideas could be isomorphically implemented in computers or machines. After years of feeling directionless and unsure about what I wanted to pursue, I finally “felt my calling.” Carl Jung was spot on: “People don’t have ideas, ideas have people.”

Now it’s the last term of my fourth year, and I am taking a seminar in philosophy of mind which will give me the chance to write about these ideas and, better yet, get feedback on them. Eventually I will go on to do a graduate degree, but I haven’t done any research about it yet. After I graduate, I will take a year to work and write, as well as devote time to my other hobbies, like sewing and practicing the cello.

I apologize for the rambling autobiography, but I wanted to give my readers a sense of where I’m coming from. I also want to document the thoughts and feelings I have had over the last several years, perhaps as a way to appreciate the growth and changes I’ve been through. This is just the beginning though; the engine has been fueled and I will do whatever it takes to build a conscious machine.

Anyway, the rest of my entries will not be about my life, and if I do sprinkle in my personal stories from time to time, I will keep them brief and modest with the aim to relate them to wider contexts.

Thanks for reading!