AI and the Responsibility Gap

This week we are talking about the responsibility gap that arises from deep learning systems. We read David Gunkel’s “Mind the gap: responsible robotics and the problem of responsibility” along with Andreas Matthias’ article “The responsibility gap: Ascribing responsibility for the actions of learning automata.”

It seems the mixture of excitement and fear surrounding the rise of autonomous agents may result from challenges to our intuitions about the distinction between objects and subjects. This new philosophical terrain can be analyzed at a theoretical level, involving ontological and epistemological questions, but it can also be examined through a practical lens. Given that the ontological status of various robots and AIs will likely remain contested for some time, it may be helpful to treat questions of morality and responsibility as separate from the theoretical ones, at least for now. The reason for this separation is to stay focused on protecting users and consumers as new applications of deep learning continue to reshape our ontological foundations and daily life.

Although legislative details will depend to some degree on the answers to theoretical questions, existing approaches to determining responsibility may be adapted in the meantime. Just as research and development firms are responsible for the outcomes of their products and testing procedures (Gunkel 12), AI companies too will likely shoulder the responsibility for unintended and unpredictable side effects of their endeavours. Determining which individuals or components are responsible will be less straightforward than it has been historically, but that is a consequence of the complexity of the tools we are now developing. We are no longer mere labourers using tools for improved efficiency (Gunkel 2); humans are generating technologies on the verge of possessing capacities for subjectivity. Even today, the relationship between a DCNN and its creators seems to have more in common with a parent-child relationship than an object-subject one. This implies companies are responsible for their products even when those products misbehave, as the debacle surrounding Tay.ai demonstrates (Gunkel 5). It won’t be long, however, before we outgrow these concepts and our laws and regulations are challenged yet again. Even so, it is not in our best interest to wait until the theoretical questions are answered before drafting policies aimed at protecting the public.

Works Cited

Gunkel, David J. “Mind the gap: responsible robotics and the problem of responsibility.” Ethics and Information Technology (2017): 1-14.

Is Opacity a Fundamental Property of Complex Systems?

While the operational opacity generated by machine learning algorithms presents a wide range of problems for ethics and computer science (Burrell 10), one type in particular may be unavoidable due to the nature of complex processes. The physical underpinnings of functional systems may be difficult to understand because of the way data is stored and transmitted. Just as patterns of neural activity seem conceptually distant from first-person accounts of subjective experience, the missing explanation for why or how a DCNN arrives at a particular decision may be a feature of the system rather than a bug. Systems capable of storing or processing large amounts of data may only be able to do so because of the way nested relationships are embedded in their structure. Furthermore, many of the human behaviours and capacities researchers are trying to understand and reproduce are both complex and emergent, making them difficult to trace back to the physical level of implementation; when we do manage to, the result often looks strange and chaotic. For example, molecular genetics suggests various combinations of nucleotides give rise to different types of cells and proteins, each with highly specialized and synergistic functions. Likewise, complex phenotypes such as disease dispositions are typically the result of many interacting genotypic factors in conjunction with certain environmental variables. If a degree of opacity turns out to be a necessary component of complex functionality, we may need to rethink our expectations of how ethics can inform the future of AI development.
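To make this concrete, here is a minimal sketch of the point, assuming scikit-learn is installed; the toy task, the model, and its settings are purely illustrative and not taken from Burrell’s paper. Even a tiny network trained on a trivial problem does its job through weights that do not read as any human-interpretable rule.

import numpy as np
from sklearn.neural_network import MLPClassifier

# A trivial task: XOR, which no linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# A tiny network with one hidden layer of four units.
clf = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                    solver="lbfgs", max_iter=5000, random_state=0)
clf.fit(X, y)

print(clf.predict(X))                     # the behaviour is easy to verify...
for i, w in enumerate(clf.coefs_):
    print(f"layer {i} weights:\n{w}")     # ...but the weights themselves explain little

Scaling this up to millions of parameters only widens the gap between what the system does and what an inspection of its parts can tell us.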

Works Cited

Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016).

Addiction by Design: Candy Crush et al.

For class this week, we read the first four chapters of Natasha Schüll’s book Addiction by Design. I think the goal was to consider the similarities and differences between slot machines and gaming applications on handheld devices.

While the two addictions are comparable despite their differences in gameplay format, apps like Candy Crush have found profitable solutions to their unique problems. Developers expect players to “leave their seats,” as cellphone use generally orbits around other aspects of daily life. While “time on device” (58) is surely an important part of app design, creating incentives for users to return is also significant. Though this may be accomplished in a number of ways, a common strategy is to generate frequent notifications that both remind and seduce users back to their flow state (49). This approach may seem less inviting than sounds and lights, but its ability to display explicit directions can be effective: text can promise specific rewards if the user opens the app right then and there. A pay structure involving varying wait times may also push users to pay for the ability to return to “the zone” (2). This may take the form of watching an advertisement or being locked out of play for intervals ranging from an hour to a day, frustrating users enough that they pay to continue playing. Much as ATMs embedded in slot machines do (72), app stores with saved credit card information allow developers to seamlessly lead users to the ‘purchase’ button, quickly increasing revenue. Financial transactions thinly disguised as part of the game offer a new way to siphon money from vulnerable individuals, especially parents of children with access to connected devices. Additionally, gaming apps are only weakly associated with physical money like bills and coins, unlike mid-twentieth-century slot machines (62), perhaps making it easier for consumers to pay without drawing their attention to the movement of money. This brief analysis suggests the nature of gambling is evolving by modifying existing modes of persuasion and adapting to new technological environments.

One large concern, however, arises from where this money goes: while governmental agencies oversee regulations (91) and collect revenue (5) to fund programs and projects, private companies simply collect capital. This carries severe implications for individuals, communities, and economies as this alternative stream of income dries up. State and provincial legislators should therefore consider addressing the issue sooner rather than later.

Works Cited

Schüll, Natasha Dow. Addiction by design: Machine gambling in Las Vegas. Princeton University Press, 2014.

Algorithmic Transparency and Social Power

This term I’m taking the course Science and Ethics, and this week we read Langdon Winner’s 1980 article “Do Artifacts Have Politics?” along with a 2016 paper by Brent Daniel Mittelstadt and colleagues titled “The ethics of algorithms: Mapping the debate.” We are encouraged to write weekly responses, and given the concerning nature of what these articles discuss, I thought mine should be presented here. There is definitely a lot that could be expanded upon, which I might do at a later time.

Overall, the two articles suggest that the risk of discriminatory outcomes is an aspect of technological advancement, especially when power imbalances are present or inherent. “The ethics of algorithms: Mapping the debate” focuses particularly on algorithmic design and its current lack of transparency (Mittelstadt 6). The authors note that this is an epistemic concern, as developers are unable to determine how a decision is reached, and that it leads to normative problems. Algorithmic outcomes can generate discriminatory practices which generalize and treat groups of people erroneously (Mittelstadt 5). Thus, given the elusive epistemic nature of current algorithmic design, individuals throughout an entire organization can truthfully claim ignorance of their own business practices, and some may take advantage of this fact. Today, corporations that manage to integrate their software into the daily lives of many millions of users have little incentive to change, given shareholder desires for financial growth. Until we change a system which implicitly suggests companies can simply pay a fee, in the form of out-of-court legal settlements, for acting unethically, this problem is likely to continue to manifest. None of this inspires confidence in the future of AI as we hand over our personal information to companies and governments (Mittelstadt 6).

Langdon Winner’s article on whether artifacts have politics provides a compelling argument for the inherently political nature of our technological objects. While the paper was published in 1980, its wisdom and relevance apply readily to contemporary contexts. Internet memes even pick up on this parallel; one example poses as a message from Microsoft stating that those who program open-source software are communists. While roles of leadership are required for many projects or organizations (Winner 130), inherently political technologies build a hierarchy of social functioning into their conceptual foundations, according to Winner (133). The point the author stresses concerns technological effects which impede social functioning (Winner 131), a direction we have yet to move away from, considering the events leading up to and following the 2016 American presidential election. If we don’t strive for better epistemic and normative transparency, we will be met with authoritarian outcomes. As neural networks continue to creep into various sectors of society, such as law, healthcare, and education, the protection of individual rights remains at risk.

Works Cited

Mittelstadt, Brent Daniel, et al. “The ethics of algorithms: Mapping the debate.” Big Data & Society 3.2 (2016): 1-21.

Winner, Langdon. “Do artifacts have politics?.” Daedalus 109.1 (1980): 121-36.

Update: Phil of Bio

The University of Guelph has a Philosophy of Biology course, and it was everything I was hoping it would be. Jointly taught by Dr. Stefan Linquist and Dr. Ryan Gregory, the course focused on arguments surrounding epigenetics and led many of us to agree that there isn’t really a lot of new information there. The book Extended Heredity: A New Understanding of Inheritance and Evolution turned out to be hilariously contradictory, as many of the concepts it presents can be easily explained by existing biological theories. I had an opportunity to receive feedback on ideas I have about Chalmers’ “bridging principles” and how biological processes produce subjective feelings. As I suspected, an incredible amount of work needs to be done to pull these ideas together, but I have a direction now. The project is being placed on the back burner, though, and so is my attempt to work on consciousness at school. I’m not too worried; I’ll get to it later.

For now, I’m going to work on an argument that we will soon need to reconsider our conception of robots and our relationships with them, particularly as they begin to resemble subjects rather than objects. There is a growing demand for robotic solutions within healthcare, suggesting certain functionality must be incorporated to achieve particular outcomes. Information processing related to social cues and contexts, such as emotional expression, will be important for upholding patient dignity and fostering well-being. Investigating Kismet’s architecture suggests cognition and emotion operate in tandem to orient agents toward goals and methods for obtaining them. The result of this functional setup, however, is that it leads humans to treat Kismet like a biological organism, implying a weak sense of subjectivity. I’m also interested in considering objections to the subjectivity argument and reasons why our relationships with robots will remain relatively unchanged.

My original post on the philosophy of biology cited the entry from the Stanford Encyclopedia of Philosophy authored by Paul Griffiths. I learned earlier this term that Dr. Linquist studied under Dr. Griffiths, a fact that should not be surprising but is still quite exciting.

I’m looking forward to working on this project and to the feedback and learning that come with it, even though I am going to get knocked down many levels over the next six months or so. I mean, that’s why I am here.

Works Cited

Bonduriansky, Russell, and Troy Day. Extended heredity: a new understanding of inheritance and evolution. Princeton University Press, 2018.

Programming Emotions

Last summer, I was introduced to the world of hobby robotics and began building an obstacle-avoidance bot as a way to learn the basics. Once classes started last September, all projects were set aside until I graduated so I could focus on school. Now that I have free time, I’ve been thinking about what kind of robot to build next. It will probably still have wheels and an ultrasonic sensor, but I want it to behave based on its internal environment as well as its external environment. Not only will it detect objects in its path, but it will also move about based on its mood or current emotional state. For example, if it were afraid of loud noises, it would go “hide” against a nearby object. This specific functionality would require the robot to have a microphone to detect sounds, which is something I have been thinking of adding. Otherwise, the only input the robot has is object detection, and producing or calculating emotions based on the frequency of things in its path is kind of boring. I have also been interested in operationalizing, codifying, and programming emotions for quite a while now, and this project would be a great place to start.

One helpful theory I came across is the Three-Factor Theory (3FT) developed by Mehrabian and Russell in 1974 (Russell and Mehrabian 274). It describes emotions as ranging over a three-dimensional space consisting of values for pleasure, arousal, and dominance. For example, a state of anger is associated with -.68 for pleasure, +.22 for arousal, and +.10 for dominance (Russell and Mehrabian 277). After mulling over these averages for a second, I feel they are fairly reflective of general human nature, but let’s not forget these values depend on personality and contextual factors too.

However, the notion of ‘dominance’ doesn’t feel quite right, and I wonder if a better paradigm could take its place. To me, the idea of being dominant or submissive seems quite similar to the approach/avoidance dichotomy used in areas of biology and psychology. ‘Dominance’ is inherently tied to social situations, and a broader theory of emotion must account for non-social circumstances as well. The compelling argument from the approach/avoidance model centres on hedonism, motivation, and goal acquisition: if a stimulus is pleasurable or beneficial, individuals are motivated to seek it out, while undesirable or dangerous stimuli are avoided in order to protect oneself (Elliot 171). This also works well with the Appraisal Theory of emotion, which argues that affective states indicate an individual’s needs or goals (Scherer 638). Therefore, I will be using a value range based on approach/avoidance rather than dominance. While human emotions tend to involve much more than a simple judgement about a situation, the Appraisal Theory should suffice for a basic robot. One last modification I would like to make in my version of the 3FT is changing ‘pleasure’ to ‘valence’. This is merely to reflect the style of language used in current psychological literature, where positive values are associated with pleasure and negative values are associated with displeasure. I also like this because robots don’t feel pleasure (yet?) but they are capable of responding to “good” and “bad” types of stimuli. ‘Arousal’ is perfectly fine as it is, as it reflects how energetic or excited the individual is. For example, being startled results in high arousal due to the relationship between the amygdala, hypothalamus, and other local and distal regions in the body, which typically prepare the individual to run or fight (Pinel 453-454).
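As a rough illustration of what this three-dimensional space looks like in code (a sketch only: aside from the anger values quoted above from Russell and Mehrabian, the other prototype numbers and the helper function are placeholders I made up for this post):

import math

# (pleasure, arousal, dominance) prototypes on a -1 to +1 scale.
# Anger is from Russell and Mehrabian (277); the other two are made-up placeholders.
PROTOTYPES = {
    "anger":   (-0.68, 0.22, 0.10),
    "fear":    (-0.60, 0.70, -0.50),
    "content": (0.70, -0.20, 0.40),
}

def nearest_emotion(state):
    """Label a (pleasure, arousal, dominance) point by its closest prototype."""
    return min(PROTOTYPES, key=lambda name: math.dist(state, PROTOTYPES[name]))

print(nearest_emotion((-0.5, 0.3, 0.0)))   # -> "anger"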

To summarize, the three factors I will be using are valence, arousal, and approach/avoidance. As much as I would love to find a term to replace ‘approach/avoidance’ for the sake of a nice acronym, I have yet to find one which encapsulates the true nature of the phenomenon. Anyway, this modified 3FT seems to be a good start for developing emotional states in a simple robot, especially one that only receives a narrow range of sensory input and does not perform any other sophisticated behaviours. While this robot will possess internal states, it won’t be able to reflect upon them or exert any degree of control over them. Heck, I won’t even be using any type of AI algorithm in this version. So if anyone is spooked by a robot who feels, just know that it won’t be able to take over the world.
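Below is a rough sketch of how this modified 3FT could drive behaviour on the robot. Everything here is hypothetical: the event names stand in for readings from the ultrasonic sensor and the microphone I may add, and the numbers are guesses I would have to tune on real hardware.

class EmotionalState:
    """Valence, arousal, and approach/avoidance, each kept within [-1, 1]."""
    def __init__(self):
        self.valence = 0.0
        self.arousal = 0.0
        self.approach = 0.0   # +1 = approach, -1 = avoid

    def update(self, d_valence, d_arousal, d_approach):
        clamp = lambda x: max(-1.0, min(1.0, x))
        self.valence = clamp(self.valence + d_valence)
        self.arousal = clamp(self.arousal + d_arousal)
        self.approach = clamp(self.approach + d_approach)

def react(state, event):
    """Nudge the emotional state based on a sensor event, then pick a behaviour."""
    if event == "loud_noise":
        state.update(-0.4, +0.6, -0.5)   # startled: unpleasant, energized, avoidant
    elif event == "obstacle_ahead":
        state.update(-0.1, +0.2, -0.2)
    if state.arousal > 0.5 and state.approach < 0:
        return "hide_against_nearest_object"
    if state.approach < -0.3:
        return "back_away"
    return "wander"

robot_mood = EmotionalState()
print(react(robot_mood, "loud_noise"))   # -> "hide_against_nearest_object"

When the loud-noise event fires, arousal jumps and approach drops, so the robot picks the “hide” behaviour described earlier.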

Works Cited

Elliot, Andrew J. “Approach and avoidance motivation and achievement goals.” Educational psychologist 34.3 (1999): 169-189.

Pinel, John PJ. Biopsychology. Boston, MA: Pearson, 2011.

Russell, James A., and Albert Mehrabian. “Evidence for a three-factor theory of emotions.” Journal of research in Personality 11.3 (1977): 273-294.

Scherer, Klaus R. “Appraisal theory.” Handbook of cognition and emotion (1999): 637-663.

Ontic Structural Realism

John Worrall’s paper “Structural realism: The best of both worlds?” mostly outlines the debate within the philosophy of science over whether scientific realism or anti-realism best captures our intuitions and intentions regarding the scientific process and its history. Although this topic is a fascinating discussion on its own, it will not be the focus of this article. Instead, I’d like to explore Worrall’s reply and its implications when approached from a metaphysical perspective. At the end of his paper, Worrall concludes that combining an aspect of realism with an anti-realist attitude produces a potential solution to the dilemma. Structural realism describes a perspective which can account for the predictive success of science while also explaining how scientific revolutions changed theories and research practices throughout history (Worrall 123). Structural realists believe that when scientific theories undergo conceptual change, their form or structure remains constant while the content of a theory may be modified based on new empirical findings (Worrall 117). Worrall believes this account can explain how scientific theories undergo both growth and replacement over time (Worrall 120).

However, structural realism can be further divided into two categories which support differing views. The epistemic structural realist (ESR) believes that all we can learn through scientific inquiry is the accuracy of a theory’s inherent structure, not the concepts or entities themselves (Ladyman SEP). The more extreme version, ontic structural realism (OSR), states that there are no objects or things, and that the universe is composed only of structures, forms, and relations (Ladyman SEP). Van Fraassen describes this position as “radical structuralism” (van Fraassen 280) and appeals to science’s use of mathematical formulas as motivation for the view (van Fraassen 304). Since physics uses math to describe how the physical world operates, and complex bodies of organic machinery operate based on the rules of physics, objects found in nature can theoretically be explained in mathematical terms. Although our conceptions of these entities may change dramatically over time, their mathematical descriptions and relations tend to expand as new discoveries are incorporated into existing theories (van Fraassen 305).

Although the discussion surrounding structural realism originally aimed to answer problems in the philosophy of science, OSR eventually became a metaphysical view in its own right. This is partly due to discoveries made in physics over the past century, which have shaped how we think about the natural world, especially in the subatomic domain (Ladyman SEP). At one point, the atom was thought to be the smallest unit of matter in the universe, its original Greek root atomos meaning ‘indivisible’ (merriam-webster.com). Today, however, we run experiments which not only divide atoms but smash them together in order to inspect the pieces that make up subatomic particles themselves. Furthermore, the more our understanding of quantum physics grows, the less neatly the world seems to be packaged.

My goal here is not to convince you to fully adopt the OSR perspective, but to consider it as a tool for drafting instances of artificial general intelligence and artificial consciousness. Personally, I find this approach to understanding reality very interesting and am compelled by the notion of “structures all the way down.” However, some may find it problematic because the relata seem to be missing. What is the ‘stuff’ the structure is made out of? What exactly is being organized in a structural way? In a nutshell: smaller structures. An example is the relationship between chemical bonds and the physical laws which bind them together. Or consider a Swiss army knife made of small metal parts, where these pieces consist entirely of metal atoms arranged into shapes. But those atoms are themselves structures of subatomic particles, consisting of protons, neutrons, and electrons, and if we continue to zoom in, it turns out that these particles are just other particles arranged in different relations. Our contemporary understanding of “matter” and “mass” changes with technological improvements, which assist in the production and interpretation of scientific experiments. It may turn out that OSR becomes a useful perspective for conceptualizing our universe, especially as old assumptions are reworked to include new and possibly contradictory empirical discoveries.

Speaking of perception, if there are no objects, does everything exist in the mind, as Berkeley thought? I think the answer is a little ‘yes’ and a little ‘no’: the brain creates representations of concepts based on regularities learned by interacting with an environment. From an evolutionary perspective, the brain adapted to promote an individual’s survival by learning to recognize patterns and recall previous events. As neuronal structures developed to support new and more complex functions, the individual’s subjective awareness of their abilities grew as well, eventually producing mental concepts and linguistic labels. For example, neurons in the primary auditory cortex are arranged tonotopically, with cells responding to specific frequencies ordered by pitch from low to high (Romani, Williamson, and Kaufman 1339). In this sense, a C major scale played on a piano can be viewed as isomorphic to the way these frequencies are physically realized in the brain. Rather than the world or universe containing ‘a C major scale’, from an OSR perspective reality only contains the laws which govern how sound pressures and vibrations must exist and operate such that a musical scale can be realized in a human brain. From an individual’s perspective, however, this type of stimulus is represented as ‘music’ or ‘the sound of a piano’.
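To make the isomorphism a bit more concrete, here is a small sketch, assuming 12-tone equal temperament with A4 at 440 Hz (a tuning convention, not something from Romani and colleagues). The “C major scale” can be written entirely as relations between frequencies, and it is these relations, rather than any object called a scale, that end up tonotopically realized in the cortex.

# Frequencies of the C major scale, defined purely by their relation to a reference pitch.
A4 = 440.0
SEMITONES_FROM_A4 = {"C4": -9, "D4": -7, "E4": -5, "F4": -4,
                     "G4": -2, "A4": 0, "B4": 2, "C5": 3}

for note, n in SEMITONES_FROM_A4.items():
    freq = A4 * (2 ** (n / 12))    # each semitone is a fixed ratio of 2^(1/12)
    print(f"{note}: {freq:.2f} Hz")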

An unpublished article by Brian Cantwell Smith called Non-Conceptual World discusses a perspective similar to structural realism, claiming that concepts only exist in the mind (Cantwell Smith 8). One example Cantwell Smith mentions is the aftermath of the volcanic eruption which occurred in ancient Pompeii (see photo below). While he uses this as an example of how object recognition is an intentional process (Cantwell Smith 16), I was reminded of an optical illusion which shares important similarities with depictions of Pompeii. The Dalmatian Illusion (see photo below) is just a series of black and white dots, yet somehow the brain manages to pick out the form of a dog. In reality, however, there is no dog, and the image is deemed an illusion. The image of the Pompeii disaster, by contrast, indicates that there really is a person amidst the variety of grey blotches. In both instances the incoming visual information seems quite similar at first, but the stored contextual information associated with the visual stimuli creates a contrast in the conceptual representation of each “object.” I tend to agree with Cantwell Smith when he says “there aren’t any objects out there” (8), because an evolutionary perspective suggests the brain generated object-based concepts, not the universe.

If this is so, what can be said about human consciousness? Evolution increases genetic variation through reproduction and mutation while natural selection constrains it through environmental pressures, producing a mechanism for building and refining chemical and physiological structures. It seems likely that psychological structures are also subject to this same type of development, or at least impacted by it. Random genetic mutations may produce functional changes which affect neighbouring structures, requiring other neurons to update their internal and external organization as a result. If this change produces a large enough effect, a small patch of cortex or a set of connected regions may also be impacted, leading the individual to notice alterations in their motor or perceptual abilities. By viewing the brain as a structure of networks and configurations, it can be suggested that consciousness emerged from one or more self-organizing structures interacting with both internal and external environments over time.

Therefore, consciousness is an emergent property of the brain, but there may be more to this story, as will be discussed in future articles. Briefly, though: consciousness, or some version of intelligent self-awareness, may be a direct result of the self-organizing system which constitutes evolution. Could consciousness be an inevitable outcome of any instance of natural selection? Is there a threshold in the relevant variables which makes this outcome necessary, like the threshold for an action potential in a neuron? I am also looking forward to discussing Dynamical Systems Theory and Information Theory as they relate to these ideas.

OSR is important for designing artificial minds because of its power to generate isomorphic versions of laws from the natural world. If an object or entity can be conceptualized as a series of structures with functional regularities, machine code and mathematical systems may be able to generate models which produce similar behaviours or results. Since neural networks and deep learning have found success in perceptual recognition, perhaps a developmental approach will prove beneficial for artificial consciousness as well.

Works Cited

“Atom.” Merriam-Webster.com. Merriam-Webster, n.d. Web. 26 May 2018.

Blauw, Laura. Bodies in Pompeii. 2008, digital photograph. https://lauraotms.deviantart.com/art/Bodies-in-Pompeii-83587124

Ladyman, James, “Structural Realism”, The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/structural-realism/>.

Romani, Gian Luca, Samuel J. Williamson, and Lloyd Kaufman. “Tonotopic organization of the human auditory cortex.” Science 216.4552 (1982): 1339-1340.

Van Fraassen, Bas C. “Structure: Its shadow and substance.” The British Journal for the Philosophy of Science 57.2 (2006): 275-307.

Worrall, John. “Structural realism: The best of both worlds?.” Dialectica 43.1‐2 (1989): 99-124.

An Appeal to Philosophy of Biology

Subdivisions within the philosophy of science have many handy conceptual tools to offer those studying the philosophy of mind. For example, the philosophy of biology can provide insight into how the theory of evolution contributed to the development of the brain and its functions, why consciousness feels the way it does, and how humans became so intelligent and rational (Griffiths 2017). Questions within biology and other sciences are slowly answered as scientists gather evidence and connect it with other knowledge. A philosopher may ask similar questions (Griffiths 2017, section 8), but these are likely to differ in scope or level of abstraction. Appealing to evidence provides good epistemic reason to form a belief (typically and/or ideally) and may provide compelling answers for anyone inclined to follow this style of thinking¹.

What follows is a bold claim, but I am eager to demonstrate its effectiveness: consciousness can be explained in slightly metaphysical, somewhat psychological, and mostly biological² terms, and it’s time we checked out the evidence. Once a rough sketch of how the mind supervenes on the brain has been sufficiently outlined, we can create tests to answer further or lingering questions. If we work at collecting all sorts of information about the brain, body, and environment, organizing questions and findings in strategic ways, we can create an empirical account of the mind.

Topics to be discussed for an empirical account of consciousness include:

  • Anthropology and human history
  • Biology and its sub-fields
  • Cognitive psychology
  • Culture and social life
  • Developmental psychology; environmental influences, neural plasticity
  • Evolution; development of the nervous system
  • Linguistics; role of language on the brain
  • Neuroscience
  • Philosophy of mind; historical to current
  • Technology; mechanical, information

This is only the beginning, however, so I am sure there will be more. I did not include metaphysics and epistemology in that list because they’re implied throughout. If you think there is something I’m missing, either from the list or related to the post, feel free to email me.

For those of you who like foreshadowing or hints, check out the post on Ontic Structural Realism, which relates to the philosophy of science.

Notes:

  1. I say this with little or no fervor; there are people who agree and those who do not. Epistemology is beautifully dense and compelling, and I understand there are many sides and critiques.
  2. In a reductionist sense, where biological terms and concepts can be explained via chemistry, which can be explained via physics, etc. Moreover, this list of sources for evidence is not comprehensive.

Works Cited

Griffiths, Paul, “Philosophy of Biology”, The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2017/entries/biology-philosophy/>.

First Impressions

Hello, and thanks for checking out my website!

It’s been quite a while since I last owned mollygraham.net and it’s pretty strange to think about all that has happened between then and now. Back in 2010, I was in Vancouver earnestly trying to develop my skills as a seamstress and clothing designer, and this site used to have images of my first attempts at creating my own brand. Oh how the times have changed…

This post will be a little more personal than the others, just because it’s my first and I want to set the stage for both who I am and the ideas I intend to pursue. For the most part, future posts will be centered around philosophy and ideas about artificial intelligence and human cognition.

However, please allow me to briefly introduce myself.

Fashion design (actually the arts in general) is ruthlessly cut-throat, and at some point I realized it would require more dedication than I was willing to provide. It was not for a lack of energy that I relegated this skill back to a hobby; I just had a feeling there might be something better suited to my abilities that I could pursue instead. Actually, it was the need for a website which led me to learn about HTML and CSS, and eventually introduced me to programming in general. Since I tend to be somewhat pragmatic, and knew this field was secure and paid well, I decided to take courses on software development at BCIT. I was extremely lucky and managed to land two awesome developer jobs after a year of part-time courses. I worked as a junior programmer for almost a year and a half and learned a lot about computer science and technology, as well as about working at medium and large-scale companies. However, Vancouver’s dreary weather was starting to gnaw at my psyche, and I had always been interested in living in Toronto, so off I went. It was while I was pursuing programming jobs here that I decided to apply to the University of Toronto, but this time for psychology.

While working at Western Union Business Solutions, I was introduced to the idea of artificial intelligence. We actually had an office book club (I hope it still lives to this day) which would meet for an hour every Friday to discuss a book relevant to software development. After we finished one on object-oriented programming, it came time for a vote, and Hofstadter’s brilliant Gödel, Escher, Bach was chosen. Not only did it blow my mind, but it led the conversation toward AI. The notion captivated me immediately and I had to know more. This had to be the single coolest idea I had ever heard about. Moreover, one of my colleagues (… Ari? I have been struggling for many years to remember the name of the person who said this) mentioned that consciousness might be a recursive function. BOOM. Since I already had a decent understanding of psychology from taking it in high school, I could kind of see how this was possible. I had to pursue this.

So when I enrolled at U of T, I figured I’d major in psychology and minor in computer science. The computer science program didn’t offer the same practical, hands-on approach the polytechnic had, and, feeling disillusioned, I switched to philosophy to round out my theoretical understanding of both the mind and computer science. It turned out that I actually love philosophy and had been quietly philosophizing for most of my life without even realizing it. I just love reading new perspectives and ideas, so I felt right at home writing essays about abstract topics.

It was in third year when the idea hit me. I envisioned a rough outline of an account for how the phenomenon of consciousness came into being, ontologically speaking. Furthermore, these ideas could be isomorphically implemented into computers or machines. After years of feeling directionless and unsure about what I wanted to pursue, I finally “felt my calling.” Carl Jung was spot on: “People don’t have ideas, ideas have people.”

Now it’s my last term of fourth year, and I am taking a seminar in philosophy of mind which will give me the chance to write about these ideas and, better yet, get feedback on them. Eventually I will go on to do a graduate degree, but I haven’t done any research about that yet. After I graduate, I will take a year to work and write, as well as devote time to my other hobbies, like sewing and practicing the cello.

I apologize for the rambling autobiography, but I wanted to give my readers a sense of where I’m coming from. I also want to document the thoughts and feelings I have had over the last several years, perhaps as a way to appreciate the growth and changes I’ve been through. This is just the beginning though; the engine has been fueled and I will do whatever it takes to build a conscious machine.

Anyway, the rest of my entries will not be about my life, and if I do sprinkle in personal stories from time to time, I will keep them brief and modest, with the aim of relating them to wider contexts.

Thanks for reading!