Virtuous Circles

Prior to the advent of artificial intelligence, cybernetics introduced a theory for the development of autonomous agents through functional circularity. These systems feed their own outputs back in as inputs, generating feedback loops1 that create a self-regulating process capable of maintaining autonomy and stability.2 A simple example is the thermostat, which uses sensors to measure air temperature.3 The reading is compared to a set threshold, and heat is generated if the value falls below the desired temperature. Once the temperature has risen to meet the threshold, the thermostat detects the change and shuts off heat production. If the temperature later drops below the threshold, the thermostat repeats the process automatically.
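The thermostat's loop can be sketched in a few lines of Python. This is a toy model, not a real control system; the threshold, starting temperature, and heating/cooling rates are invented for illustration:

```python
# Toy thermostat: the output (heat) feeds back into the input (room temperature).
threshold = 20.0  # desired temperature in degrees C (arbitrary)
temp = 17.0       # current sensor reading
history = []

for minute in range(30):
    heating = temp < threshold        # compare the reading to the set point
    temp += 0.5 if heating else -0.3  # heat warms the room; otherwise it cools
    history.append((round(temp, 1), heating))

print(history[-1])
```

Rather than running away in either direction, the loop settles into a small oscillation around the threshold, which is the self-regulation the cyberneticists were pointing at.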

Circular causality has been considered problematic since ancient Greece, given its tendency to produce logical paradoxes.4 One example is "This statement is false": if the statement is true, then what it asserts holds, so it must be false. If it is evaluated as false, then what it asserts fails, so it must be true. In this instance of self-reference, the contradiction cannot be resolved.
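One way to make the dead end concrete is to translate the statement into code. This is a toy illustration, not a formal semantics: defining the statement's truth value as the negation of itself produces a recursion with no base case, so evaluation never bottoms out.

```python
def liar():
    # "This statement is false": its truth value is defined as
    # the negation of itself, so evaluation chases its own tail.
    return not liar()

try:
    liar()
    print("resolved")  # never reached
except RecursionError:
    print("no stable truth value: evaluation never terminates")
```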

As mentioned in a previous post, some instances of self-reference are not paradoxical, particularly with respect to living organisms. A cat can learn to pounce by noting the difference between its desired end-state, leaping on top of a toy, for example, and its actual end-state, falling short and missing it. The reason this doesn't produce a paradox is that the cat, as a natural system, contains far more complexity than the statement above. Visual and tactile cues can be reinterpreted by its relatively sophisticated brain to adjust its muscle movements on the next jump, reducing the error until it lands successfully.
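This kind of error-driven adjustment can be caricatured as a loop that shrinks the gap between desired and actual outcome on each attempt. The numbers and the 0.5 correction gain below are invented for illustration; nothing here is meant as a model of feline neurology:

```python
target = 1.0  # desired end-state: distance needed to land on the toy (arbitrary units)
jump = 0.4    # first attempt falls short

for attempt in range(10):
    error = target - jump  # feedback: compare desired and actual end-state
    jump += 0.5 * error    # adjust the next jump by a fraction of the error
    print(f"attempt {attempt + 1}: landed at {jump:.3f}")
```

Each pass through the loop cuts the remaining error in half, so after a handful of attempts the jump lands essentially on target: self-reference as convergence rather than contradiction.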

The cat contains parts which act as wholes, while the statement does not; the statement's parts are simple and concrete. The nervous system can be considered a whole in its own right, a functional unit operating within a larger system: the animal as a self-contained individual. The statement, however, generates a paradox because it hits a dead end, so to speak; there is no additional functionality with which to resolve the act of self-reference. Essentially, it contains only two parts: the statement and a binary value.

The argument I am proposing is that formal systems, even complex ones, are two-dimensional, while natural systems, even simple ones, are three-dimensional. The two dimensions are structure and information; in the paradox example above, the structure is the statement and the information is the binary value. Natural systems, however, include structure, information, and energy. This third dimension, energy, is added as a consequence of the type of structure involved. I'm still working on this part and might change my mind.

I can see why esoteric wisdom is presented in metaphor; it’s difficult to articulate abstract metaphysical ideas. Following suit, let’s think about a circle in two dimensions versus three dimensions.

A line segment which loops back upon itself creates a circle. The metaphor is a snake which eats its own tail and dies of starvation; the paradoxical statement "dies" as a result of the contradiction it generates. It cannot be both true and not true, and it breaks itself in the process.

Ouroboros photo by Leo Reynolds

Fun fact: you can save a snake from starving to death in this situation by holding up a vessel of strong rubbing alcohol to its nose. This triggers its gag reflex which frees its tail.

In three dimensions, however, we gain a new axis, one which allows for upward movement. The circle becomes a spiral as it gains height in this new dimension. Though we come back to where we started, something has been added: we can look down and see the vertical distance traveled. Alternatively, the spiral can descend, coming full circle but on a lower plane. The former is considered a virtuous circle, the latter a vicious circle.
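In coordinates, the ascending circle is simply a helix. This is the standard textbook parametrization, not anything drawn from the cybernetics literature:

```latex
x(t) = r\cos t, \qquad y(t) = r\sin t, \qquad z(t) = c\,t
```

After one full turn, $t \to t + 2\pi$, the point returns to the same $(x, y)$ but sits higher by $2\pi c$; choosing a negative $c$ gives the descending, vicious version of the same circle.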

Vizcaya Museum and Gardens in Miami, Florida
Photo by Mary Mark Ockerbloom

These examples may illustrate the point abstractly, but how are we to explain this in terms of ordinary physics? The answer seems to reside in the levels created by nested systems, which yield irreducible parts that can be leveraged in cases of self-reference.

For example, the nervous system is an integrated whole5 which responds to the organism's own actions. Selecting a particular pathway provides a linear input-output process: from sensory mechanisms in the skin, through the nerves in the spinal cord, into the primary sensory cortex of the brain, to the motor cortex, and back down again to the hand. The system can be represented as a simpler interaction, but it cannot be reduced to a simpler system. As a unit, the nervous system contains additional functionality which the parts do not possess. Leveraging other aspects of the nervous system, say visual information, provides a "way out" of any situation which might otherwise produce a paradox.

At least this is what I'm thinking for now. A complex whole composed of subsystems can account for self-reference. Can a formal system, even one composed of a number of complex subsystems, account for self-reference? Perhaps in some cases, if the system is "looking inward" from a broader scope than the thing it is referring to. In cases where self-reference fails, it might be due to a need to "move outward" and reference something beyond the current scope. It might also be due to a logical paradox arising from within, as in Gödel's incompleteness theorems, or perhaps due to the fact that formal systems cannot account for semantics. I'm not exactly sure at this point. I am going to keep working on this, but for now, here's an attempt at clarification.

neuralblender.com

Works Cited

1 Francis Heylighen and Cliff Joslyn, “Cybernetics and Second-Order Cybernetics,” in Encyclopedia of Physical Science and Technology (Third Edition), ed. Robert A. Meyers (New York: Academic Press, 2003), 160, https://doi.org/10.1016/B0-12-227410-5/00161-7.

2 John Johnston, The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI (The MIT Press, 2008), 26, https://doi.org/10.7551/mitpress/9780262101264.001.0001.

3 Norbert Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine, 2nd ed. (Cambridge, MA: The MIT Press, 1948), 131, https://doi.org/10.7551/mitpress/11810.001.0001.

4 Thomas Fischer and Christiane M. Herr, “An Introduction to Design Cybernetics,” in Design Cybernetics: Navigating the New, ed. Thomas Fischer and Christiane M. Herr (Cham, Switzerland: Springer International Publishing, 2019), 2, https://doi.org/10.1007/978-3-030-18557-2_1.

5 Wiener, Cybernetics, 13.