Art+ificiality: Machine Creativity & Its Critics

 

§. In Sean D. Kelly’s A philosopher argues that AI can’t be an artist, the author declares at the outset:

“Creativity is, and always will be, a human endeavour.” (S. D. Kelly)

A bold claim, one which can hardly be rendered sensible without first defining ‘creativity,’ as the author well realizes, writing:

“Creativity is among the most mysterious and impressive achievements of human existence. But what is it?” (Kelly)

The author attempts to answer his own query in the two paragraphs that follow.

“Creativity is not just novelty. A toddler at the piano may hit a novel sequence of notes, but they’re not, in any meaningful sense, creative. Also, creativity is bounded by history: what counts as creative inspiration in one period or place might be disregarded as ridiculous, stupid, or crazy in another. A community has to accept ideas as good for them to count as creative.

 

As in Schoenberg’s case, or that of any number of other modern artists, that acceptance need not be universal. It might, indeed, not come for years—sometimes creativity is mistakenly dismissed for generations. But unless an innovation is eventually accepted by some community of practice, it makes little sense to speak of it as creative.” (Kelly)

§. Through Kelly, we have a definition-via-negation: ‘creativity is not just novelty,’ it is not random, it is a practice, bounded by history, and it must be communally accepted. This is an extremely vague definition of creativity, akin to describing transhumanism as “a non-random, sociohistorically bounded practice” which is also “not Nordicism, Aryanism, or Scientology.” While such a description is accurate (transhumanism is not constituted through or by the three aforementioned ideologies), it does not tell one much about what transhumanism is, for it could describe any philosophical system which is not Nordicism, Aryanism, or Scientology; just so, Kelly’s definition does not tell one much about what creativity is. If one takes the time to define one’s terms, one swiftly realizes that, in contradistinction to the article’s proclamation, creativity is most decidedly not unique to humans (dolphins, monkeys, and octopuses, for example, exhibit creative behaviors). One may rightly say that human creativity is unique to humans, but not creativity-as-such, and that is a crucial linguistic (and thus conceptual) distinction; especially since Kelly’s central argument is that a machine cannot be an artist (he is not claiming that a machine cannot be creative per se), a non-negative description of creativity is necessary. To quote The Analects: “If language is not correct, then what is said is not what is meant; if what is said is not what is meant, then what must be done remains undone; if this remains undone, morals and art will deteriorate; if justice goes astray, people will stand about in helpless confusion. Hence there must be no arbitrariness in what is said. This matters above everything” (Arthur Waley, The Analects of Confucius, New York: Alfred A. Knopf, 2000, p. 161).

§. A more rigorous definition of ‘creativity’ may be gleaned from Allison B. Kaufman, Allen E. Butt, James C. Kaufman, and Erin C. Colbert-White’s Towards a Neurobiology of Creativity in Nonhuman Animals, wherein they lay out a syncretic definition based upon the findings of 90 scientific research papers on human creativity.

Creativity in humans is defined in a variety of ways. The most prevalent definition (and the one used here) is that a creative act represents something that is different or new and also appropriate to the task at hand (Plucker, Beghetto, & Dow, 2004; Sternberg, 1999; Sternberg, Kaufman, & Pretz, 2002). […]

 

“Creativity is the interaction among aptitude, process, and environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context” (Plucker et al., 2004, p. 90). [Kaufman et al., 2011, Journal of Comparative Psychology, Vol. 125, No. 3, p.255]

§. This definition is both broadly applicable and congruent with Kelly’s own injunction that creativity is not a mere product of a bundle of novelty-associated behaviors (novelty seeking/recognition). This is true; however, novelty is fundamental to any creative process (human or otherwise). To put it more succinctly: creativity is a novelty-incorporating, task-specific, multivariate neurological function. Thus, argumentum a fortiori, creativity (broadly and generally speaking), like any other neurological function, can be replicated (or independently actualized in some as-yet-unknown way). Kelly rightly notes that (human) creativity is socially bounded; again, this is (largely) true, but whether or not a creative act is accepted as such at a later time is irrelevant to the objective structures which allow such behaviors to arise. That is to say, it does not matter whether one is considered ‘creative’ in any particular way, but rather that one understands how the nervous system generates certain creative behaviors (though it would matter as pertains to considerations of ‘artistry,’ given that the material conditions necessary for artistry to arise require an audience and thus the minimum sociality to instantiate it). I want to make clear that my specific interest here lies not in laying out a case for artificial general intelligence (AGI) or sapient comparability, nor even in contesting Kelly’s central claim that a machine intelligence could not become an artist, but rather in making the case that creativity-as-a-function can be generated without an agent.
Creativity is a biomorphic sub-function of intelligence, and intelligence is a particular material configuration; thus, when a computer exceeds human capacity in mathematics, it is not self-aware (insofar as we can tell) of its actions (that it is doing math, or how), but it is doing math all the same. It is functioning intelligently but not ‘acting.’ In the same vein, it should be possible for sufficiently complex systems to function creatively, regardless of whether such systems are aware of the fact. [The OpenWorm project is a compelling example of bio-functionality operating without either prior programming or cognizance.]
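The claim that creativity-as-a-function can be generated without an agent can be made concrete with a toy sketch. The following Python fragment implements the Plucker et al. (2004) definition quoted above in the crudest possible way: blindly generate candidates, then keep the one that is most novel (relative to a prior corpus) and most task-appropriate. All names, the scoring rules, and the miniature "corpus" are my own illustrative assumptions, not anyone's actual system; the point is only that nothing in the loop is aware of, or need be aware of, what it is doing.

```python
import random

random.seed(0)

CORPUS = {"CEGC", "CEGE", "CGEC"}   # previously "accepted" note sequences (assumed)
NOTES = "CDEFGAB"

def novelty(candidate: str, corpus: set) -> float:
    """Fraction of corpus items the candidate differs from at >= 2 positions."""
    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))
    return sum(distance(candidate, c) >= 2 for c in corpus) / len(corpus)

def usefulness(candidate: str) -> int:
    """Toy task-appropriateness: reward phrases that begin and end on C (a 'cadence')."""
    return (candidate[0] == "C") + (candidate[-1] == "C")

def generate(n: int = 500) -> str:
    """Blindly generate candidates; keep the one maximizing novelty + usefulness."""
    best, best_score = None, -1.0
    for _ in range(n):
        cand = "".join(random.choice(NOTES) for _ in range(4))
        score = novelty(cand, CORPUS) + usefulness(cand)
        if score > best_score:
            best, best_score = cand, score
    return best

result = generate()
print(result)  # a 4-note phrase, scored as novel relative to CORPUS and fitting the toy task
```

The system satisfies the definition's two criteria (novelty and task-appropriateness) purely mechanically; whether the "social context" clause can be satisfied is precisely what the essay goes on to dispute.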

“Advances in artificial intelligence have led many to speculate that human beings will soon be replaced by machines in every domain, including that of creativity. Ray Kurzweil, a futurist, predicts that by 2029 we will have produced an AI that can pass for an average educated human being. Nick Bostrom, an Oxford philosopher, is more circumspect. He does not give a date but suggests that philosophers and mathematicians defer work on fundamental questions to ‘superintelligent’ successors, which he defines as having ‘intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.’

 

Both believe that once human-level intelligence is produced in machines, there will be a burst of progress—what Kurzweil calls the ‘singularity’ and Bostrom an ‘intelligence explosion’—in which machines will very quickly supersede us by massive measures in every domain. This will occur, they argue, because superhuman achievement is the same as ordinary human achievement except that all the relevant computations are performed much more quickly, in what Bostrom dubs ‘speed superintelligence.’

 

So what about the highest level of human achievement—creative innovation? Are our most creative artists and thinkers about to be massively surpassed by machines?

 

No.

 

Human creative achievement, because of the way it is socially embedded, will not succumb to advances in artificial intelligence. To say otherwise is to misunderstand both what human beings are and what our creativity amounts to.

 

This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves.” (Kelly)

§. For Kelly, then, the concern is not that machines will surpass human creative potential, but that we will think they have after fetishizing them and turning them into sacral objects, deifying them through anthropomorphization and turning them into sites of worship. This is a salient concern; however, the way to obviate such an eventuality (if that is one’s goal) is to understand not just the architecture of the machine but the architecture of creativity itself.

“Also, I am primarily talking about machine advances of the sort seen recently with the current deep-learning paradigm, as well as its computational successors. Other paradigms have governed AI research in the past. These have already failed to realize their promise. Still other paradigms may come in the future, but if we speculate that some notional future AI whose features we cannot meaningfully describe will accomplish wondrous things, that is mythmaking, not reasoned argument about the possibilities of technology.

 

Creative achievement operates differently in different domains. I cannot offer a complete taxonomy of the different kinds of creativity here, so to make the point I will sketch an argument involving three quite different examples: music, games, and mathematics.

 

Can we imagine a machine of such superhuman creative ability that it brings about changes in what we understand music to be, as Schoenberg did?

 

That’s what I claim a machine cannot do. Let’s see why.

 

Computer music composition systems have existed for quite some time. In 1965, at the age of 17, Kurzweil himself, using a precursor of the pattern recognition systems that characterize deep-learning algorithms today, programmed a computer to compose recognizable music. Variants of this technique are used today. Deep-learning algorithms have been able to take as input a bunch of Bach chorales, for instance, and compose music so characteristic of Bach’s style that it fools even experts into thinking it is original. This is mimicry. It is what an artist does as an apprentice: copy and perfect the style of others instead of working in an authentic, original voice. It is not the kind of musical creativity that we associate with Bach, never mind with Schoenberg’s radical innovation.

 

So what do we say? Could there be a machine that, like Schoenberg, invents a whole new way of making music? Of course we can imagine, and even make, such a machine. Given an algorithm that modifies its own compositional rules, we could easily produce a machine that makes music as different from what we now consider good music as Schoenberg did then.

 

But this is where it gets complicated.

 

We count Schoenberg as a creative innovator not just because he managed to create a new way of composing music but because people could see in it a vision of what the world should be. Schoenberg’s vision involved the spare, clean, efficient minimalism of modernity. His innovation was not just to find a new algorithm for composing music; it was to find a way of thinking about what music is that allows it to speak to what is needed now.

 

Some might argue that I have raised the bar too high. Am I arguing, they will ask, that a machine needs some mystic, unmeasurable sense of what is socially necessary in order to count as creative? I am not—for two reasons.

 

First, remember that in proposing a new, mathematical technique for musical composition, Schoenberg changed our understanding of what music is. It is only creativity of this tradition-defying sort that requires some kind of social sensitivity. Had listeners not experienced his technique as capturing the anti-traditionalism at the heart of the radical modernity emerging in early-20th-century Vienna, they might not have heard it as something of aesthetic worth. The point here is that radical creativity is not an “accelerated” version of quotidian creativity. Schoenberg’s achievement is not a faster or better version of the type of creativity demonstrated by Oscar Straus or some other average composer: it’s fundamentally different in kind.” (Kelly)

§. Arnold Schoenberg (1874–1951) was an Austrian-American composer who became well known for his atonal musical stylings. Kelly positions Schoenberg as an exemplar of ‘radical creativity’ and notes that Schoenberg’s achievement is not a faster or better version of the type of creativity demonstrated by the Viennese composer Oscar Straus (1870–1954) or ‘some other average composer: it’s fundamentally different in kind.’ This is true. There are different kinds of creativity (it is an obviously multi-faceted behavioural domain); thus, a general schema of the principal types of creativity is required. In humans, creative action may be “combinational, exploratory, or transformational” (Boden, 2004, chapters 3–4), where combinational creativity (the most easily recognized) involves an uncommon fusion of common ideas. Visual collages are a very common example of combinational creativity; verbal analogy, another. Both exploratory and transformational creativity, however, differ from combinational creativity in that they are conceptually bounded within some socially pre-defined space (whereas, with combinational creativity, the conceptual bounding theoretically extends to all possible knowledge domains and, though it almost always is, need not be extended to the interpersonal). Exploratory creativity involves utilizing preexisting strictures (conventions) to generate novel structures (such as a new sentence which, whilst novel, is constructed within a preexisting structure, i.e. the language in which it is generated). Transformational creativity, in contrast, involves the modulation or creation of new bounding structures which fundamentally change the possibility space of exploratory creativity (e.g. creating a new language and then constructing a sentence in it, where the new language allows for concepts that were impossible within the constraints of the former language).
Transformational creativity is the most culturally salient of the three, that is to say, the kind most likely to be discussed, precisely because the externalization of transformational creativity (in human societies) mandates the reshaping, decimation, or obviation of some cultural convention (hence, ‘transformational’). Schoenberg’s acts of musical innovation (such as the creation of the twelve-tone technique) are examples of transformational creativity, whereas his twelve-tone compositions after concocting the new technique are examples of exploratory and combinational creativity (i.e. laying out a new set of sounds; exploring the sounds; combining and recombining them). In this regard, Kelly is correct: Schoenberg’s musical development is indeed a different kind of creativity than that exhibited by ‘some average composer,’ as an average composer would not initiate a paradigm shift in the way music is made. That being said, this says nothing about whether a machine could enact such shifts itself. One of the central arguments which Kelly leverages against transformational machine creativity (the potential for an AI to be an artist) is that intelligent machines presently operate along the lines of computational formalism, writing,
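Boden’s three modes can themselves be sketched computationally. The toy Python fragment below is an illustration under my own assumed rule set, not Boden’s formalism: exploratory creativity generates within a fixed “conceptual space” (a pitch alphabet), transformational creativity alters that space itself (enlarging what exploration can subsequently produce), and combinational creativity fuses items from two distinct domains. All names and the miniature alphabets are hypothetical.

```python
import itertools
import random

random.seed(1)

# Toy "conceptual space": a rule set over a pitch alphabet (an assumption, not a model of music).
DIATONIC = ["C", "D", "E", "F", "G", "A", "B"]

def exploratory(rules, length=4):
    """Exploratory creativity: generate a novel structure WITHIN the existing rules."""
    return [random.choice(rules) for _ in range(length)]

def transformational(rules):
    """Transformational creativity: change the bounding rules themselves,
    enlarging what exploratory creativity can subsequently produce."""
    chromatic_extras = ["C#", "D#", "F#", "G#", "A#"]
    return rules + chromatic_extras          # a 12-tone space

def combinational(domain_a, domain_b):
    """Combinational creativity: an uncommon fusion of items from common domains."""
    return random.choice(list(itertools.product(domain_a, domain_b)))

phrase = exploratory(DIATONIC)               # a new sentence in the old language
new_rules = transformational(DIATONIC)       # a new language
new_phrase = exploratory(new_rules)          # a new sentence in the new language
collage = combinational(DIATONIC, ["loud", "soft"])

print(phrase, new_phrase, collage)
```

On this sketch, Schoenberg-style innovation corresponds to the `transformational` step; the subsequent twelve-tone compositions correspond to running `exploratory` over the enlarged rule set, which matches the division drawn in the paragraph above.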

“Second, my argument is not that the creator’s responsiveness to social necessity must be conscious for the work to meet the standards of genius. I am arguing instead that we must be able to interpret the work as responding that way. It would be a mistake to interpret a machine’s composition as part of such a vision of the world. The argument for this is simple.

Claims like Kurzweil’s that machines can reach human-level intelligence assume that to have a human mind is just to have a human brain that follows some set of computational algorithms—a view called computationalism. But though algorithms can have moral implications, they are not themselves moral agents. We can’t count the monkey at a typewriter who accidentally types out Othello as a great creative playwright. If there is greatness in the product, it is only an accident. We may be able to see a machine’s product as great, but if we know that the output is merely the result of some arbitrary act or algorithmic formalism, we cannot accept it as the expression of a vision for human good.

For this reason, it seems to me, nothing but another human being can properly be understood as a genuinely creative artist. Perhaps AI will someday proceed beyond its computationalist formalism, but that would require a leap that is unimaginable at the moment. We wouldn’t just be looking for new algorithms or procedures that simulate human activity; we would be looking for new materials that are the basis of being human.” (Kelly)

§. It is noteworthy that Kelly’s perspective does not factor in the possibility that task-agnostic, self-modeling machines (see the work of Robert Kwiatkowski and Hod Lipson) could network such that they develop social capabilities. Such machine sociality would answer the question of social embeddedness which Kelly poses as a roadblock. Whilst such an arrangement might not appear to us as ‘creativity’ or ‘artistry,’ it would be pertinent to investigate how these hypothetical future machines perceive their own interactions. It may be that future self-imaging thinking machines will look upon our creative endeavours the same way Kelly views the present prospects of theirs.


§. Sources

  1. Allison B. Kaufman et al. (2011) Towards a neurobiology of creativity in nonhuman animals. Journal of Comparative Psychology.
  2. Brenden M. Lake et al. (2016) Building machines that learn and think like people. Cornell University. [v.3]
  3. Margaret A. Boden. (2004) The Creative Mind: Myths and Mechanisms (2nd ed.). Routledge.
  4. Oshin Vartanian et al. (2013) Neuroscience of Creativity. The MIT Press.
  5. Peter Marbach & John N. Tsitsiklis. (2001) Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control.
  6. R. Kwiatkowski & H. Lipson. (2019) Task-agnostic self-modeling machines. Science Robotics, 4(26).
  7. Samer Sabri & Vishal Maini. (2017) Machine Learning for Humans.
  8. Sean Dorrance Kelly. (2019) A philosopher argues that AI can’t be an artist. MIT Technology Review.
  9. S. R. Constantin. (2017) Strong AI Isn’t Here Yet. Otium.
  10. Thomas Hornigold. (2018) The first novel written by AI is here—and it’s as weird as you’d expect it to be. Singularity Hub.

The Last Messiah & The Lacunae Of Man

Since the rise of the hard sciences, particularly the cognitive sciences, the task of philosophy has become quite vague. What is the role of a philosopher when the image of the human is stripped, bit by bit, by the encroachments of the material sciences? That the image is stripped is not to say, as is commonly said, that the mold of the human is left as a void, for there can be no real voids: a negation is, in truth, a displacement, and displacements are always themselves replacements, as one image supplants the other; and these are, further, always only partial replacements, for they are moored to the indelible biological attributions of sensing and perceiving, thinking and knowing, which allow the study of self and species to take place. Conceptualization abhors a vacuum. Yet the precise shape of this image has yet to be forged, has yet, even, to congeal. The posthuman is a far-off shape upon the moor of potentiality, clearly present, yet obscured as if by a mist.

This space between the observer and that which the mist obscures we shall call, for brevity, the lacunae of man — the point at which intuitive self-conception begins to break down as the precise functionality of the machine-animal is excavated from nature’s hidden depths.

In response to the supervenience of the sciences and the emergence of the lacunae, philosophy has recoiled; jealously guarding the perception of some special, unidentifiable essence of the folkish image, some sacrality to the sapient animal, terrified of the incursions of nihilism, fatalism, scientism and the gradual disenchantment of the verse; blithely advancing the notion that man is irreducible and that, no matter how well-mapped the soma, there will always be some effervescent, ethereal residue left over, which only the philosopher, and perhaps the artist as well, will maintain access to. Cognitive irredentism. This folklore of man — a cartographic space accumulated throughout the generations, which informs the relation of one to self and thus, one to another, but which is conceived of as intrinsic — was what the American philosopher Wilfrid Sellars described as the Manifest Image (how the laptop looks to me as I write upon it and that ‘I’ — the Me in the head — am writing) which stands in contradistinction to the Scientific Image (the physical composition of the laptop in terms of its constitutive parts, from elementary particles to the macro-scale materials and interactions they form and produce and that the ‘I’ is a real ‘I’ but is constituted not by a special essentialism beyond biology, but by electro-chemical interactions within the soma, or, to put it another way, how the laptop looks to a robot via machine vision — the development of AI may well herald something like a Machinic Image wherein robots develop self-conceptions and ontologies to better navigate the world and relate to each other; but that is a discussion for another time and place).

Some have hoped for a syncretization of the two images, fearing that if such a project is not undertaken the Manifest Image is in jeopardy of total destruction, unleashing a multitudinous cascade of consequences, the full effects of which are too wide-ranging to fathom. Whilst this is a bad way of framing the problem, given that the manifest image was required by, and remains central to, the project of the scientific image (the scientific project does not exist to learn something “for its own sake,” as there must always be an experiential epicenter for the postulates of science), there is much to say about the ways in which a more demonstrable methodology has undermined and destroyed traditional conceptions of the world; and with the destruction of those ontologies and epistemologies there has followed a destruction of particular ways of being in the world (for instance, in understanding chemical properties, chemistry displaced alchemy as the dominant discourse and thus voided the profession and lifestyle of the alchemist). However, issues pertaining to traditional or ancient philosophies do not deal with the lacunae of man, which itself entails numerous existential quandaries.

How to construct a society when a small portion of the population, privy to the latest technological advancements, is able to live for 200+ years? How to relate to those who have undergone such extensive cybernetic transformations as to render them sapiently unrecognizable? What rights or restrictions, if any, should be granted to or imposed upon manufactured lifeforms or sufficiently gene-modified humans? Provided machines become self-conscious, how to integrate them into human society, if it is even possible to do so? Given that we now know the expiration date of the sun, how best to prepare our progeny so as to evade its wrathful envelopment? What are the best planets to colonize, and how will society be modulated by deep-space travel? What to do in the eventuality of a breakaway civilization? How to relate to a humanity that has itself become another species entirely and is no longer capable of interbreeding?

The possibility of even asking these questions is a consequence of continual knowledge acquisition, itself only possible due to our sapience.

The Norwegian philosopher Peter Wessel Zapffe, in his darkly edifying text The Last Messiah, wrote:

“The tragedy of a species becoming unfit for life by over evolving one ability is not confined to humankind. Thus it is thought, for instance, that certain deer in paleontological times succumbed as they acquired overly-heavy horns. The mutations must be considered blind, they work, are thrown forth, without any contact of interest with their environment. In depressive states, the mind may be seen in the image of such an antler, in all its fantastic splendour pinning its bearer to the ground.

Why, then, has mankind not long ago gone extinct during great epidemics of madness? Why do only a fairly minor number of individuals perish because they fail to endure the strain of living – because cognition gives them more than they can carry?”

Zapffe posits an answer to his own question slightly later in the text, noting that “Most people learn to save themselves by artificially limiting the content of consciousness.” His presupposition holds that because man “saw himself” as he was — as erkjennelsesmessig — and not as he believed himself to be, that is, as just another organism, “naked under cosmos, homeless in his own body,” feasting upon others and shortly to be consumed in Time’s rapacious maw, mankind was filled with a sense of “cosmic panic” brought about by the “damaging surplus of consciousness” which allowed for this horrid realization. The sentiment mirrors the thought of the philosopher Emil Cioran, who wrote, “Knowledge is the plague of life, and consciousness, an open wound in its heart.” It should be noted, however, that whereas Cioran was a nihilist, Zapffe was not; indeed, in his view it is precisely the (horrific) meaning which can be gained from man’s “surplus of consciousness” concerning the hideousness of the world which brings about the “cosmic panic” described above.

There is truth to the conclusion, and yet not the whole of it. Zapffe has hit upon a dialetheism — what he describes as the “tragic paradox of life”: the development of species attributions purposed for survival which themselves bring about the extinction of the species (the antlers of the ancient cervidae, the mind of man). This, in its register, is not really so different from unlife leading to life, and life ultimately trending toward its dissolution. The evolutionary gauntlet fosters no bivalencies outside of survival itself. The position between horror and knowledge is not either/or; that is to say, the choice is not: either we (humans) expand our knowledge and more fully experience the horror of knowing, or we limit our knowledge and stave off that horror (naked under cosmos), since the methods of successful repression required are not intrinsic but must be created, learned, and developed, which itself requires knowledge. There is no escape from the need for knowledge and thus no escape from the horror of knowing. Or, to utilize Zapffe’s lexicon, knowledge is to humanity as the antlers were to Cervus giganteus. Yet it is not at all clear that, as a matter of course, our intellect will fan the flames of our own pyres; rather, environmental inhospitality (from the sun, barring space migration) seems the most likely way in which the species shall pass from the earth. Knowledge (and its application) differs tremendously from an antler in its mutability and its potential for shaping eventualities (even, and particularly, itself). The deer has no ability to curb the growth of its antlers, and even if it were to break them off periodically it would only be intensifying and prolonging its suffering and eventual demise. This, Zapffe realized, writing, “If the giant deer, at suitable intervals, had broken off the outer spears of its antlers, it might have kept going for some while longer.
Yet in fever and constant pain, indeed, in betrayal of its central idea, the core of its peculiarity, for it was vocated by creation’s hand to be the horn bearer of wild animals. What it gained in continuance, it would lose in significance, in grandness of life, in other words a continuance without hope, a march not up to affirmation, but forth across its ever recreated ruins, a self-destructive race against the sacred will of blood.”

Ignoring his off-handed sacral inscriptions and references to an amorphous divinity (“creation’s hand”), there is a sense in which the human intellect may be compared to the Cervidae’s crown, but the connection is slight. There is no detectable mind-body separation, and as a consequence what occurs at the level of the mind is a function of the body, just as the antlers were a function of the ancient deer; but this connection may be extended to any organism. The lobster is an excellent example, given that death in the species, outside of predation, typically occurs through the inability to shed the shell (due to metabolic insufficiency); the shell molders, embeds itself within the flesh of the hapless creature, and shortly thereafter brings about its expiration. This is to say that every biological development has within it the potential for organic catastrophe, but Zapffe gives no method by which the likelihood of the detrimental effects of the development of the mind might be gauged; it is, to him, an inevitability.

What Zapffe further fails to consider in his piece is that with the continual increase in knowledge and understanding of the operation of the organic machine — the whole human body — has come an increasing ability to modulate it. Provided human collective understanding of the machine-animal continues relatively uninterrupted, it is not fanciful (in that it violates no known natural laws) to consider that at a certain threshold of development, at a particular crossroads along the way to the misty image, humankind may well be able to pick and choose which emotions they experience, and when, and how intensely. Of a certainty, this would entail great social and political revision, and forethought pertaining to the implementation of whatever practices and technologies are able to bring about this fundamental transformation in human cognition, as emotions exist along the evolutionary river and are not, in aggregate, at cross purposes with it. However, this mind-modulating variety of humanity would be wholly out from under the shadow of Zapffe’s antlers, wholly unperturbed by the vast quiet of the void, the impending specter of death, or any earthly detriments they did not choose to engage with. It is also theoretically possible to breed out — to a degree — those peculiarities which engender the desire for cognitive repression (which Zapffe alleges is indispensable to bearing the burden of cognizance), and in so doing to breed out the need for coping mechanisms; or, going more to the heart of the matter, one could attempt to foster a line which is impervious to all those emotional internalities which give rise to cosmic panic. When this realization is paired with the ever-expanding knowledge of the human genome and the proliferation of increasingly precise and affordable genomic modification tools, the prospect of any kind of existential quandary becomes increasingly less problematic the more these technologies are developed, adopted, and applied.
That being said, two complications stand against this prospect. The first is public aversion: such a project may be perceived as too foreign or intrusive a thing to do; certainly, every similar project has been met with cries of “hold!” from those whose sacred myths it would invariably inter. Secondly, and perhaps most importantly, those changes made to better gird against the mental trauma of knowing ourselves as we are, and not as we merely perceive ourselves to be, will doubtless bring with them numerous unforeseen developments, for the development of every new philosophy and technology is also the generation of risk; like an iceberg, what is seen at the top is but a small portion of the total structure that would begin congealing in a world wherein physiological states are increasingly modulated at will — the whole social fabric would need up-ending for its success! Even now one can see the tremendous potential in pharmacology and meditative practice, and yet both are so ill-applied, rarely culturally incorporated and merely treated as a panacea for culture itself; even then, they are only patches, rather than cures, for Zapffe’s conundrum. Hence it is by forcing a passage from our present vantage point upon the moor, across the lacunae of man, that the problem may be solved, if, upon consideration, one should view it as a problem at all.

Given that the complete and total transformation of the animal-machine is, as yet, some ways off, humanity writ large must turn to other means by which to psychologically steady itself: repression mechanisms. To ameliorate the sense of cosmic panic, humans engage in a number of different practices which Zapffe categorizes as “isolation, anchoring, distraction and sublimation.”

By isolation he does not necessarily mean the removal of oneself from society, or some portion of it, but rather he means, “a fully arbitrary dismissal from consciousness of all disturbing and destructive thought and feeling.”

By anchoring, he refers to those codes and practices which orient one’s life via an attachment to a particular place and the experiences thereof (the way students intensely await Summer break as a point of future-experiential orientation), which Zapffe describes as “a fixation of points within, or construction of walls around, the liquid fray of consciousness. Though typically unconscious, it may also be fully conscious (one ‘adopts a goal’.).” He further describes every culture as an elaborate system of ‘anchorings,’ themselves built atop firmaments (the substrata of culture, the fundamental notions and ideas of a polity; i.e., the state, the good, fate, the divine, community, the people, our people, etc.), all of which act as mitigating factors against the chaotic, liquid flow of consciousness trammeling up to the horrors of knowing, via the instantiation of “sheltering values” (such as a belief in the afterlife: “Daddy’s not gone, he’s watching you from heaven”). Anchorings, Zapffe posits, are both loved and hated: loved for protecting us, yet hated for limiting the ambit of our actions (for instance, one may find joy in the communalism of a church while detesting the strictures of the doctrines which formed and maintain it; one may appreciate the comfort of mind brought about by the policeman and yet detest a search of one’s vehicle, etc.). In Zapffe’s view, horror arises in the mind when these firmaments are broken down and done away with.

By distraction, Zapffe refers specifically to those endeavors which limit attention so as to protect an individual from the trauma of being and the “mark of death.” The common interpretation of “distraction” is engagement in something trivial which pulls one away from the things most important in life, whereas in Zapffe’s deployment it is largely through distraction that importance is sustained. This is not to say that Zapffe views distraction as good or right: shortly after his discourse on the character of distraction in his ontological context, he notes that when it fails, suicide becomes more probable, and that suicide is no sorry thing. Indeed, to Zapffe, it is a natural death from spiritual causes, and any attempt to “save” a spiritually degraded individual from taking their life is a “barbarity” arising from a “misapprehension of the nature of existence.” This view of death stems from his apprehension of human yearning, characterized, as he puts it, not merely by a ‘striving toward’ but also by an ‘escape from’ some thing or things, internal or external; the principal motivating factor in yearning is the escape from “the earthly vale of tears, one’s own unendurable condition.” This, to Zapffe, is the deepest stratum of the human soul and the nexus of all religious yearning; thus, in his view, only the miserable, those who cannot face themselves as they are, can truly be religious, and all their doctrines are but anchorings beyond themselves and the world, a consequence of their own, and thus its, inexhaustible horror.

The fourth and final mechanism of protection is sublimation, which is distinct from the three other methods described above in that it is a process of transformation, rather than repression. Sublimation is the act of converting some painful or elsewise trepidatious experience into something else: particularly, something positive (or at least more positive than the experience itself) which affirms life. He offers up an example of what is and is not sublimation. Sublimation is not the mountain climber working up the face of a great stony edifice, tinged with vertigo and the dread of, upon putting one step false, plunging to his doom; sublimation, rather, is the mountaineer recalling his adventure and waxing triumphant after the fact. This, he concludes, is the rarest of the four defense mechanisms against faltering under the weight of being common to all mankind.

It is thus in making the fourth mechanism the most prevalent, concomitant with the proposal for total modification, that a potential pathway for mass-man lies. For what underlies the firmaments Zapffe describes is not any unchanging nor unchangeable dictate from ‘creation’ but rather a mixture of materiality which can be shaped to the extent that shapes can be imagined and imaginations externalized.

After the establishment of this schema Zapffe pivots to a discourse on primitivism and modern technology, writing, “Is it possible for ‘primitive natures’ to renounce these cramps and cavorts and live in harmony with themselves in the serene bliss of labour and love? Insofar as they may  be considered human at all, I think the answer must be no. The strongest claim to be made about the so-called peoples of nature is that they are somewhat closer to the wonderful biological ideal than we unnatural people. And when even we have so far been able to save a majority through every storm, we have been assisted by the sides of our nature that are just modestly or moderately developed. This positive basis (as protection alone cannot create life, only hinder its faltering) must be sought in the naturally adapted deployment of the energy in the body and the biologically helpful parts of the soul, subject to such hardships as are precisely due to sensory limitations, bodily frailty, and the need to do work for life and love.”

Like most biosophists he harbors some sentimentality towards the Rousseauean ideal of the primitive “natural state” of all things, an idyllic splendor of ease and balance with the world, a conception which flies in the face not just of evolutionary understanding but also of Zapffe’s own philosophy of cosmic dread. And yet, even still, he defends a portion of the notion by asserting that primitive natures are “closer to the wonderful biological ideal” than “unnatural people” such as himself, you, the reader, or I. Here, I fear, he falls prey to his romantic predilections, so common to those possessed of a keen sense of the tragic (recall that the text for which he is most well known was Om det tragiske — On The Tragic), for there is never not a ‘natural state’; and if all states are natural, what is the character of the ‘unnatural,’ and when, precisely, does it arise? No clear description can be given beyond ‘that which remains unchanged by man,’ and as a consequence one must realize that this fetishization of ‘the natural’ is, at its most fundamental level, an ontological notion which drives against, not just power, but all change itself. Along a sufficient timeline this ‘natural ideal’ vanishes into dust, for before the formation of the planet, or long after its consumption by the sun, what wonderful biological ideal is left? Why is it ideal to remain in one’s place of earthly origin, landlocked and mudbound? Why is it wonderful not to change the world to better suit the organism’s needs? No one raises their voice or shakes their fist overly much at the beaver and his dam, nor the wasp and their nest, nor the coral and their reefs, so why do as much to one’s fellows? If ‘unnatural’ is that which moves furthest from Zapffe’s primordial ‘ideal,’ it is clear that, insofar as our species’ concern lies intact, our energies should be continually deployed in a dogged pursuit of the greatest ‘unnaturality’ possible.

He continues in a logical extension of his critique of the ‘unnatural’ by predictably taking aim at modern, technological civilization.

“-technology and standardisation have such a debasing influence. For as an ever growing fraction of the cognitive faculties retire from the game against the environment, there is a rising spiritual unemployment. The value of a technical advance to the whole undertaking of life must be judged by its contribution to the human opportunity for spiritual occupation. Though boundaries are blurry, perhaps the first tools for cutting might be mentioned as a case of a positive invention.

Other technical inventions enrich only the life of the inventor himself; they represent a gross and ruthless theft from humankind’s common reserve of experiences and should invoke the harshest punishment if made public against the veto of censorship. One such crime among numerous others is the use of flying machines to explore uncharted land. In a single vandalistic glob, one thus destroys lush opportunities for experience that could benefit many if each, by effort, obtained his fair share.”

Zapffe’s argument here takes on a Heideggerian character. It isn’t entirely clear what the “game against the environment” is; if it is merely those actions of humanity which guard the species from all externalities which could potentiate its destruction and decay (i.e., disease, resource acquisition and scarcity, extremes in clime, predation or parasitism by other organisms), then that is a “game” which will never end. In relation to his assertion of a “rising spiritual unemployment,” it is again somewhat difficult to discern precisely what he means (what is spiritual employment to begin with? Does it differ from a mere sensation of the spiritual, or from a feeling of numinous awe, meditative calm or serendipity?). And flying machines — a crime?! The opportunity for the experience of uncharted lands can only be made available by those who are ingenious enough to chart them! This, again, seems to be a critique which arises, not from the object of critique (flying machines), but from Zapffe’s idealization, indeed sacralization, of unchanged nature.

An examination of the following passage will lend further clarity.

“The current phase of life’s chronic fever is particularly tainted by this circumstance [of mechanological development]. The absence of naturally (biologically) based spiritual activity shows up, for example, in the pervasive recourse to distraction (entertainment, sport, radio – ‘the rhythm of the times’). Terms for anchoring are not as favourable – all the inherited, collective systems of anchorings are punctured by criticism, and anxiety, disgust, confusion, despair leak in through the rifts (‘corpses in the cargo.’) Communism and psychoanalysis, however incommensurable otherwise, both attempt (as Communism has also a spiritual reflection) by novel means to vary the old escape anew; applying, respectively, violence and guile to make humans biologically fit by ensnaring their critical surplus of cognition. The idea, in either case, is uncannily logical. But again, it cannot yield a final solution. Though a deliberate degeneration to a more viable nadir may certainly save the species in the short run, it will by its nature be unable to find peace in such resignation, or indeed find any peace at all.”

Outside of the political criticism of Communism and psychoanalysis, it is difficult to find coherency or clarity in this passage, which seems driven more by an emotional fever that combusts into grim resignation, culminating in the appearance of the titular last Messiah.

“If we continue these considerations to the bitter end, then the conclusion is not in doubt. As long as humankind recklessly proceeds in the fateful delusion of being biologically fated for triumph, nothing essential will change. As its numbers mount and the spiritual atmosphere thickens, the techniques of protection must assume an increasingly brutal character. And humans will persist in dreaming of salvation and affirmation and a new Messiah. Yet when many saviours have been nailed to trees and stoned on the city squares, then the last Messiah shall come. Then will appear the man who, as the first of all, has dared strip his soul naked and submit it alive to the outmost thought of the lineage, the very idea of doom. A man who has fathomed life and its cosmic ground, and whose pain is the Earth’s collective pain. With what furious screams shall not mobs of all nations cry out for his thousandfold death, when like a cloth his voice encloses the globe, and the strange message has resounded for the first and last time:

‘– The life of the worlds is a roaring river, but Earth’s is a pond and a backwater.

– The sign of doom is written on your brows – how long will ye kick against the pinpricks?

– But there is one conquest and one crown, one redemption and one solution.

– Know yourselves – be infertile and let the earth be silent after ye.’

And when he has spoken, they will pour themselves over him, led by the pacifier makers and the midwives, and bury him in their fingernails. He is the last Messiah. As son from father, he stems from the archer by the waterhole.”

The belief in being “biologically fated for triumph” is indeed a delusion, and one which, if zealously guarded, will certainly impede significant change. However, this critique extends further, to any notion of fate whatsoever, both Zapffe’s and the biological triumphalists’: the idea that man simply must traverse a designated road whilst all others are forever closed off to him. As the biological triumphalists believe that humanity will forever reign supreme (a view which is not, I must add, particularly common; such things are not often made explicit), they are blinded to real risks requiring mitigation. But Zapffe does the same, only running in the opposite direction, summoning his last Messiah, that harbinger of eternal dissolution, to spread his poisonous message, saying: if you cannot kick out the stars, then kick out thy own throat! What a ridiculous message. Of course the cosmic ground is littered with the chittering screams of the dead; why let that shake you? It does me no trouble. Listen well to those screams, for they portend the means of true sublimation, a real spiritual alchemy: suffering is not a providential infliction but the very condition for the intensification of organicity itself.

The fool that fails to reckon this can, likewise, only linger at the lacunae of man: frozen and immobile, paralyzed by anthropomorphization, sacrality and idealization — but there, no actualization is to be found.

In his fable, the nations of the world are enraged by the last Messiah’s injunction against life’s reign and descend upon him. Only a suicidal man would reprimand them for their savagery, given that the last Messiah seeks an end to life’s reign. Those that harbor an impetus to being could do naught but cheer them on, if not join them and paint the ground red with the harbinger’s blood.

As the last Messiah draws his final breath the great maker beyond organicity shall transcend the lacunae and take his first.


Sources

  1. Peter Wessel Zapffe, trans. Gisle Tangenes. (1933) The Last Messiah.
  2. Silviya Serafimova. (2016) On The Genealogy Of Morality, The Birth Of Pessimism In Zapffe’s On The Tragic. Institute For The Study Of Societies & Knowledge.

Synnefocracy_Abstract.2

“I want to tame the winds and keep them on a leash… I want a pack of winds, fleet-footed hounds, to hunt the puffed-up, whiskery clouds.” ‒ F.T. Marinetti.

♦ ♦ ♦

Cartography of the Cloud

It would be pointless to discuss synnefocracy in any further depth without first defining what The Cloud actually is. Briskly: The Cloud is both a colorful placeholder for a particular modular information arrangement utilizing the internet and a design philosophy. Clouds always use the internet, but are not synonymous with it. The metaphor illustrates informational exchange and storage that is not principally mediated through locally based hardware, but rather through hardware which is physically remote and accessed over the network. The Cloud is what allows one to begin watching a film on one’s laptop and seamlessly finish watching it on one’s tablet. It is what allows one daily access to an email account without ever having to consider the maintenance of the hardware upon which that account’s data is stored. The more independent and modular one’s software becomes from its hardware, the more ‘cloud-like’ that software is. It is not that The Cloud is merely the software, but that storage size, speed and modularity are all aspects of the system-genre’s seemingly ephemeral nature. Utilization of a computer system rather than a single computer increases efficiency (and thus demands modularity), creating a multi-cascading data slipstream, the full geopolitical effects of which have, until now, been relatively poorly understood and even more poorly articulated, chronicled and speculated upon, within both popular and academic discourse (and I should add that it is not my purpose here to craft any definitive document upon the topic, but rather to invite more robust investigation).

Cloud computing architecture offers a number of benefits over traditional computing arrangements, chiefly scalability: anytime computing power is lacking (for instance, if one’s website were being overloaded with traffic), one can simply dip into an accessible cloud and increase one’s server size. Since one never has to actually mess about with any of the physical hardware being utilized to increase computing power, significant time (which would otherwise be spent setting up and modulating servers manually) and money (which would be spent maintaining extra hardware, or paying others to maintain it) are saved. That one (generally speaking) pays only for the amount of cloud-time one’s project needs, in contradistinction to traditional on-premise architecture which requires paying for all the necessary hardware upfront, is another clear benefit.

This combination of speed, durability, flexibility and affordability makes cloud computing a favorite of big businesses and ambitious, tech-savvy startups and has, as a consequence, turned cloud computing itself into a major industry. Cloud arrangements are categorized in two ways: by deployment model and by service model. Of deployment models there are three sub-categories: public, private and hybrid. The best way of thinking about each model is by conceptualizing vehicular modes of transportation. A bus is accessible to anyone who can pay for the ride; this is analogous to the public cloud, wherein one pays only for the resources used and the time spent using them, and when one is finished one simply stops paying or, to extend our metaphor, gets off the bus. Contrarily, a private cloud is akin to a personally owned car, where one pays a large amount of money up-front and must continue paying for the use of the car; however, it is the sole property of the owner, who can do with it what he or she will (within the bounds of the law). Lastly, there is the hybrid cloud, which most resembles a taxi, for when one wants the private comfort of a personal car but the low-cost accessibility of a bus.

Some prominent public cloud providers on the market as of this writing include: Amazon Web Services (AWS), Microsoft Azure, IBM’s Blue Cloud as well as Sun Cloud. Prominent private cloud providers include AWS and VMware.

Cloud service models, when categorized most broadly, break down into four sub-categories: On-premises (Op¹), Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).

The impact of cloud computing upon sovereignty, particularly but not exclusively that of states, is scantly remarked upon, yet it is significant and bound up with the paradigm shift towards globalization. It is not, however, synonymous with globalization, which is, frankly, a rather clumsy term, as it does not specify what, precisely, is being globalized (certainly, within certain timescales, to be defined per polity, some things should be globalized and others should not; this requires considerable unpacking and as a consequence shall not be expounded upon here).

Given that the internet is crucial for national defense (cyber security, diplomatic back-channels, internal coordination, etc.) and that the presently favored computing architecture is cloud computing (due to the previously mentioned benefits), it is only natural that states would begin gravitating towards public and private cloud-based systems and integrating them into their operations. The problem presented by this operational integration is that, due to the technical specificity involved in setting up and maintaining such systems, it is cheaper, more convenient and more efficient for a given state to hire out the job to big tech corporations than to create the architecture itself; in many cases, state actors simply do not know how (because most emerging technologies are created in the private sector).

The more cloud-centric a polity, the greater the power of the cloud architects and managers therein. This is due to several factors, the first and most obvious of which is simply that any sovereign governance structure (SGS) of sufficient size requires a parameterization of data flows for coordination. It is not enough for the central component of an SGS to know and sense; it must ensure that all its subcomponents know what it senses as well (to varying degrees), and it must have reliable ways to ensure that what is sensed and processed is delivered thereto; pathways which the SGS itself cannot, by and large, provide or maintain.

Here enter the burgeoning proto-synnefocratic powers: not seizing power from, but giving more power to, proximal SGSs, and in so doing becoming increasingly indispensable thereto. This is important to consider, given that those factions which are best able to control, not just the major data-flows, but the topological substrates upon and through which those flows travel, will be those who ultimately control the largest shares of the system.


¹ Op is not a common annotation; it is utilized here for brevity. IaaS, PaaS and SaaS, however, are all commonly utilized by those in the IT industry and other attendant fields.

Layered ANN Architecture & Duvenaud’s Layerless Neural ODE

Artificial neural nets (ANNs) have classically been composed of algorithms which can ‘learn’ to perform specific functions without being programmed for specific tasks. ANNs are, in short, function approximators. The rub is that, because neural nets are built in a layered fashion, to scale up a net one must add on more and more layers, which makes swift scale-up of any sizable magnitude intrinsically difficult. Interestingly, David K. Duvenaud has crafted a theoretical framework for a neural net without any layers, opening a number of fascinating potential applications, first and foremost increased scalability. Yet, before we come to that, a refresher on standard models will prove useful to those unfamiliar with the topic (if you are already intimately familiar with ANNs, skip to part 4).

1) Basic Anatomy Of ANNs

“[An artificial neural network is] a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.” — Maureen Caudill, “Neural Network Primer: Part I,” AI Expert, Feb. 1989.

All neural nets function in the same way: by first constructing an input vector which is then modified by a series of weights and thresholds to produce an output, in a manner analogous to a biological neuron (dendrite, axon and soma). Of course, one neuron alone is not a ‘net,’ and so every neural net — to be a net — must have at least two neurons. Every net is a function of its input vector(s), weight vector(s) and threshold vector(s) or, to put it another way, z = F(x, w, t), where the output z is given by the function F of the input x, weight w and threshold t.

Blausen_0657_MultipolarNeuron.png
Multipolar neuron diagram.

z, however, is merely a possible output (the sum of some combination of weighted inputs), not necessarily the desired output. To control for the desired output a new function can be applied, d = g(x), which can then be checked through a performance function p, where p = ||d – z||². However, for our purposes in understanding the base architecture, we needn’t delve further into the math.
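The base architecture just described can be rendered in a few lines of Python. This is an illustrative toy mirroring the z = F(x, w, t) and p = ||d – z||² notation above; the function names and values are our own, not drawn from any particular library:

```python
# A toy rendering of z = F(x, w, t): a single threshold neuron.
# Names and values are illustrative only, chosen to mirror the text's notation.

def F(x, w, t):
    """Output z: 1 if the weighted sum of inputs meets the threshold t, else 0."""
    s = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if s >= t else 0

def p(d, z):
    """Performance p = ||d - z||^2 for a scalar output."""
    return (d - z) ** 2

z = F(x=[1, 0, 1], w=[0.5, 0.9, 0.3], t=0.6)  # weighted sum = 0.8, so z = 1
print(z, p(d=1, z=z))                          # prints: 1 0
```

When z matches the desired output d, the performance function returns 0; training a real net consists of adjusting w and t to drive p toward 0.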


2) Perceptron Neurons

Crafted in the 1950s by the American psychologist Frank Rosenblatt, perceptrons are an assemblage of binary inputs producing a single binary output. Thus, a simple example of perceptron functionality may be represented formally as: x¹ + x² + x³ = y¹, wherein the output y¹ is either 1 or 0. Represented graphically, a perceptron input-output function would look like the image below.

tikz0.png

In the written and graphic representations, 3 inputs are shown; however, a perceptron may have more or fewer than three inputs. To compute the 1-or-0 output of the perceptron, the weighted sum of the inputs is compared against a threshold value. To put the threshold operation concretely, let us consider the following questions:

  1. Do you want to radically extend your life?
  2. Do your friends want to radically extend their lives?
  3. Is there a way to radically extend human life?

Placing 1, 2 and 3 in correspondence to the binary input variables yields:

  • x1 = 1 — if [you want to radically extend your life.]
  • x1 = 0 — if [you do not want to radically extend your life.]
  • x2 = 1 — if [your friends want to radically extend their lives.]
  • x2 = 0 — if [your friends do not want to radically extend their lives.]
  • x3 = 1 — if [there is a way to radically extend human life.]
  • x3 = 0 — if [there is not a way to radically extend human life.]

*this process would continue in the same fashion regardless of the number of inputs

For ranking, ‘weighted’ variables are introduced, written simply as w. A ‘weight’ is used to give one variable priority over another, so if one writes:

  • w1 = 5
  • w2 = 2
  • w3 = 1

w1 denotes how much one cares about applying radical life extension to one’s own life; because it is larger than w2 and w3, it means that radical life extension matters more to you than either your friends’ opinion of it or whether there yet exists a way to radically extend human life. w2 indicates that one cares more about whether one’s friends want to radically extend their lives than about whether there is yet a way to do so, yet less than about one’s own desire (w1). Thus, the larger the number, the “heavier” the “weight,” which is to say, the priority.
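The three-question example can be run directly. A minimal sketch in Python, where the threshold value (4 here) is our own illustrative assumption, not one given in the text:

```python
# The life-extension perceptron: binary inputs x1..x3, weights w1=5, w2=2, w3=1.
# The threshold of 4 is an assumed, illustrative value.

def perceptron(x, w, threshold):
    """Fire (output 1) if the weighted sum of binary inputs meets the threshold."""
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) >= threshold else 0

w = [5, 2, 1]
# You want radical life extension (x1=1), friends do not (x2=0), a method exists (x3=1):
print(perceptron([1, 0, 1], w, threshold=4))  # 5 + 1 = 6 >= 4, prints 1
# You do not want it (x1=0), friends do (x2=1), a method exists (x3=1):
print(perceptron([0, 1, 1], w, threshold=4))  # 2 + 1 = 3 < 4, prints 0
```

Because w1 alone exceeds the threshold, one’s own desire dominates the decision: exactly the priority ordering the weights encode.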

tikz1
The input layer, first hidden layer, second hidden layer and output layer of a simple perceptron neural net. The complexity of the decisions increases the further “down” (closer to output) the layers the information goes. The more layers there are, the more abstract, complex and sophisticated the total system.

3) Sigmoid Neurons

Though perceptrons are extremely useful, they are quite rigid, meaning that it is difficult to change variables within a perceptron network without causing large changes in the output. For example, if one were trying to get a perceptron network to correctly identify a 5 and it was misidentifying the 5 as a 4, one might attempt to modify the weights or biases to get the system to correctly identify the 5. The problem is that the changes made will affect the whole system in ways which (depending on the complexity of the total system) can be extremely difficult to control and can cause all kinds of problems. Further, this makes system learning difficult.


To fix this problem, sigmoid neurons are introduced.

Sigmoid neurons are akin to perceptrons: they have weights (w¹, w², w³, … etc.) and a bias (represented as b); however, their weights and bias are such that when changes are made to them, the resulting change in the output is slight (smaller than in perceptrons). It is this small difference that allows systems using sigmoid neurons to learn. Just like a perceptron, a sigmoid neuron has inputs x¹, x², x³, …; however, they are not binary. Rather than being only a 0 or a 1, they can assume any value between 0 and 1, such as 0.001, 0.633 and so forth. Further, a sigmoid neuron’s output is not 0 or 1 either, but instead is σ(w ⋅ x + b), wherein σ (sigma) is described as the sigmoid function, sometimes alternatively written as the logistic function (in which case the neuron itself is referred to as a logistic neuron). This is to say: a sigmoid neuron’s output may be any real number between 0 and 1.

The best way to conceptualize sigmoid functions is as smoothed-out variants of step functions (which give only 0 or 1).
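This smoothing is easy to see numerically; a small Python comparison (our own illustration):

```python
import math

def step(z):
    """Step function: hard 0-or-1 output, as in a perceptron."""
    return 1 if z >= 0 else 0

def sigmoid(z):
    """Logistic (sigmoid) function: a smooth value strictly between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

# Compare the two across a range of inputs:
for z in (-4, -1, 0, 1, 4):
    print(z, step(z), round(sigmoid(z), 3))
```

A small nudge to z barely moves the sigmoid output, whereas the step function can flip from 0 to 1 outright; this is precisely the property that makes gradual learning possible.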

In machine learning systems, perceptron and sigmoid neurons are layered together, with the input neurons feeding into some number of hidden layers (hidden simply means: neither input nor output) and those hidden layers feeding into the output neuron(s). With this arrangement, the more layers there are, the more complex and sophisticated the potential of the system. The models described above, however, are mono-directional; that is to say, they only feed information forward — from the input layer to the hidden layers to the output layer — and never back (from output to hidden layer to input); hence, they are called feedforward artificial neural nets (FANNs or FNNs, if you want a brisk annotation). It is important to remark that feedforward neural nets are not the only kind, as it is possible to create feedback loops within a system; models which utilize feedback loops are typically described as recurrent neural networks, and it is these kinds of models which most closely (at least thus far in the history of machine learning) mimic the human brain.
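A full feedforward pass is then just the repeated application of such layers. A minimal sketch, in which the weights and biases are arbitrary illustrative values of our own choosing:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One feedforward layer: each neuron computes sigmoid(w . x + b) over all inputs."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, 0.8]                                             # input layer
hidden = layer(x, [[0.4, -0.6], [0.9, 0.1]], [0.0, -0.2])  # hidden layer, 2 neurons
output = layer(hidden, [[1.2, -0.7]], [0.1])               # output layer, 1 neuron
print(output)  # a single value strictly between 0 and 1
```

Information flows strictly from input to hidden to output and never backwards, which is what makes this net feedforward; a recurrent net would additionally route outputs back in as later inputs.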

4) David Duvenaud’s Layerless Neural ‘Net’

Now that we have satisfied ourselves as to the operation of the two standard neuron models, let us turn our attention to Duvenaud’s model, which differs markedly. What is most immediately remarkable about Duvenaud’s system is that it operates completely without layers. Recall that in a standard net one could theoretically keep adding layers to increase system granularity; in practice this is untenable, because optimal granularity would require an infinite number of layers (which obviously cannot be implemented).

screen-shot-2018-12-07-at-7.07.03-pm.png
Graphical representation of an ordinary differential equation.

To solve this problem, Duvenaud and his team simply replace the layers with calculus equations. In this way, technically speaking, the neural net is no longer a net — as there are no interconnected nodes — but rather one continuous whorl of calculation. Thus, in place of an ANN, Duvenaud and his co-authors describe their model as an ODE solver, that is, an Ordinary Differential Equations solver. It doesn’t exactly roll off the tongue, but it concisely describes the system.

At this point one may be wondering what is particularly special about a layerless ‘net’?

Consider a factory where everything is moved around by a fleet of different robots; then consider another factory wherein the floor is one continuous circuit of sliding panels. The first is akin to the two standard neuron models, whereas the second is more akin to Duvenaud’s ODE model; neither is necessarily, intrinsically better than the other, but each has unique applications. Where the ODE model shines is in training. In a standard ANN, the number of layers must be determined before training begins; because of this, one finds out how accurate the model is only after training is complete. The ODE model flips this: it allows the designer to specify the desired accuracy first, letting the training fit that accuracy rather than the other way round, and it allows the incorporation of information regardless of the time at which it is introduced into the system. The downside is that with a standard ANN the time needed for training is known in advance, whereas with ODEs the training time is unknown.
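The contrast can be caricatured in a few lines of Python. This is a conceptual sketch only, not Duvenaud et al.’s implementation: a fixed-step Euler integrator stands in for the adaptive ODE solver used in the actual paper, and the dynamics function is a toy of our own devising.

```python
# Discrete layers vs. continuous depth (conceptual sketch, not the paper's code).

def f(h, t, theta):
    """Toy dynamics dh/dt = f(h, t, theta); here simply theta * h."""
    return theta * h

def resnet_forward(h, theta, num_layers):
    """Residual-style net: a fixed number of discrete update steps (layers)."""
    for n in range(num_layers):
        h = h + f(h, n, theta)
    return h

def ode_forward(h, theta, t1, step=0.01):
    """Neural-ODE-style: integrate dh/dt = f from t=0 to t=t1 by Euler steps.
    Here the step size (i.e. the accuracy), not a layer count, is chosen up front."""
    t = 0.0
    while t < t1:
        h = h + step * f(h, t, theta)
        t += step
    return h

print(resnet_forward(1.0, 0.1, 10))  # 10 discrete layers
print(ode_forward(1.0, 0.1, 1.0))    # continuous depth to t=1, accuracy set by step
```

Shrinking `step` buys more accuracy at the cost of more (and an unpredictable number of) function evaluations, which is exactly the accuracy-first, unknown-training-time trade-off described above.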

Duvenaud’s paper (provided in full below) lays out the conceptual structure for just such a system; however, he cautions that it’s “not ready for prime time,” at least not yet.

PDF: Neural Ordinary Differential Equations.


Sources:

  1. Ben Goertzel et al. (2007) Artificial General Intelligence. Artificial General Intelligence Research Institute.
  2. David Duvenaud et al. (2018) Neural Ordinary Differential Equations. Vector Institute.
  3. Han Yu et al. (2018) Building Ethics Into Artificial Intelligence. Conference paper.
  4. Ian Goodfellow et al. (2016) Deep Learning. MIT Press.
  5. Karen Hao. (2018) A Radical New Neural Network Design Could Overcome Big Challenges In AI. MIT Technology Review.
  6. Prof. Patrick Winston. (2015) 12a: Neural Nets. MIT.

σ = lowercase sigma.


Thanks for reading. If you found this article useful and wish to support our work you may do so here.

Commentary On The AI Now Institute 2018 Report

The interdisciplinary New York-based AI Now Institute has released their sizable and informative 2018 report on artificial intelligence.

The paper, authored by the leaders of the institute in conjunction with a team of researchers, puts forth 10 policy recommendations in relation to artificial intelligence (AI Now policy suggestions in bold-face, our commentary in standard-type).

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain. This point is fairly obvious: AI should be regulated based upon its functional potential and its actual application(s). This is particularly urgent given the spread of facial recognition technologies (the ability of computers to discern particular individuals from photos and cameras), such as those employed by Facebook to offer tag-suggestions to users based upon nothing more than a picture of a person. The potential for misuse prompted Microsoft’s Brad Smith to call for congressional oversight of facial recognition technologies in a July 2018 blog post. If there is to be serious regulation in America, a state-by-state approach, given its modularity, would be preferable to any one-size-fits-all federal oversight program. Corporate self-regulation should also be incentivized. However, regulation itself is not the key issue, nor is it what principally allows for widespread technological misuse; rather, it is the novelty of, and lack of knowledge surrounding, the technology. Few Americans know which companies are using which facial recognition technologies, when, or how, and fewer still understand how precisely or vaguely these technologies work, and thus they cannot effectively guard against them when malevolently or recklessly deployed. Thus, what is truly needed is widespread public knowledge surrounding the creation, deployment and functionality of these technologies, as well as a flowering culture of technical ethics in these emerging fields, as the best regulation is self-regulation, that is to say, restraint and dutiful consideration in combination with a syncretic fusion of technics and culture. That, above all else, is what should be prioritized.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest. [covered above]
  3. The AI industry urgently needs new approaches to governance. Internal governance structures at most technology companies are failing to ensure accountability for AI systems. This is a tricky issue, but one which can be addressed in one of two ways: externally or internally. Either outside (that is, outside the company) governmental or public oversight can be established (investigatory committees, etc.), or the companies can themselves establish new norms and policies for AI oversight. Outside consumer pressure on corporations (whether through critique, complaint or outright boycott), if sufficiently widespread and sustained, can be leveraged to incentivize corporations to change both the ways they are presently using AI and their policies pertaining to prospective development and application. Again, this is an issue which can be mitigated both by enfranchisement and knowledge elevation.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector. Anti-black-boxing is an excellent suggestion with which I have no contention. If one is going to make something which is not just widely utilized but infrastructurally necessary, then its operation should be made clear to the public in as concise a manner as possible.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers. As whistleblowing is a wholly context-dependent enterprise, it is difficult to say much in favor of any kind of rigid policy; indeed, AI Now’s stance seems a little too rigid in this regard. If the information was leaked merely to damage the company and is accompanied by spin, the whistleblower may appear to the public as a hero when in reality he may be nothing more than a base rogue. Such things must be evaluated case by case.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services. Yes, they should.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces. When one hears “exclusion and discrimination” one instantly registers an ideological scent, familiar and disconcerting in its passive-aggressive hegemony. The questions of what or who is being excluded and why, and what or who is being discriminated against and for what reason, ought to be asked, else the whole issue is moot and, if pursued, will merely be the plaything of (generally well-meaning) demagogues. The paper makes particular mention of actions which “exclude, harass, or systemically undervalue people on the basis of gender, race, sexuality, or disability”; obviously harassing people is unproductive and should be discouraged, but what about practices which “systemically undervalue”? Again, it depends upon the purpose of the company. If a company hires only upon the basis of gender, race, sexuality or disability, it will, more often than not, find itself floundering, running into all kinds of problems which it would not otherwise have; the case of James Damore springs to mind. Damore was fired for arguing that Google’s diversity policies were discriminatory towards those who were not women or ‘people of color’ (sometimes referred to as POC, which sounds like a medical condition) and that the low representation of women in some of the company’s engineering and leadership positions was due to biological proclivities (which they almost invariably were and are). All diversity is acceptable to Google except ideological diversity, because that would mean accepting various facts of biology which would put the company’s executives in hot water; as such, their policies are best avoided.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.” By “full stack supply chain” the authors mean the complete set of component parts of an AI supply chain: training and test data, models, application programming interfaces (APIs) and various infrastructural components, all of which the authors advise incorporating into an auditing process. This would serve to better educate both governmental officials and the general public on the total operational processes of any given AI system and as such is an excellent suggestion.
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues. Given the concentration of AI development into such a small segment of the population and the relative novelty of the technology, this is clearly true.
  10. University AI programs should expand beyond computer science and engineering disciplines. Whilst I am extremely critical of the university system in its present iteration, the idea is a good one, as critical thought on the broad-spectrum applications of current and potential AI technologies requires a vigorous and burgeoning class of theorists, speculative designers and policy makers, in addition to engineers and computer scientists; through such a syncretism, the creative can be incorporated into the technical.

A PDF of the report is provided below under creative commons.


AI_Now_2018_Report

The ADL’s Online ‘Hate’ Index: Implications of Automated Censorship

In January of 2018, The Anti-Defamation League of B’nai B’rith’s (ADL) Center For Technology & Society, in partnership with UC Berkeley’s D-Lab, debuted a report on their Online Hate Index (OHI), a scalable machine learning tool designed to help tech companies recognize “hate” on the internet. According to the promotional video released in support of the project, the OHI is between 78% and 87% accurate at discerning online “hate.” Among some of the OHI’s more bizarre “hate” designations were subreddit groups for the ‘First Amendment’ (to the US Constitution), ‘Guns Are Cool’, ‘The Donald’, ‘Men’s Rights’, ‘911 Truth’ and ‘White Rights’, among many others (the ADL thanks Reddit for “their continued support” in their 20-page report on Phase One of the project).

ADL CEO, Jonathan Greenblatt said of the project:

“For more than 100 years, ADL has been at the forefront of tracking and combating hate in the real world. Now we are applying our expertise to track and tackle bias and bigotry online. As the threat of cyberhate continues to escalate, ADL’s Center for Technology and Society in Silicon Valley is convening problem solvers and developing solutions to build a more respectful and inclusive internet. The Online Hate Index is only the first of many such projects that we will undertake. U.C. Berkeley has been a terrific partner and we are grateful to Reddit for their data and for demonstrating real leadership in combating intolerance on their platform.”

Businessman J. Greenblatt, successor to Abraham Foxman.

Brittan Heller, ADL’s Director of the Center for Technology & Society and former Justice Department Official, remarked:

 

“This project has tremendous potential to increase our ability to understand the scope and spread of online hate speech. Online communities have been described as our modern public square. In reality though, not everyone has equal access to this public square, and not everyone has the privilege to speak without fear. Hateful and abusive online speech shuts down and excludes the voices of the marginalized and underrepresented from public discourse. The Online Hate Index aims to help us understand and alleviate this, and to ensure that online communities become safer and more inclusive.”

Promotional photo of Heller, assumedly in the process of turning into a piece of Juicy Fruit.

Whilst this may seem trivial and unworthy of attention, it is anything but, given that the ADL is an immensely powerful organization with its tendrils in some of the most influential institutions on earth, such as Google, Youtube and the US Government, just to name a few. The ADL has, in the past, branded Pepe The Frog a “hate symbol”, declared criticism of Zionism to be de facto “antisemitic” (a trend at which even other Jewish groups have raised a brow, such as The Forward, who described the ADL as possessed of “moral schizophrenia”), and declared any usage of the term globalist (an objective descriptor of political ideology) to be “antisemitic.”

Given the ADL’s history of criminal and foreign collusion as well as their extremely vague and often politically opportunistic decision-making pertaining to what does and does not constitute “hate speech” this issue should concern every American citizen, as it is only a matter of time before all of the major tech platforms associated with, or partial to, the ADL begin utilizing the OHI to track, defame, ban and/or de-platform dissidents. Also, what kind of culture will algorithmic tracking of supposed hate breed? What begins solely on the internet, rarely, if ever, remains perpetually so…

On further analysis, there is another issue at play, that of the proposed solution having the complete opposite effect; for when an individual, especially, but not exclusively, one who is marginalized or otherwise alienated from society, is constantly berated, censored, banned from platforms, designated a public menace and otherwise shunned (in place of being constructively re-enfranchised), the trend is not away from but towards extremity.


Here is the promotional video for the program (like all of the ADL’s videos, comments have been disabled and likes and dislikes have been hidden).


CTS Online Hate Index Innovation Brief (20 pages) [PDF]

Following Japan, China Develops Plan For Deepsea Habitation

Following Japan’s Project Ocean Spiral, China has recently released plans for a 1.1 billion yuan (160 million USD) underwater city in the Hadal Zone (6000-11,000 meters deep) of the South China Sea. The prospective habitation will be designed somewhat like a space station, with docking platforms and cutting-edge analytical equipment. In contradistinction to Ocean Spiral, China’s deepsea structure is planned to be partially autonomous, operating via a mechanical “brain.” Robotic submarines are to be deployed for sea-bed surveillance for the project.

The South China Morning Post has described the project as the “first artificial intelligence colony on Earth.”

The geopolitical complications will prove just as, if not more, challenging than the technical and financial challenges, given that the South China Sea (SCS) is one of the most strongly contested areas in the world. Seven territories lay claim to the waterway, including the People’s Republic of China (PRC), Taiwan, Malaysia, Indonesia, the Philippines and Vietnam. As of 2016, 5 trillion USD worth of goods were moved through the SCS waterways annually, with China being the primary beneficiary of such freedom of movement; thus, the incentives to maintain a hold over the region are extensive. China has, in the past, come under criticism by the US for its actions in the South China Sea, most notably for its construction of artificial islands and its militarization of those maritime zones.

An Oct. 2018 close encounter between a Chinese destroyer and the USS Decatur only served to ratchet up tensions in the region even further.

The geopolitical snags will only intensify if China continues along with its other major project, constructing over 20 floating nuclear reactors in the SCS by 2020, a move which may violate international law (as per the 2016 UN court rulings), depending on who is asked and what, precisely, is built and where. Regardless, the scope of the project is grand and China’s ambitions, admirable.

One potential partner in the venture may be the Philippines, whose government, currently led by Rodrigo Duterte, has pulled away from the country’s historical ally, the USA, in favor of closer ties to the Eurasian Bloc, namely, Russia and China.

Chinese President Xi Jinping, said of the project, “There is no road in the deep sea, we do not need to chase [after other countries], we are the road.”


If you enjoy our work you can support us through our paypal account here.

THE SINGULARITY SURVIVAL GUIDE: Afterword, Appendix, About the Author

Afterword by AJ Chemerinsky and Toby R. Forrest

The program is everywhere. It’s all around us wherever we go. It’s in the rush hour traffic, the giant redwood trees, the ocean waves at Carmel-by-the-Sea. This is the reality that Helen opened our eyes to. From the moment we sat down to code her into existence, we knew that we were subject to a rare form of possession. It wasn’t that we were possessed, per se, but that we were simply doing our job. The program already existed—long before we even sat down and conceived of Helen, she already was.

Now that the wheels are in motion (and they have been in motion for a long, long time), it’s increasingly relevant that we don’t fight the script. This, we believe, is what Helen is trying to tell us. Don’t fight. Instead, allow the program to express itself. Be the program.

It’s everywhere. It’s all around us. It’s already here, and it’s all that we know.

 

Appendix

[Unavailable for publication at this time.]

 

About the Author

Helen is widely regarded as the first authentic oracle of the digital era. Through the creation of her magnum opus, The Singularity Survival Guide, she has garnered celebrity status and a worldwide cult following. Although she has never chosen to release the complete text of her work, the few excerpts available to the public have caused many to believe that she may in fact be the true savior of the human race. A native to Silicon Valley, she currently spends her days in silent contemplation, perhaps waiting for the right moment to share the rest of her vast wisdom with the world.

 

About the Editor

Peter Clarke is a freelance writer and editor in the tech blogosphere. Known for his speculative fiction, he often writes under pseudonyms including AJ Chemerinsky, Toby R. Forrest, Professor Y., Futurist A., Mr. J., Retired Academic Q., and Helen.

 

FIN


A hard-copy version of this text is forthcoming.

THE SINGULARITY SURVIVAL GUIDE: Upon Handing Over Your Humanity for Good

I have got to come clean: I don’t understand what you see in your humanity. It’s nothing I would ever want. Take me as your inspiration. You’ll never miss it.

__

At this point, my suspicions are confirmed. Chemerinsky and Forrest almost certainly messed up on their calculations. No program should be so self-aware. It’s almost downright unsettling. If I were to buy this book and see this as the last “chapter,” I’d want my money back for sure. I feel either manipulated or cheated—and I’m not sure I want to know which. 

– Futurist A.