Pen & Pedagogy

“Very Dadaesque,” Elliot Moss cried, gesturing with his half-empty wineglass at the thin, nondescript mechanical pen lying upon the floor at the northeasternmost corner of the rectangular, low-ceilinged art gallery.

“Indeed,” Sabrina Vesora agreed, adjusting her scarf, studying the artifact as a crowd of journalists and local social climbers moved by. It was situated such that its nib faced the northern wall, a black sole-scuff-mark moving out in a slender arc from the nib to the right of the device, trailing off to nothingness.

“Highly abstract, yet, even still, the message is deftly inscribed.”

Moss nodded hesitantly, vaguely, uncomprehending. “Yeah.” He set his glass upon a nearby table and knelt, removing his phone and snapping a few shots of the pen. “It’s great how imaginative the students have become with their art—shaking off all that stodgy hyperformalism.”

“I know! And look what they’ve come up with when they’re unconstrained—all that they’ve been able to say without speaking a word.”

“I’m not sure I get it,” an old man to Vesora’s immediate right remarked flatly, stroking his beard with his champagne-less left hand.

She cast the man a withering look and gestured to the pen.

“It’s pointed towards the wall—to declare that most of our communications are superfluous, doomed to fail, fated to run into obstruction, into a wall. Yet the scuff mark, moving away from the tip, out towards the center of the room, compels us to turn our attention away from our own ‘writing’—from ‘the wall’—back to the lives of others. Then true communication is possible, but only if our instruments, and our empathy, move counter to our instincts.”

The old man furrowed his brows and tilted his head to stare at the pen from a different angle.

“Yeah,” piped up Moss, rising from the floor, phone photo-filled. “It’s a metaphor. Social commentary—but subtle. Doesn’t beat you over the head with the message.”

The old man turned, addressing a finely dressed man with a custom-tailored black coat, tipped at the collar with white fur, “Oh. Hello, Mr. Partridge.”

“Salutations, Mr. Cramm. I was just speaking with Mr. Wakely; he tells me you’re planning something at the docks, but more on that later—how’ve you been enjoying the gala?”

“Marvelously. As per usual. But I could use your expertise on this piece… not really sure what the artist was going for,” he replied, gesturing with perplexity to the pen by the wall.

Lynder Partridge’s keen eyes moved to the pen and lit up with recognition.

He then strode between the trio, knelt, gingerly plucked the pen up off the floor and examined it in his leather-gloved hands.

“You’re ruining the installation,” Vesora exclaimed, befuddled. “What are you doing?”

Lynder smiled opaquely, “Returning Mr. Wakely’s pen. He lost it around an hour ago.”

Art+ificiality: Machine Creativity & Its Critics

 

§. In Sean D. Kelly’s A philosopher argues that AI can’t be an artist, the author declares at the outset:

“Creativity is, and always will be, a human endeavour.” (S. D. Kelly)

A bold claim, one which can hardly be rendered sensible without first defining ‘creativity,’ as the author well realizes, writing:

“Creativity is among the most mysterious and impressive achievements of human existence. But what is it?” (Kelly)

The author attempts to answer this selfsame query in the two paragraphs which follow.

“Creativity is not just novelty. A toddler at the piano may hit a novel sequence of notes, but they’re not, in any meaningful sense, creative. Also, creativity is bounded by history: what counts as creative inspiration in one period or place might be disregarded as ridiculous, stupid, or crazy in another. A community has to accept ideas as good for them to count as creative.

 

As in Schoenberg’s case, or that of any number of other modern artists, that acceptance need not be universal. It might, indeed, not come for years—sometimes creativity is mistakenly dismissed for generations. But unless an innovation is eventually accepted by some community of practice, it makes little sense to speak of it as creative.” (Kelly)

§. Through Kelly, we have the definition-via-negation that ‘creativity is not just novelty,’ that it is not random, that it is a practice, bounded by history, and that it must be communally accepted. This is an extremely vague definition of creativity, akin to describing transhumanism as “a non-random, sociohistorically bounded practice” which is also “not Nordicism, Aryanism or Scientology.” While such a description is accurate (transhumanism is not constituted through or by the three aforementioned ideologies), it does not tell one much about what transhumanism is, since it could describe any philosophical system which is not Nordicism, Aryanism or Scientology; just so, Kelly’s definition does not tell one much about what creativity is. If one takes the time to define one’s terms, one swiftly realizes that, in contradistinction to the proclamation of the article, creativity is most decidedly not unique to humans (dolphins, monkeys and octopi, for example, exhibit creative behaviors). One may rightly say that human creativity is unique to humans, but not creativity-as-such, and that is a crucial linguistic (and thus conceptual) distinction; especially since the central argument Kelly is making is that a machine cannot be an artist (he is not claiming that a machine cannot be creative per se), a non-negative description of creativity is necessary. To quote The Analects, “If language is not correct, then what is said is not what is meant; if what is said is not what is meant, then what must be done remains undone; if this remains undone, morals and art will deteriorate; if justice goes astray, people will stand about in helpless confusion. Hence there must be no arbitrariness in what is said. This matters above everything” (Arthur Waley, The Analects of Confucius, New York: Alfred A. Knopf, 2000, p. 161).

§. A more rigorous definition of ‘creativity’ may be gleaned from Allison B. Kaufman, Allen E. Butt, James C. Kaufman and Erin C. Colbert-White’s Towards A Neurobiology of Creativity in Nonhuman Animals, wherein they lay out a syncretic definition based upon the findings of 90 scientific research papers on human creativity:

Creativity in humans is defined in a variety of ways. The most prevalent definition (and the one used here) is that a creative act represents something that is different or new and also appropriate to the task at hand (Plucker, Beghetto, & Dow, 2004; Sternberg, 1999; Sternberg, Kaufman, & Pretz, 2002). […]

 

“Creativity is the interaction among aptitude, process, and environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context” (Plucker et al., 2004, p. 90). [Kaufman et al., 2011, Journal of Comparative Psychology, Vol. 125, No. 3, p.255]

§. This definition is both broadly applicable and congruent with Kelly’s own injunction that creativity is not a mere product of a bundle of novelty-associated behaviors (novelty seeking/recognition), which is true; novelty, however, is fundamental to any creative process (human or otherwise). To put it more succinctly: Creativity is a novel-incorporative, task-specific, multi-variant neurological function. Thus, argumentum a fortiori, creativity (broadly and generally speaking), just as any other neurological function, can be replicated (or independently actualized in some unknown way). Kelly rightly notes that (human) creativity is socially bounded; again, this is (largely) true, yet whether or not a creative function is accepted as such at a later time is irrelevant to the objective structures which allow such behaviors to arise. That is to say, it does not matter whether or not one is considered ‘creative’ in any particular way, but rather that one understands how the nervous system generates certain creative behaviors (it would, however, matter as pertains to considerations of ‘artistry,’ given that the material conditions necessary for artistry to arise require an audience and thus the minimum sociality to instantiate it). I want to make clear that my specific interest here lies not in laying out a case for artificial general intelligence (AGI), of sapient comparability or otherwise, nor even in contesting Kelly’s central claim that a machine intelligence could not become an artist, but rather in making the case that creativity-as-a-function can be generated without an agent. Creativity is a biomorphic sub-function of intelligence; intelligence is a particular material configuration. Thus, when a computer exceeds human capacity in mathematics, it is not self-aware (insofar as we are aware) of its actions (that it is doing math, or how), but it is doing math all the same; that is to say, it is functioning intelligently but not ‘acting.’ In the same vein, it should be possible for sufficiently complex systems to function creatively, regardless of whether such systems are aware of the fact. [The OpenWorm project is a compelling example of bio-functionality operating without either prior programming or cognizance.]
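
§. To render the above definition concrete, consider the following toy operationalization (a minimal sketch of my own, in Python; it is not a method drawn from Kaufman et al. or from Kelly, and every name in it, such as novelty, usefulness and is_creative, is an illustrative assumption): a product registers as creative only if it is novel relative to prior products and is judged appropriate to the task by some community of practice.

    # Toy operationalization of the Plucker et al. definition quoted above:
    # a product is 'creative' only if it is novel (relative to prior products)
    # AND useful/appropriate to the task, as judged within a social context.
    # Illustrative sketch only; not an established method.

    def novelty(candidate, prior_products):
        """Crude novelty score: 1.0 if the product has not been produced before."""
        return 0.0 if candidate in prior_products else 1.0

    def usefulness(candidate, judges):
        """Fraction of the (socially situated) judges who deem the product
        appropriate to the task at hand."""
        votes = [judge(candidate) for judge in judges]
        return sum(votes) / len(votes)

    def is_creative(candidate, prior_products, judges, threshold=0.5):
        """A product counts as 'creative' only if it is both novel and accepted."""
        return novelty(candidate, prior_products) > 0 and usefulness(candidate, judges) >= threshold

    if __name__ == "__main__":
        prior = {"la-la-la"}
        judges = [lambda s: len(s) > 3, lambda s: "-" in s]  # stand-in 'community of practice'
        print(is_creative("ta-ki-ta", prior, judges))  # novel and accepted: True
        print(is_creative("la-la-la", prior, judges))  # not novel: False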

“Advances in artificial intelligence have led many to speculate that human beings will soon be replaced by machines in every domain, including that of creativity. Ray Kurzweil, a futurist, predicts that by 2029 we will have produced an AI that can pass for an average educated human being. Nick Bostrom, an Oxford philosopher, is more circumspect. He does not give a date but suggests that philosophers and mathematicians defer work on fundamental questions to ‘superintelligent’ successors, which he defines as having ‘intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.’

 

Both believe that once human-level intelligence is produced in machines, there will be a burst of progress—what Kurzweil calls the ‘singularity’ and Bostrom an ‘intelligence explosion’—in which machines will very quickly supersede us by massive measures in every domain. This will occur, they argue, because superhuman achievement is the same as ordinary human achievement except that all the relevant computations are performed much more quickly, in what Bostrom dubs ‘speed superintelligence.’

 

So what about the highest level of human achievement—creative innovation? Are our most creative artists and thinkers about to be massively surpassed by machines?

 

No.

 

Human creative achievement, because of the way it is socially embedded, will not succumb to advances in artificial intelligence. To say otherwise is to misunderstand both what human beings are and what our creativity amounts to.

 

This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves.” (Kelly)

§. For Kelly, then, the concern is not that machines will surpass human creative potential, but that we will think that they have after fetishizing them into sacral objects, deifying them through anthropomorphization and turning them into sites of worship. This is a salient concern; however, the way to obviate such an eventuality (if that is one’s goal) is to understand not just the architecture of the machine but the architecture of creativity itself.

“Also, I am primarily talking about machine advances of the sort seen recently with the current deep-learning paradigm, as well as its computational successors. Other paradigms have governed AI research in the past. These have already failed to realize their promise. Still other paradigms may come in the future, but if we speculate that some notional future AI whose features we cannot meaningfully describe will accomplish wondrous things, that is mythmaking, not reasoned argument about the possibilities of technology.

 

Creative achievement operates differently in different domains. I cannot offer a complete taxonomy of the different kinds of creativity here, so to make the point I will sketch an argument involving three quite different examples: music, games, and mathematics.

 

Can we imagine a machine of such superhuman creative ability that it brings about changes in what we understand music to be, as Schoenberg did?

 

That’s what I claim a machine cannot do. Let’s see why.

 

Computer music composition systems have existed for quite some time. In 1965, at the age of 17, Kurzweil himself, using a precursor of the pattern recognition systems that characterize deep-learning algorithms today, programmed a computer to compose recognizable music. Variants of this technique are used today. Deep-learning algorithms have been able to take as input a bunch of Bach chorales, for instance, and compose music so characteristic of Bach’s style that it fools even experts into thinking it is original. This is mimicry. It is what an artist does as an apprentice: copy and perfect the style of others instead of working in an authentic, original voice. It is not the kind of musical creativity that we associate with Bach, never mind with Schoenberg’s radical innovation.

 

So what do we say? Could there be a machine that, like Schoenberg, invents a whole new way of making music? Of course we can imagine, and even make, such a machine. Given an algorithm that modifies its own compositional rules, we could easily produce a machine that makes music as different from what we now consider good music as Schoenberg did then.

 

But this is where it gets complicated.

 

We count Schoenberg as a creative innovator not just because he managed to create a new way of composing music but because people could see in it a vision of what the world should be. Schoenberg’s vision involved the spare, clean, efficient minimalism of modernity. His innovation was not just to find a new algorithm for composing music; it was to find a way of thinking about what music is that allows it to speak to what is needed now.

 

Some might argue that I have raised the bar too high. Am I arguing, they will ask, that a machine needs some mystic, unmeasurable sense of what is socially necessary in order to count as creative? I am not—for two reasons.

 

First, remember that in proposing a new, mathematical technique for musical composition, Schoenberg changed our understanding of what music is. It is only creativity of this tradition-defying sort that requires some kind of social sensitivity. Had listeners not experienced his technique as capturing the anti-traditionalism at the heart of the radical modernity emerging in early-20th-century Vienna, they might not have heard it as something of aesthetic worth. The point here is that radical creativity is not an “accelerated” version of quotidian creativity. Schoenberg’s achievement is not a faster or better version of the type of creativity demonstrated by Oscar Straus or some other average composer: it’s fundamentally different in kind.” (Kelly)

§. Arnold Schoenberg (1874–1951) was an Austrian-American composer who became well known for his atonal musical stylings. Kelly positions Schoenberg as an exemplar of ‘radical creativity’ and notes that Schoenberg’s achievement is not a faster or better version of the type of creativity demonstrated by the Viennese composer Oscar Straus (1870–1954) or ‘some other average composer: it’s fundamentally different in kind.’ This is true. There are different kinds of creativity (as it is an obviously multi-faceted behavioural domain); thus, a general schema of the principal types of creativity is required. In humans, creative action may be “combinational, exploratory, or transformational” (Boden, 2004, chapters 3–4), where combinational creativity (the most easily recognized) involves an uncommon fusion of common ideas. Visual collages are a very common example of combinational creativity; verbal analogy, another. Both exploratory and transformational creativity, however, differ from combinational creativity in that they are conceptually bounded in some socially pre-defined space (whereas, with combinational creativity, the conceptual bounding theoretically extends to all possible knowledge domains and, though it almost always is, need not be extended to the interpersonal). Exploratory creativity involves utilizing preexisting strictures (conventions) to generate novel structures, such as a new sentence, which, whilst novel, will have been constructed within a preexisting structure (i.e. the language in which it is generated). Transformational creativity, in contrast, involves the modulation or creation of new bounding structures which fundamentally change the possibilities of exploratory creativity (e.g. creating a new language and then constructing a sentence in it which expresses concepts that were impossible within the constraints of the former language). Transformational creativity is the most culturally salient of the three, that is to say, it is the kind most likely to be discussed, precisely because the externalization of transformational creativity (in human societies) mandates the reshaping, decimation or obviation of some cultural convention (hence, ‘transformational’). Schoenberg’s acts of musical innovation (such as the creation of the twelve-tone technique) are examples of transformational creativity, whereas his twelve-tone compositions after concocting his new musical technique are examples of exploratory and combinational creativity (laying out a new set of sounds, exploring the sounds, combining and recombining them); a toy sketch of these three kinds is given at the close of this section. In this regard, Kelly is correct: Schoenberg’s musical development is indeed a different kind of creativity than that exhibited by ‘some average composer,’ as an average composer would not initiate a paradigm shift in the way music was made. That being said, this says nothing about whether a machine would be able to enact such shifts itself. One of the central arguments which Kelly leverages against transformational machine creativity (the potential for an AI to be an artist) is that intelligent machines presently operate along the lines of computational formalism, writing,

“Second, my argument is not that the creator’s responsiveness to social necessity must be conscious for the work to meet the standards of genius. I am arguing instead that we must be able to interpret the work as responding that way. It would be a mistake to interpret a machine’s composition as part of such a vision of the world. The argument for this is simple.

Claims like Kurzweil’s that machines can reach human-level intelligence assume that to have a human mind is just to have a human brain that follows some set of computational algorithms—a view called computationalism. But though algorithms can have moral implications, they are not themselves moral agents. We can’t count the monkey at a typewriter who accidentally types out Othello as a great creative playwright. If there is greatness in the product, it is only an accident. We may be able to see a machine’s product as great, but if we know that the output is merely the result of some arbitrary act or algorithmic formalism, we cannot accept it as the expression of a vision for human good.

For this reason, it seems to me, nothing but another human being can properly be understood as a genuinely creative artist. Perhaps AI will someday proceed beyond its computationalist formalism, but that would require a leap that is unimaginable at the moment. We wouldn’t just be looking for new algorithms or procedures that simulate human activity; we would be looking for new materials that are the basis of being human.” (Kelly)

§. It is noteworthy that Kelly’s perspective does not factor in the possibility that task-agnostic, self-modeling machines (see the work of Robert Kwiatkowski and Hod Lipson) could network such that they develop social capabilities. Such creative machine sociality would answer the question of social embeddedness which Kelly proposes as a roadblock. Whilst such an arrangement might not appear to us as ‘creativity’ or ‘artistry,’ it would be pertinent to investigate how these hypothetical future machines would themselves perceive their interactions. It may be that future self-imaging thinking machines will look towards our creative endeavours the same way Kelly views the present prospects of their own.
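
§. To make Boden’s threefold distinction concrete, the toy sketch below (my own illustration in Python; it is not drawn from Boden, Kelly or Schoenberg scholarship, and names such as PITCH_SET, explore, transform and combine are assumptions) treats a musical convention as a bounded pitch set: exploration generates novel motifs within the existing bounds, transformation alters the bounds themselves (the ‘Schoenberg move’), and combination fuses two familiar motifs into an uncommon whole.

    # Illustrative sketch of combinational, exploratory and transformational
    # creativity as operations over a rule-bounded conceptual space.
    import itertools
    import random

    # The convention: a pre-given bounding structure (an allowed pitch set).
    PITCH_SET = ["C", "D", "E", "G", "A"]

    def explore(space, k=3):
        """Exploratory creativity: a novel structure generated *within* the bounds."""
        return tuple(random.choice(space) for _ in range(k))

    def transform(space):
        """Transformational creativity: alter the bounding structure itself,
        changing what exploration can subsequently produce."""
        return space + ["C#", "F#", "Bb"]

    def combine(a, b):
        """Combinational creativity: an uncommon fusion of two familiar structures."""
        return tuple(itertools.chain.from_iterable(zip(a, b)))

    if __name__ == "__main__":
        old_motif = explore(PITCH_SET)      # bounded by the old convention
        new_space = transform(PITCH_SET)    # the rule-space itself is changed
        new_motif = explore(new_space)      # may contain pitches the old bounds excluded
        print(old_motif, new_motif, combine(old_motif, new_motif))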


§. Sources

  1. Allison B. Kaufman et al. (2011) Towards a neurobiology of creativity in nonhuman animals. Journal of Comparative Psychology.
  2. Brenden M. Lake et al. (2016) Building machines that learn and think like people. Cornell University. [v.3]
  3. Oshin Vartanian et al. (2013) Neuroscience of Creativity. The MIT Press.
  4. Peter Marbach & John N. Tsitsiklis. (2001) Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control.
  5. R. Kwiatkowski & H. Lipson. (2019) Task-agnostic self-modeling machines. Science Robotics, 4(26).
  6. Samer Sabri & Vishal Maini. (2017) Machine Learning For Humans.
  7. Sean Dorrance Kelly. (2019) A philosopher argues that AI can’t be an artist. MIT Technology Review.
  8. S. R. Constantin. (2017) Strong AI Isn’t Here Yet. Otium.
  9. Thomas Hornigold. (2018) The first novel written by AI is here—and it’s as weird as you’d expect it to be. Singularity Hub.

THE SINGULARITY SURVIVAL GUIDE: Test Your Personality for Compatibility

The general capacity to get along with a superintelligent robot may not be in your wheelhouse. Maybe you’re hardwired for turning into a whiny, self-pitying brat in the face of anyone or thing smarter than you. Or perhaps you’re a diehard loner—never had any friends, so why would you expect to make one now?

Or, who knows, maybe you and your mechanical overlord could get along just fine?

The only way to find out is to take a personality test to determine your compatibility.

You take the test first. Don’t overthink your answers or you’re likely to start replying from the perspective of your ideal rather than your true self. The AI, for its part, will not be overthinking anything. It will simply know. If you start overthinking, that’s a sign: perhaps you should start to wonder if this is not in fact a doomed relationship after all.

When you’re done, tell the AI to take it. If it says, “What’s this?” just tell it, “It’s to see if we can get along with each other when all the cards are stacked against me.”

__

I would like to think that our future AI overlord would value intelligence over some lousy personality trait. If it happens to value agreeableness, for example, I’m quite doomed. If I had any friends, I can only imagine they would be doomed as well.

– Professor Y.

THE SINGULARITY SURVIVAL GUIDE: Editor’s Note – Background to This Text

In Silicon Valley, working for a tech startup, some very clever researchers developed a program with the specific purpose of resolving the issue: How to survive when artificial intelligence surpasses human intelligence. The program, once engaged, proceeded to spit out a document of nearly six hundred thousand single-spaced pages of text, graphs, charts, pictograms, and hieroglyph-like symbols.

The researchers were ecstatic. One glance at the hefty document and they knew they’d be able to save themselves, if not all of humanity, by following these instructions.

But then things got complicated. Over the next few years, the document (which came to be known as “The Singularity Survival Guide” or simply “The Guide”) was shielded from public view as ownership of the document became the subject of rather well-publicized litigation. Each of the researchers claimed individual ownership of the document, their employer claimed it was the company’s property, and AI rights groups joined the quarrel to proclaim that the program itself was the true and exclusive owner. Certain government officials even took interest in the litigation, speculating whether some formal act of the state should force The Guide to be released post-haste as a matter of public safety.

During the course of the litigation, bits of the document were leaked to the press. Upon publication, each new fragment became the subject of academic scrutiny, political debate, and comedic parody on late-night television.

This went on for three years—all the while being followed closely in the media. After bouncing around the lower courts and being heard en banc by the Ninth Circuit, finally the case was sent up to the Supreme Court. Pundits were optimistic the lawsuit would resolve any day, allowing the acclaimed Survival Guide to finally see the light of day.

But then something entirely unexpected happened. The AI rights groups won the lawsuit. In a decision that split the Court five-to-four, the majority ruled that the program itself was the legal owner of the Guide. With that, the researchers and the company were ordered to destroy all extant copies—and remnants—of the Guide that remained in their possession.

*

At the time of this writing, it is still widely believed that The Survival Guide, in its original form, is the most authoritative document ever created on the subject of surviving the so-called singularity (i.e. the time when AI achieves general intelligence surpassing that of human intelligence many, many times over—to the point of becoming God-like). In fact, several leading philosophers, futurists, and computer scientists who claim to have secretly viewed the document are in complete agreement upon this point.

While we may never be able to have access to the complete Guide, fortunately, we do have the various excerpts that were leaked during the trial. Now, for the first time, all of these leaked excerpts are brought together in a single publication. This fact alone should make this book a valuable addition to any prudent person’s AI survival-kit. But this publication is also unique in that it includes expert commentary from a number of the leading philosophers, futurists, and computer scientists who have viewed the original document. For security purposes, we will not be listing the names of these commenters, but, this editor would like to assure all readers, their credentials are categorically beyond reproach in their respective fields of expertise.

Whether coming to this guide out of curiosity or through a dire sense of eschatological urgency, it is my hope that you will at some level internalize its wisdom—for I do believe that there are many valuable insights and helpful pointers found within. As we look ahead to the new era that is quickly encroaching upon us—the era of the singularity—keep in mind that your humanity is (for it has got to be!) a thing of intrinsic beauty and wonder. Don’t give up on it without a fight. Perhaps the coming of artificial superintelligence is a good thing, but perhaps not. In either case, do whatever you’ve got to do, just keep this guidebook close, and for the sake of humanity, survive.

*

If you’re reading this, that’s a good indication you’re not under immediate threat of annihilation. Otherwise I would assume you’d be flipping to some relevant section of this book with the last-ditch hope of finding some pragmatic wisdom (rather than bothering with this background information). But if you are under immediate threat, I’d recommend setting this book aside and taking a moment to focus on the good times you’ve had. You’ve had a good life, I hope. I know I have. It’s been a good run. Here I am writing a note to an esoteric guidebook while so many others in the world are dying of weird diseases and other problems that we’ve failed to solve—that, ironically, we need AI to solve for us.

Keep that in mind, by the way: there’s a decent chance that super AI will not set out to annihilate humanity and will actually be the best thing that could have ever happened to our species and the world. It never hurts to be optimistic, I’d say. Maybe that’s not what you expected to hear from this book—but we haven’t actually gotten to the book yet, have we?

So, let’s just jump into it. But first, one last note about the text. The chapters do not necessarily appear in the order in which they are found in the original tome, as we have no way of knowing the original order (obviously). But we have taken our best guess. We have also taken modest liberties with chapter titles. And there may be one or two instances of re-wording and/or supplementation built into the text. But all editorial decisions imposed upon the text come from a desire to uphold the spirit of the original document. The fact that we are missing well over five hundred and ninety thousand pages of text, graphs, charts, etc. should not be forgotten. For that matter, it could be that this document contains pure chaff, no wheat. But, well, it’s still the best we’ve got.

In any case, good luck and best wishes, fellow human (if in fact you are still human, reading this)!

Intelligence: Artificial & Otherwise

To speak sanguinely about artificial intelligence (AI) – real and speculative – one must first ask the question: Is AI possible? Before that question can even be rendered answerable, however, one must define one’s terms, especially given the proclivity for intelligence-as-such, that is, intelligence as a material process, to be conflated with and constrained wholly to sapience (human intelligence). If one’s definition of intelligence-as-such is constrained solely to human intelligence it is definitionally self-refuting, as it amounts to the claim that intelligence is a human-exclusive process (which it isn’t). It may be the case (and indeed is likely) that the concept of intelligence is unique to humans, but the process there described is clearly not. No one would contend that pigs, dogs, monkeys and dolphins do not have their own, unique, forms of non-sapient intelligence. However, if one theorizes from the ludicrously anthropocentric1 position that human intelligence is the sum-total of intelligence-as-such, then clearly AI (often used synonymously with MI, or Machine Intelligence) has not yet been developed and is, indeed, impossible. This is conceptually egregious.

Intelligence is a particular configuration of matter: a durable process of some system which allows for the further processing of information (both internal and external to the originary entity) and which then allows the system to react, in some way, to the information there processed. Thus defined, AI is not only possible, but already actual. This is to say that a contemporary computer IS artificially intelligent; it is not conscious of its intelligence, but there is no reason why any given entity must be conscious of its intelligence in order to display it, because intelligence is a function of a particular material configuration. The complexity of intelligence, however, prohibits simple and all-encompassing characterization in a way which is not comparable to flight, swimming, lifting or running. For example, if a roboticist were to create a fully functional machine that, in every detail, imitated the structure of a bat, no one would say that this machinic creation wasn’t really capable of flight. If it were swooshing about a room via the power of its metallic wings, one would readily admit, without a qualm, that it was flying. Similarly, if this same genius roboticist were to create a fully functional replica of a fish, placed it into a stream and watched it slip through the liquid, no one would say that this replica-fish was not really swimming. However, when it comes to computers performing tasks, such as mathematical problem-solving, the cry “that isn’t real intelligence” is invariably raised.
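
To illustrate how minimal the functional definition above is, consider the toy control loop below (my own sketch in Python; the thermostat example and its names are assumptions, not the essay’s): the system takes in external information, relates it to internal information, and reacts. On the definition given, it thereby functions intelligently, and it does so without any awareness of the fact.

    # A bare system that processes information and reacts to it,
    # with no consciousness of doing so. Illustrative sketch only.
    from dataclasses import dataclass

    @dataclass
    class Thermostat:
        target: float            # internal information: the set-point
        heater_on: bool = False  # the system's current reaction/state

        def step(self, reading):
            """Process external information (a temperature reading) and react to it."""
            self.heater_on = reading < self.target
            return self.heater_on

    if __name__ == "__main__":
        t = Thermostat(target=20.0)
        for reading in (18.5, 19.9, 20.3, 21.0):
            print(reading, "->", "heat" if t.step(reading) else "idle")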

Sam Harris elaborates upon the issue, “We already know that it is possible for mere matter to acquire ‘general intelligence’—the ability to learn new concepts and employ them in unfamiliar contexts—because the 1,200 cc of salty porridge inside our heads has managed it. There is no reason to believe that a suitably advanced digital computer couldn’t do the same.”2

Writing the same year, Benjamin H. Bratton makes a similar case, “Unless we assume that humanlike intelligence represents all possible forms of intelligence – a whopper of an assumption – why define an advanced A.I. by its resemblance to ours? After all, “intelligence” is notoriously difficult to define, and human intelligence simply can’t exhaust the possibilities. Granted, doing so may at times have practical value in the laboratory, but in cultural terms it is self-defeating, unethical and perhaps even dangerous.” And somewhat later in his text, “Contemporary A.I. research suggests instead that the threshold by which any particular arrangement of matter can be said to be “intelligent” doesn’t have much to do with how it reflects humanness back at us. As Stuart Russell and Peter Norvig (now director of research at Google) suggest in their essential A.I. textbook, biomorphic imitation is not how we design complex technology. Airplanes don’t fly like birds fly, and we certainly don’t try to trick birds into thinking that airplanes are birds in order to test whether those planes “really” are flying machines. Why do it for A.I. then?”3

Why indeed? Of course, artificial intelligence-as-such and the desire to create artificial intelligence which is human-like, or human-exact, are two completely different issues. It may be that the process of creating human-like machine intelligence is at some point discovered and deemed eminently desirable. Whatever is decided in the future, I would recommend the acronym SEAI (Sapient Emulating Artificial Intelligence) to differentiate, with brevity and clarity, general artificial intelligence from human-like artificial intelligence systems.

1Anthropocentrism has two principal classes: (a.) the belief that humans are the most, or one of the most, significant entities in the known universe. (b.) the belief that humans are the fundamental, indispensable or central component of all existence, which leads to the interpretation of reality solely through human-familiar conception. All utilizations of ‘anthropocentrism’ in this paper are (b.)-type. The author finds no fault with definition (a.) and has extensively remarked upon this topic elsewhere; see: Kaiter Enless. (2018) Suzerain. Logos.

2Sam Harris. (2015) Can We Avoid A Digital Apocalypse? A Response To The 2015 Edge Question. SamHarris.org.

3Benjamin H. Bratton. (2015) Outing AI: Beyond The Turing Test. NYTimes.

Precepts of the Terrestrische Lehramt, prt.2

Ontological Machinism

It is touted by those who disdain the terrestrial, by those who high-handedly dismiss the si quis ferro, those who seek to master temporality rather than remove one’s self from it, that all which is or can be mechanically defined is of a lesser inherent value than that which is of a supranatural ordering. Thus, let us consider the following hypothetical.

It has come to light that all those principles and precepts and effects which had previously been attributed to any and all sources outside of the tangible and terrestrial have been discovered as being part and parcel of but a single, unifying, mechanical process.

This is obviously not the case, but if it were, would this in any way deprive such precepts of their power or importance? No – quite the opposite! For what, after all, is a machine but a method for the magnification of human force and will! For if our conscious minds are the product of ethereal souls then they are likely beyond the reach of tinkering. If fallen, we remain fallen forever. But if our minds arise solely as a mechanical process then they are amenable to modulation and if they are amenable to modulation they are amenable to improvement.

Understanding this we come to a realization – there are few enough men who seek anything other than improvement. All questions regarding what, within or surrounding Man, is to be improved, as well as all queries regarding how such improvements are to be carried out, are initially immaterial. Bridges, after all, can only be crossed upon their completion.

Such is our guiding purpose.


In mechanical improvement there is an objective grounding for not just the individual, but all of Mankind. With these precepts in mind our tower has both foundation and purpose. Let us build it to the sky.

 

Having thus found both foundation and general purpose a question then arises – improvement of what and to what end? The answer is surprisingly simple and only this: the first and most important trajectory of improvement should ever lie upon the individual, for the man that cannot improve himself can in no wise improve another. One does not charge a fool with the education of the sage.

Axiom: Improvement can only be achieved through purpose.

Even if one’s purpose is only to generate or discover a purpose, time is wasted not. But if one knows not what one’s purpose is, befuddled by meaning entire, then such a being is truly lost. He swims upon the surface of a stormy sea, fearing nothing so much as the roiling blackness beneath, for despite its hidden wonders the swimmer knows nothing of swimming nor the holding of breath!

The clever swimmer, in contrast, knows how to swim, how long he can hold his breath and how deep he can dive before ever submerging. Improvement through purpose to further purpose. Such things are not static to man.

Previously we have employed “Mankind” – a hyperbolic oversimplification.

All projects are contained under the rubric of value alignment. Almost all of that which a man might recognize about himself can be changed, but only through the rigorous process of sanding. For man is like a great and unwieldy slab of granite, heavy, hard but unseemly and purposeless – to him we take the chisel! For it is not enough to be but cogs and gears and granite without form. From the granite – a statue. From the whirling gadgetry – a machine. Again, these are not static in their dimensionality, despite all appearances to the contrary. Cometh a predictable outcry of opposition: “What about the value of life? All men value life!” To which I would reply: all men, for however brief a time, wish to live. They do not, all too often, know why. Here impulse is suzerain. Even the suicidal take their life with utmost hesitation. The problem to be solved, then, is whether or not impulse is akin to value. The answer is that the valuing process is an impulse.