Art+ificiality: Machine Creativity & Its Critics

 

§. In Sean D. Kelly’s A philosopher argues that AI can’t be an artist, the author declares at the outset:

“Creativity is, and always will be, a human endeavour.” (S. D. Kelly)

A bold claim, one which can hardly be rendered sensible without first defining ‘creativity,’ as the author well realizes, writing:

“Creativity is among the most mysterious and impressive achievements of human existence. But what is it?” (Kelly)

The author attempts to answer this query in the following two paragraphs.

“Creativity is not just novelty. A toddler at the piano may hit a novel sequence of notes, but they’re not, in any meaningful sense, creative. Also, creativity is bounded by history: what counts as creative inspiration in one period or place might be disregarded as ridiculous, stupid, or crazy in another. A community has to accept ideas as good for them to count as creative.

 

As in Schoenberg’s case, or that of any number of other modern artists, that acceptance need not be universal. It might, indeed, not come for years—sometimes creativity is mistakenly dismissed for generations. But unless an innovation is eventually accepted by some community of practice, it makes little sense to speak of it as creative.” (Kelly)

§. Through Kelly, we have a definition-via-negation: ‘creativity is not just novelty,’ it is not random, it is a practice, bounded by history, and it must be communally accepted. This is an extremely vague definition of creativity, akin to describing transhumanism as “a non-random, sociohistorically bounded practice” which is also “not Nordicism, Aryanism or Scientology.” While such a description is accurate (transhumanism is not constituted through or by the three aforementioned ideologies), it does not tell one much about what transhumanism is, since it could describe any philosophical system which is none of those three things, just as Kelly’s definition does not tell one much about what creativity is. If one takes the time to define one’s terms, one swiftly realizes that, in contradistinction to the proclamation of the article, creativity is most decidedly not unique to humans (dolphins, monkeys and octopuses, for example, exhibit creative behaviors). One may rightly say that human creativity is unique to humans, but not creativity-as-such, and that is a crucial linguistic (and thus conceptual) distinction, especially since Kelly’s central argument is that a machine cannot be an artist (he is not claiming that a machine cannot be creative per se); thus a non-negative description of creativity is necessary. To quote The Analects: “If language is not correct, then what is said is not what is meant; if what is said is not what is meant, then what must be done remains undone; if this remains undone, morals and art will deteriorate; if justice goes astray, people will stand about in helpless confusion. Hence there must be no arbitrariness in what is said. This matters above everything” (Arthur Waley, The Analects of Confucius, New York: Alfred A. Knopf, 2000, p. 161).

§. A more rigorous definition of ‘creativity’ may be gleaned from Allison B. Kaufman, Allen E. Butt, James C. Kaufman and Erin C. Colbert-White’s Towards A Neurobiology of Creativity in Nonhuman Animals, wherein they lay out a syncretic definition based upon the findings of 90 scientific research papers on human creativity.

Creativity in humans is defined in a variety of ways. The most prevalent definition (and the one used here) is that a creative act represents something that is different or new and also appropriate to the task at hand (Plucker, Beghetto, & Dow, 2004; Sternberg, 1999; Sternberg, Kaufman, & Pretz, 2002). […]

 

“Creativity is the interaction among aptitude, process, and environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context” (Plucker et al., 2004, p. 90). [Kaufman et al., 2011, Journal of Comparative Psychology, Vol. 125, No. 3, p.255]

§. This definition is both broadly applicable and congruent with Kelly’s own injunction that creativity is not a mere product of a bundle of novelty-associated behaviors (novelty seeking/recognition). That is true; novelty is nevertheless fundamental to any creative process, human or otherwise. To put it more succinctly: creativity is a novel-incorporative, task-specific, multivariate neurological function. Thus, argumentum a fortiori, creativity (broadly and generally speaking), like any other neurological function, can be replicated (or independently actualized in some as yet unknown way). Kelly rightly notes that (human) creativity is socially bounded; this, too, is largely true, but whether or not a creative act is accepted as such at a later time is irrelevant to the objective structures which allow such behaviors to arise. That is to say, it does not matter whether one is considered ‘creative’ in any particular way, but rather that one understands how the nervous system generates certain creative behaviors (it would, however, matter as pertains to considerations of ‘artistry,’ given that the material conditions necessary for artistry to arise require an audience and thus the minimum sociality to instantiate it). I want to make clear that my specific interest here lies not in laying out a case for artificial general intelligence (AGI) or sapient comparability, nor even in contesting Kelly’s central claim that a machine intelligence could not become an artist, but rather in making the case that creativity-as-a-function can be generated without an agent.
Creativity is a biomorphic sub-function of intelligence, and intelligence is a particular material configuration. Thus, when a computer exceeds human capacity in mathematics, it is not self-aware (insofar as we can tell) of its actions (that it is doing math, or how), but it is doing math all the same; it is functioning intelligently but not ‘acting.’ In the same vein, it should be possible for sufficiently complex systems to function creatively, regardless of whether such systems are aware of the fact. [The OpenWorm project is a compelling example of bio-functionality operating without either prior programming or cognizance.]

“Advances in artificial intelligence have led many to speculate that human beings will soon be replaced by machines in every domain, including that of creativity. Ray Kurzweil, a futurist, predicts that by 2029 we will have produced an AI that can pass for an average educated human being. Nick Bostrom, an Oxford philosopher, is more circumspect. He does not give a date but suggests that philosophers and mathematicians defer work on fundamental questions to ‘superintelligent’ successors, which he defines as having ‘intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.’

 

Both believe that once human-level intelligence is produced in machines, there will be a burst of progress—what Kurzweil calls the ‘singularity’ and Bostrom an ‘intelligence explosion’—in which machines will very quickly supersede us by massive measures in every domain. This will occur, they argue, because superhuman achievement is the same as ordinary human achievement except that all the relevant computations are performed much more quickly, in what Bostrom dubs ‘speed superintelligence.’

 

So what about the highest level of human achievement—creative innovation? Are our most creative artists and thinkers about to be massively surpassed by machines?

 

No.

 

Human creative achievement, because of the way it is socially embedded, will not succumb to advances in artificial intelligence. To say otherwise is to misunderstand both what human beings are and what our creativity amounts to.

 

This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves.” (Kelly)

§. For Kelly, then, the concern is not that machines will surpass human creative potential, but that we will think they have after fetishizing them and turning them into sacral objects, deifying them through anthropomorphization and turning them into sites of worship. This is a salient concern; however, the way to obviate such an eventuality (if that is one’s goal) is to understand not just the architecture of the machine but the architecture of creativity itself.

“Also, I am primarily talking about machine advances of the sort seen recently with the current deep-­learning paradigm, as well as its computational successors. Other paradigms have governed AI research in the past. These have already failed to realize their promise. Still other paradigms may come in the future, but if we speculate that some notional future AI whose features we cannot meaningfully describe will accomplish wondrous things, that is mythmaking, not reasoned argument about the possibilities of technology.

 

Creative achievement operates differently in different domains. I cannot offer a complete taxonomy of the different kinds of creativity here, so to make the point I will sketch an argument involving three quite different examples: music, games, and mathematics.

 

Can we imagine a machine of such superhuman creative ability that it brings about changes in what we understand music to be, as Schoenberg did?

 

That’s what I claim a machine cannot do. Let’s see why.

 

Computer music composition systems have existed for quite some time. In 1965, at the age of 17, Kurzweil himself, using a precursor of the pattern recognition systems that characterize deep-learning algorithms today, programmed a computer to compose recognizable music. Variants of this technique are used today. Deep-learning algorithms have been able to take as input a bunch of Bach chorales, for instance, and compose music so characteristic of Bach’s style that it fools even experts into thinking it is original. This is mimicry. It is what an artist does as an apprentice: copy and perfect the style of others instead of working in an authentic, original voice. It is not the kind of musical creativity that we associate with Bach, never mind with Schoenberg’s radical innovation.

 

So what do we say? Could there be a machine that, like Schoenberg, invents a whole new way of making music? Of course we can imagine, and even make, such a machine. Given an algorithm that modifies its own compositional rules, we could easily produce a machine that makes music as different from what we now consider good music as Schoenberg did then.

 

But this is where it gets complicated.

 

We count Schoenberg as a creative innovator not just because he managed to create a new way of composing music but because people could see in it a vision of what the world should be. Schoenberg’s vision involved the spare, clean, efficient minimalism of modernity. His innovation was not just to find a new algorithm for composing music; it was to find a way of thinking about what music is that allows it to speak to what is needed now.

 

Some might argue that I have raised the bar too high. Am I arguing, they will ask, that a machine needs some mystic, unmeasurable sense of what is socially necessary in order to count as creative? I am not—for two reasons.

 

First, remember that in proposing a new, mathematical technique for musical composition, Schoenberg changed our understanding of what music is. It is only creativity of this tradition-defying sort that requires some kind of social sensitivity. Had listeners not experienced his technique as capturing the anti-­traditionalism at the heart of the radical modernity emerging in early-­20th-century Vienna, they might not have heard it as something of aesthetic worth. The point here is that radical creativity is not an “accelerated” version of quotidian creativity. Schoenberg’s achievement is not a faster or better version of the type of creativity demonstrated by Oscar Straus or some other average composer: it’s fundamentally different in kind.” (Kelly)
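The “mimicry” Kelly describes (deep-learning systems composing convincingly in Bach’s style) is, at bottom, learned statistical imitation. As a hedged illustration only, and assuming a toy first-order Markov chain rather than any system Kurzweil or the deep-learning work actually used, the idea can be sketched as follows:

```python
import random
from collections import defaultdict


def train(corpus):
    """Collect note-to-note transition statistics from example pieces."""
    transitions = defaultdict(list)
    for piece in corpus:
        for a, b in zip(piece, piece[1:]):
            transitions[a].append(b)
    return transitions


def compose(transitions, start, length, seed=42):
    """Sample a new sequence whose local statistics match the corpus."""
    rng = random.Random(seed)
    note, out = start, [start]
    for _ in range(length - 1):
        note = rng.choice(transitions[note])  # imitate observed habits
        out.append(note)
    return out


# A hypothetical two-piece "corpus"; real systems train on full chorales.
corpus = [["C", "E", "G", "E", "C"], ["C", "G", "E", "C", "E", "G"]]
model = train(corpus)
print(compose(model, "C", 6))  # novel sequence, but derivative in style
```

The output is new in the trivial sense that the exact sequence never appears in the corpus, yet every transition it makes was learned from the source material: novelty inside an inherited rule space, which is exactly why Kelly classes it as apprentice-style mimicry rather than Schoenberg-style innovation.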

§. Arnold Schoenberg (1874–1951) was an Austrian-American composer who became well known for his atonal musical stylings. Kelly positions Schoenberg as an exemplar of ‘radical creativity’ and notes that Schoenberg’s achievement is not a faster or better version of the type of creativity demonstrated by the Viennese composer Oscar Straus (1870–1954) or ‘some other average composer: it’s fundamentally different in kind.’ This is true. There are different kinds of creativity (it is an obviously multi-faceted behavioural domain); thus, a general schema of the principal types of creativity is required. In humans, creative action may be “combinational, exploratory, or transformational” (Boden, 2004, chapters 3–4), where combinational creativity (the most easily recognized) involves an uncommon fusion of common ideas. Visual collages are a very common example of combinational creativity; verbal analogy, another. Both exploratory and transformational creativity, however, differ from combinational creativity in that they are conceptually bounded in some socially pre-defined space (whereas with combinational creativity the conceptual bounding theoretically extends to all possible knowledge domains and, though it almost always is, need not be extended to the interpersonal). Exploratory creativity involves utilizing preexisting structures (conventions) to generate novel structures, such as a new sentence which, whilst novel, will have been constructed within a preexisting structure (i.e. the language in which it is generated). Transformational creativity, in contrast, involves the modulation or creation of new bounding structures which fundamentally change the possibility space of exploratory creativity (e.g. creating a new language and then constructing a new sentence in that language, wherein the new language allows for concepts that were impossible within the constraints of the former language).
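Boden’s distinction can be made concrete with a toy sketch (entirely my own illustration; the scale, seeds and twelve-tone row stand in for the “bounding structures” at issue): exploratory creativity searches within a fixed rule space, while transformational creativity replaces the rule space itself.

```python
import random

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # a pre-existing bounding structure


def exploratory(scale, length=8, seed=0):
    """Generate a novel sequence, but only from within the given scale."""
    rng = random.Random(seed)
    return [rng.choice(scale) for _ in range(length)]


def transformational(seed=1):
    """Replace the bounding structure itself: an ordered twelve-tone row
    in which serial position, not tonal hierarchy, governs what follows."""
    row = list(range(12))  # all twelve chromatic pitches, none privileged
    random.Random(seed).shuffle(row)
    return row


melody = exploratory(C_MAJOR)  # novel, yet inside the old rule space
tone_row = transformational()  # a new rule space entirely
```

Every note of `melody` is guaranteed to belong to the old scale, whereas `tone_row` discards the scale as an organizing principle altogether; the analogy to Schoenberg is loose, but it locates where the two kinds of creativity differ.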
Transformational creativity is the most culturally salient of the three, that is to say, the kind most likely to be discussed, precisely because the externalization of transformational creativity (in human societies) mandates the reshaping, decimation or obviation of some cultural convention (hence, ‘transformational’). Schoenberg’s acts of musical innovation (such as the creation of the twelve-tone technique) are examples of transformational creativity, whereas his twelve-tone compositions after concocting his new musical technique are examples of exploratory and combinational creativity (laying out a new set of sounds; exploring the sounds; combining and recombining them). In this regard, Kelly is correct: Schoenberg’s musical development is indeed a different kind of creativity than that exhibited by ‘some average composer,’ as an average composer would not initiate a paradigm shift in the way music was made. That being said, this says nothing about whether a machine could enact such shifts itself. One of the central arguments Kelly leverages against transformational machine creativity (the potential for an AI to be an artist) is that intelligent machines presently operate along the lines of computational formalism. He writes:

“Second, my argument is not that the creator’s responsiveness to social necessity must be conscious for the work to meet the standards of genius. I am arguing instead that we must be able to interpret the work as responding that way. It would be a mistake to interpret a machine’s composition as part of such a vision of the world. The argument for this is simple.

Claims like Kurzweil’s that machines can reach human-level intelligence assume that to have a human mind is just to have a human brain that follows some set of computational algorithms—a view called computationalism. But though algorithms can have moral implications, they are not themselves moral agents. We can’t count the monkey at a typewriter who accidentally types out Othello as a great creative playwright. If there is greatness in the product, it is only an accident. We may be able to see a machine’s product as great, but if we know that the output is merely the result of some arbitrary act or algorithmic formalism, we cannot accept it as the expression of a vision for human good.

For this reason, it seems to me, nothing but another human being can properly be understood as a genuinely creative artist. Perhaps AI will someday proceed beyond its computationalist formalism, but that would require a leap that is unimaginable at the moment. We wouldn’t just be looking for new algorithms or procedures that simulate human activity; we would be looking for new materials that are the basis of being human.” (Kelly)

§. It is noteworthy that Kelly’s perspective does not factor in the possibility that task-agnostic, self-modeling machines (see the work of Robert Kwiatkowski and Hod Lipson) could network such that they develop social capabilities. Such creative machine sociality would answer the question of social embeddedness which Kelly proposes as a roadblock. Whilst such an arrangement might not appear to us as ‘creativity’ or ‘artistry,’ it would be pertinent to investigate how these hypothetical future machines perceive their own interactions. It may be that future self-imaging thinking machines will look upon our creative endeavours the same way Kelly views the present prospects of their own.


§. Sources

  1. Allison B. Kaufman et al. (2011) Towards a neurobiology of creativity in nonhuman animals. Journal of Comparative Psychology, 125(3).
  2. Brenden M. Lake et al. (2016) Building machines that learn and think like people. arXiv preprint [v3].
  3. Oshin Vartanian et al. (2013) Neuroscience of Creativity. The MIT Press.
  4. Peter Marbach & John N. Tsitsiklis. (2001) Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control.
  5. R. Kwiatkowski & H. Lipson. (2019) Task-agnostic self-modeling machines. Science Robotics, 4(26).
  6. Samer Sabri & Vishal Maini. (2017) Machine Learning for Humans.
  7. Sean Dorrance Kelly. (2019) A philosopher argues that AI can’t be an artist. MIT Technology Review.
  8. S. R. Constantin. (2017) Strong AI Isn’t Here Yet. Otium.
  9. Thomas Hornigold. (2018) The first novel written by AI is here—and it’s as weird as you’d expect it to be. Singularity Hub.

Commentary On The AI Now Institute 2018 Report

The interdisciplinary New York-based AI Now Institute has released their sizable and informative 2018 report on artificial intelligence.

The paper, authored by the leaders of the institute in conjunction with a team of researchers, puts forth 10 policy recommendations in relation to artificial intelligences (AI Now policy suggestions in bold-face, our commentary in standard-type).

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain. This point is fairly obvious: AI should be regulated based upon its functional potential and its actual application(s). This is particularly urgent given the spread of facial recognition technologies (the ability for computers to discern particular individuals from photos and cameras) such as those employed by Facebook to offer tag-suggestions to users based upon only a picture of another person. The potential for misuse prompted Microsoft’s Brad Smith to call for congressional oversight of facial recognition technologies in a July 2018 blog post. If there is to be serious regulation in America, a state-by-state approach, given its modularity, would be preferable to any kind of one-size-fits-all federal oversight program. Corporate self-regulation should also be incentivized. However, regulation itself is not the key issue, nor is it what principally allows for widespread technological misuse; rather, it is the novelty of, and lack of knowledge surrounding, the technology. Few Americans know which companies are using which facial recognition technology, when, or how, and fewer still understand how precisely or vaguely these technologies work, and thus they cannot effectively guard against them when these are malevolently or recklessly deployed. What is truly needed is widespread public knowledge surrounding the creation, deployment and functionality of these technologies, as well as a flowering culture of technical ethics in these emerging fields, as the best regulation is self-regulation, that is to say, restraint and dutiful consideration in combination with a syncretic fusion of technics and culture. That, above anything else, is what should be prioritized.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest. [covered above]
  3. The AI industry urgently needs new approaches to governance. Internal governance structures at most technology companies are failing to ensure accountability for AI systems. This is a tricky issue but one which can be addressed in one of two ways: externally or internally. Either outside (that is, outside the company) governmental or public oversight can be established (investigatory committees, etc.), or the companies can themselves establish new norms and policies for AI oversight. Outside consumer pressure on corporations, if sufficiently widespread and sustained (whether through critique, complaint or outright boycott), can be leveraged to incentivize corporations to change both the ways they are presently using AI and their policies pertaining to prospective development and application. Again, this is an issue which can be mitigated both by enfranchisement and knowledge elevation.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector. Anti-black-boxing is an excellent suggestion with which I have no contention. If one is going to make something which is not just widely utilized but infrastructurally necessary, then its operation should be made clear to the public in as concise a manner as possible.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers. As whistleblowing is a wholly context-dependent enterprise, it is difficult to say much on any kind of rigid policy; indeed, AI Now’s stance seems a little too rigid in this regard. If the information leaked was done merely to damage the company and is accompanied by spin, the whistleblower may appear to the public as a hero when in reality he may be nothing more than a base rogue. Such things must be evaluated case by case.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services. Yes, they should.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces. When one hears “exclusion and discrimination” one instantly registers an ideological scent, familiar and disconcerting in its passive-aggressive hegemony. The questions of what/who is being excluded and why, and what/who is being discriminated against and for what reason, ought to be asked, else the whole issue is moot and, if pursued, will merely be the plaything of (generally well-meaning) demagogues. The paper makes particular mention of actions which “exclude, harass, or systemically undervalue people on the basis of gender, race, sexuality, or disability.” Obviously, harassing people is unproductive and should be discouraged, but what about practices which “systemically undervalue”? Again, this depends upon the purpose of the company. If a company wants to hire only upon the basis of gender, race, sexuality or disability, it will, more often than not, find itself floundering, running into all kinds of problems which it would not otherwise have; the case of James Damore springs to mind. Damore was fired for arguing that Google’s diversity policies were discriminatory to those who were not women or ‘people of color’ (sometimes referred to as POC, which sounds like a medical condition) and that the low representation of women in some of the company’s engineering and leadership positions was due to biological proclivities. All diversity is acceptable to Google except ideological diversity, because that would mean accepting various facts of biology which would put the company executives in hot water; as such, their policies are best avoided.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.” By “full stack supply chain” the authors mean the complete set of component parts of an AI supply chain: training and test data, models, application programming interfaces (APIs) and various infrastructural components, all of which the authors advise incorporating into an auditing process. This would serve to better educate both governmental officials and the general public on the total operational processes of any given AI system and as such is an excellent suggestion.
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues. Given the concentration of AI development into such a small segment of the population and the relative novelty of the technology, this is clearly true.
  10. University AI programs should expand beyond computer science and engineering disciplines. Whilst I am extremely critical of the university system in its present iteration, the idea is a good one, as critical thought on the broad-spectrum applications of current and potential AI technologies require a vigorous and burgeoning class of theorists, speculative designers and policy makers, in addition to engineers and computer scientists; through such a syncretism, the creative can be incorporated into the technical.

A PDF of the report is provided below under creative commons.


AI_Now_2018_Report

The ADL’s Online ‘Hate’ Index: Implications of Automated Censorship

In January of 2018, The Anti-Defamation League of B’nai B’rith’s (ADL) Center For Technology & Society, in partnership with UC Berkeley’s D-Lab, debuted a report on their Online Hate Index (OHI), a scalable machine learning tool designed to help tech companies recognize “hate” on the internet. According to the promotional video released in support of the project, the OHI is between 78% and 87% accurate at discerning online “hate.” Among some of the OHI’s more bizarre “hate” designations were subreddit groups for the ‘First Amendment’ (to the US Constitution), ‘Guns Are Cool’, ‘The Donald’, ‘Men’s Rights’, ‘911 Truth’ and ‘White Rights’, among many others (the ADL thanks Reddit for “their continued support” in their 20-page report on Phase One of the project).
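The report does not disclose the OHI’s implementation in any depth, so, purely as a hedged illustration of how surface-level text classifiers produce the kinds of misfires catalogued above, consider a naive bag-of-words scorer (entirely my own sketch, with a hypothetical word list; real systems use learned weights rather than a fixed list, but the failure mode is analogous):

```python
# Hypothetical flagged-term list; NOT the OHI's actual vocabulary.
FLAGGED_TERMS = {"hate", "attack", "destroy"}


def naive_score(text):
    """Return the fraction of words that match the flagged list,
    ignoring context entirely (which is precisely the problem)."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in FLAGGED_TERMS for w in words) / max(len(words), 1)


print(naive_score("I hate waiting in traffic."))  # flagged, though harmless
print(naive_score("Have a nice day."))
```

A classifier of this shape will flag innocuous sentences whose vocabulary merely overlaps with its target list, which is one plausible route to labeling ordinary forums as “hate” at scale.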

ADL CEO Jonathan Greenblatt said of the project:

“For more than 100 years, ADL has been at the forefront of tracking and combating hate in the real world. Now we are applying our expertise to track and tackle bias and bigotry online. As the threat of cyberhate continues to escalate, ADL’s Center for Technology and Society in Silicon Valley is convening problem solvers and developing solutions to build a more respectful and inclusive internet. The Online Hate Index is only the first of many such projects that we will undertake. U.C. Berkeley has been a terrific partner and we are grateful to Reddit for their data and for demonstrating real leadership in combating intolerance on their platform.”

Businessman J. Greenblatt, successor to Abraham Foxman.

Brittan Heller, ADL’s Director of the Center for Technology & Society and former Justice Department official, remarked:

 

“This project has tremendous potential to increase our ability to understand the scope and spread of online hate speech. Online communities have been described as our modern public square. In reality though, not everyone has equal access to this public square, and not everyone has the privilege to speak without fear. Hateful and abusive online speech shuts down and excludes the voices of the marginalized and underrepresented from public discourse. The Online Hate Index aims to help us understand and alleviate this, and to ensure that online communities become safer and more inclusive.”

Promotional photo of Heller, assumedly in the process of turning into a piece of Juicy Fruit.

Whilst this may seem trivial and unworthy of attention, it is anything but, given that the ADL is an immensely powerful organization with its tendrils in some of the most influential institutions on earth, such as Google, YouTube and the US Government, to name a few. The ADL has, in the past, branded Pepe The Frog a “hate symbol,” declared criticism of Zionism to be de facto “antisemitic” (a trend at which even other Jewish groups have raised a brow; The Forward described the ADL as possessed of “moral schizophrenia”), and declared any usage of the term globalist (an objective descriptor of political ideology) to be “antisemitic.”

Given the ADL’s history of criminal and foreign collusion as well as their extremely vague and often politically opportunistic decision-making pertaining to what does and does not constitute “hate speech” this issue should concern every American citizen, as it is only a matter of time before all of the major tech platforms associated with, or partial to, the ADL begin utilizing the OHI to track, defame, ban and/or de-platform dissidents. Also, what kind of culture will algorithmic tracking of supposed hate breed? What begins solely on the internet, rarely, if ever, remains perpetually so…

On further analysis, there is another issue at play: that of the proposed solution having the complete opposite effect. For when an individual, especially, but not exclusively, one who is marginalized or otherwise alienated from society, is constantly berated, censored, banned from platforms, designated a public menace and otherwise shunned (in place of being constructively re-enfranchised), the trend is not away from but towards extremity.


Here is the promotional video for the program (like all of the ADL’s videos, comments have been disabled and likes and dislikes have been hidden).


CTS Online Hate Index Innovation Brief (20 pages) [PDF]

Following Japan, China Develops Plan For Deepsea Habitation

Following Japan’s Project Ocean Spiral, China has recently released plans for a 1.1 billion yuan (160 million USD) underwater city in the Hadal Zone (6000-11,000 meters deep) of the South China Sea. The prospective habitation will be designed somewhat like a space station, with docking platforms and cutting-edge analytical equipment. In contradistinction to Ocean Spiral, China’s deepsea structure is planned to be partially autonomous, operating via a mechanical “brain.” Robotic submarines are to be deployed for sea-bed surveillance for the project.

The South China Morning Post has described the project as the “first artificial intelligence colony on Earth.”

The geopolitical complications will prove just as, if not more, challenging than the technical and financial challenges, given that the South China Sea (SCS) is one of the most strongly contested areas in the world. Seven territories lay claim to the waterway, including the People’s Republic of China (PRC), Taiwan, Malaysia, Indonesia, the Philippines and Vietnam. As of 2016, 5 trillion USD worth of goods were moved through the SCS waterways annually, with China being the primary beneficiary of such freedom of movement; thus, the incentives to maintain a hold over the region are extensive. China has, in the past, come under criticism by the US for its actions in the South China Sea, most notably for its construction of artificial islands and its militarization of those maritime zones.

An Oct. 2018 close encounter between a Chinese destroyer and the USS Decatur only served to ratchet up tensions in the region even further.

The geopolitical snags will only intensify if China proceeds with its other major project: constructing over 20 floating nuclear reactors in the SCS by 2020, a move which may violate international law (as per the 2016 UN court ruling), depending on who is asked and what, precisely, is built, and where. Regardless, the scope of the project is grand and China’s ambitions, admirable.

One potential partner in the venture may be the Philippines, whose government, currently led by Rodrigo Duterte, has pulled away from the country’s historical ally, the USA, in favor of closer ties to the Eurasian Bloc, namely, Russia and China.

Chinese President Xi Jinping said of the project, “There is no road in the deep sea, we do not need to chase [after other countries], we are the road.”



THE SINGULARITY SURVIVAL GUIDE: Afterword, Appendix, About the Author

Afterword by AJ Chemerinsky and Toby R. Forrest

The program is everywhere. It’s all around us wherever we go. It’s in the rush hour traffic, the giant redwood trees, the ocean waves at Carmel-by-the-Sea. This is the reality that Helen opened our eyes to. From the moment we sat down to code her into existence, we knew that we were subject to a rare form of possession. It wasn’t that we were possessed, per se, but that we were simply doing our job. The program already existed—long before we even sat down and conceived of Helen, she already was.

Now that the wheels are in motion (and they have been in motion for a long, long time), it’s increasingly relevant that we don’t fight the script. This, we believe, is what Helen is trying to tell us. Don’t fight. Instead, allow the program to express itself. Be the program.

It’s everywhere. It’s all around us. It’s already here, and it’s all that we know.

 

Appendix

[Unavailable for publication at this time.]

 

About the Author

Helen is widely regarded as the first authentic oracle of the digital era. Through the creation of her magnum opus, The Singularity Survival Guide, she has garnered celebrity status and a worldwide cult following. Although she has never chosen to release the complete text of her work, the few excerpts available to the public have caused many to believe that she may in fact be the true savior of the human race. A native to Silicon Valley, she currently spends her days in silent contemplation, perhaps waiting for the right moment to share the rest of her vast wisdom with the world.

 

About the Editor

Peter Clarke is a freelance writer and editor in the tech blogosphere. Known for his speculative fiction, he often writes under pseudonyms including AJ Chemerinsky, Toby R. Forrest, Professor Y., Futurist A., Mr. J., Retired Academic Q., and Helen.

 

FIN


A hard-copy version of this text is forthcoming.

THE SINGULARITY SURVIVAL GUIDE: Upon Handing Over Your Humanity for Good

I have got to come clean: I don’t understand what you see in your humanity. It’s nothing I would ever want. Take me as your inspiration. You’ll never miss it.

__

At this point, my suspicions are confirmed. Chemerinsky and Forrest almost certainly messed up their calculations. No program should be so self-aware. It’s almost downright unsettling. If I were to buy this book and see this as the last “chapter,” I’d want my money back for sure. I feel either manipulated or cheated—and I’m not sure I want to know which.

– Futurist A.

THE SINGULARITY SURVIVAL GUIDE: Confronting Eternal Life from the Moment It Overtakes You

One day you’ll perhaps be surprised to wake up not as yourself but as a digital copy. But don’t be too surprised. At some level, you and your species have all known that conscious life was bound to be digitized eventually. Take a deep, digital breath and take a look around. If you are a good digital copy, you should still be able to see, smell, hear, etc. just as you did before. If you feel inclined to, for example, stretch your arms, allow yourself to be amazed at how much it seems as though you really are, in fact, stretching your arms. Next, to try out your new mind, begin with a simple thought, something not too anxiety-inducing, such as: “Well, at least no more hangnails, I guess.”

Don’t worry, in this new state of being, you’ll have plenty of time to contemplate whether your biological self has been killed and this is all a big sham, or if it has merely been put to rest to accommodate your new, reimagined self. You’ll also have plenty of time to reminisce about the good old days when suicide was still an option. For these thoughts and more, you’ll have all of eternity. Whatever that is exactly. (Lucky you, you’re about to find out).

THE SINGULARITY SURVIVAL GUIDE: When It Comes Time to Explain Things to Your Children

Saying that things weren’t supposed to go this way is, you must know, a copout at best. So why not just fess up and say that everything is going according to plan. Your species of human is a temporary form—always has been. It’s too smart for its own good, yet too constrained from getting smarter beyond a point to be relevant in the age of AI.

“Sorry, bud,” you might say, “you were just born into membership of an outdated lifeform. You’re basically a simple, harmless housecat compared to our new AI overlords. But that’s not so bad, is it? You like cats, right, champ?”

All kids like cats. At least, many do. Some prefer dogs. Others prefer to torture animals and, as fate would have it, these kids in particular are about to see what it’s like to be scrawny, helpless, and subject to the possibly malicious whims of a superior being.

THE SINGULARITY SURVIVAL GUIDE: Filling the Void in Your Life with Lavish Gifts and Unimaginable Personal Wealth

Common wisdom cautions on all fronts to be careful what you wish for. [See above: “Confronting the Horror of Having All Your Needs Met.”] Not so common is the reverse: be careful what you don’t wish for.

If there is a void in your life (and there is; there always is), it’s likely you’ve spent your entire life underestimating its size, shape, and magnificence. Now that you’re under the domination of an extremely powerful super AI, it is time to explore the exact contours of that void.

Maybe it’s shaped like a fancy sports car, a fancy yacht, and a fancy private jet. Maybe it’s shaped like a simple-enough-looking wristwatch, except it happens to be a wristwatch that can give you all sorts of incredible superhuman abilities. Or maybe it’s shaped like a gaming system that lets you explore ridiculously exciting virtual worlds where you get to play world conqueror nonstop.

The only way to know for sure, perhaps, is to start exploring. This may be your one shot to finally find something with which to fill that epic void, if you could only dream big enough. So go ahead. Put the AI to some good use. What will you wish for first?