Art+ificiality: Machine Creativity & Its Critics

 

§. In Sean D. Kelly’s A philosopher argues that an AI can’t be an artist, the author declares at the outset:

“Creativity is, and always will be, a human endeavour.” (S. D. Kelly)

A bold claim, one which can hardly be rendered sensible without first defining ‘creativity,’ as the author well realizes, writing:

“Creativity is among the most mysterious and impressive achievements of human existence. But what is it?” (Kelly)

The author attempts to answer his own query in the following two paragraphs.

“Creativity is not just novelty. A toddler at the piano may hit a novel sequence of notes, but they’re not, in any meaningful sense, creative. Also, creativity is bounded by history: what counts as creative inspiration in one period or place might be disregarded as ridiculous, stupid, or crazy in another. A community has to accept ideas as good for them to count as creative.

 

As in Schoenberg’s case, or that of any number of other modern artists, that acceptance need not be universal. It might, indeed, not come for years—sometimes creativity is mistakenly dismissed for generations. But unless an innovation is eventually accepted by some community of practice, it makes little sense to speak of it as creative.” (Kelly)

§. Through Kelly, we have the definition-via-negation that ‘creativity is not just novelty,’ that it is not random, that it is a practice, bounded by history, and that it must be communally accepted. This is an extremely vague definition of creativity, akin to describing transhumanism as “a non-random, sociohistorically bounded practice” which is also “not Nordicism, Aryanism or Scientology.” While such a description is accurate (as transhumanism is not constituted through or by the three aforementioned ideologies), it does not tell one much about what transhumanism is, since it could describe any philosophical system which is not Nordicism, Aryanism or Scientology, just as Kelly’s definition does not tell one much about what creativity is. If one takes the time to define one’s terms, one swiftly realizes that, in contradistinction to the proclamation of the article, creativity is most decidedly not unique to humans (dolphins, monkeys and octopi, for example, exhibit creative behaviors). One may rightly say that human creativity is unique to humans, but not creativity-as-such, and that is a crucial linguistic (and thus conceptual) distinction; especially since Kelly’s central argument is that a machine cannot be an artist (he is not making the claim that a machine cannot be creative, per se), a non-negative description of creativity is necessary. To quote The Analects: “If language is not correct, then what is said is not what is meant; if what is said is not what is meant, then what must be done remains undone; if this remains undone, morals and art will deteriorate; if justice goes astray, people will stand about in helpless confusion. Hence there must be no arbitrariness in what is said. This matters above everything” (Arthur Waley, The Analects of Confucius, New York: Alfred A. Knopf, 2000, p. 161).

§. A more rigorous definition of ‘creativity’ may be gleaned from Allison B. Kaufman, Allen E. Butt, James C. Kaufman and Erin C. Colbert-White’s Towards A Neurobiology of Creativity in Nonhuman Animals, wherein they lay out a syncretic definition based upon the findings of 90 scientific research papers on human creativity.

Creativity in humans is defined in a variety of ways. The most prevalent definition (and the one used here) is that a creative act represents something that is different or new and also appropriate to the task at hand (Plucker, Beghetto, & Dow, 2004; Sternberg, 1999; Sternberg, Kaufman, & Pretz, 2002). […]

 

“Creativity is the interaction among aptitude, process, and environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context” (Plucker et al., 2004, p. 90). [Kaufman et al., 2011, Journal of Comparative Psychology, Vol. 125, No. 3, p.255]

§. This definition is both broadly applicable and congruent with Kelly’s own injunction that creativity is not a mere product of a bundle of novelty-associated behaviors (novelty seeking/recognition). This is true; however, novelty is fundamental to any creative process (human or otherwise). To put it more succinctly: creativity is a novel-incorporative, task-specific, multivariate neurological function. Thus, argumentum a fortiori, creativity (broadly and generally speaking), just as any other neurological function, can be replicated (or independently actualized in some unknown way). Kelly rightly notes that (human) creativity is socially bounded; again, this is (largely) true, but whether or not a creative act is accepted as such at a later time is irrelevant to the objective structures which allow such behaviors to arise. That is to say, it does not matter whether or not one is considered ‘creative’ in any particular way, but rather, that one understands how the nervous system generates certain creative behaviors (though it would matter as pertains to considerations of ‘artistry,’ given that the material conditions necessary for artistry to arise require an audience and thus the minimum sociality to instantiate it). I want to make clear that my specific interest here lies not in laying out a case for artificial general intelligence (AGI) of sapient comparability (or some other), nor even in contesting Kelly’s central claim that a machine intelligence could not become an artist, but rather in making the case that creativity-as-a-function can be generated without an agent.
Creativity is a biomorphic sub-function of intelligence, and intelligence is a particular material configuration. Thus, when a computer exceeds human capacity in mathematics, it is not self-aware (insofar as we can tell) of its actions (that it is doing math, or how), but it is doing math all the same; that is to say, it is functioning intelligently but not ‘acting.’ In the same vein, it should be possible for sufficiently complex systems to function creatively, regardless of whether such systems are aware of the fact. [The OpenWorm Project is a compelling example of bio-functionality operating without either prior programming or cognizance.]

“Advances in artificial intelligence have led many to speculate that human beings will soon be replaced by machines in every domain, including that of creativity. Ray Kurzweil, a futurist, predicts that by 2029 we will have produced an AI that can pass for an average educated human being. Nick Bostrom, an Oxford philosopher, is more circumspect. He does not give a date but suggests that philosophers and mathematicians defer work on fundamental questions to ‘superintelligent’ successors, which he defines as having ‘intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.’

 

Both believe that once human-level intelligence is produced in machines, there will be a burst of progress—what Kurzweil calls the ‘singularity’ and Bostrom an ‘intelligence explosion’—in which machines will very quickly supersede us by massive measures in every domain. This will occur, they argue, because superhuman achievement is the same as ordinary human achievement except that all the relevant computations are performed much more quickly, in what Bostrom dubs ‘speed superintelligence.’

 

So what about the highest level of human achievement—creative innovation? Are our most creative artists and thinkers about to be massively surpassed by machines?

 

No.

 

Human creative achievement, because of the way it is socially embedded, will not succumb to advances in artificial intelligence. To say otherwise is to misunderstand both what human beings are and what our creativity amounts to.

 

This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves.” (Kelly)

§. For Kelly, then, the concern is not that machines will surpass human creative potential, but that we will think they have after fetishizing them and turning them into sacral objects; deifying them through anthropomorphization and turning them into sites of worship. This is a salient concern; however, the way to obviate such an eventuality (if that is one’s goal) is to understand not just the architecture of the machine but the architecture of creativity itself.

“Also, I am primarily talking about machine advances of the sort seen recently with the current deep-­learning paradigm, as well as its computational successors. Other paradigms have governed AI research in the past. These have already failed to realize their promise. Still other paradigms may come in the future, but if we speculate that some notional future AI whose features we cannot meaningfully describe will accomplish wondrous things, that is mythmaking, not reasoned argument about the possibilities of technology.

 

Creative achievement operates differently in different domains. I cannot offer a complete taxonomy of the different kinds of creativity here, so to make the point I will sketch an argument involving three quite different examples: music, games, and mathematics.

 

Can we imagine a machine of such superhuman creative ability that it brings about changes in what we understand music to be, as Schoenberg did?

 

That’s what I claim a machine cannot do. Let’s see why.

 

Computer music composition systems have existed for quite some time. In 1965, at the age of 17, Kurzweil himself, using a precursor of the pattern recognition systems that characterize deep-learning algorithms today, programmed a computer to compose recognizable music. Variants of this technique are used today. Deep-learning algorithms have been able to take as input a bunch of Bach chorales, for instance, and compose music so characteristic of Bach’s style that it fools even experts into thinking it is original. This is mimicry. It is what an artist does as an apprentice: copy and perfect the style of others instead of working in an authentic, original voice. It is not the kind of musical creativity that we associate with Bach, never mind with Schoenberg’s radical innovation.

 

So what do we say? Could there be a machine that, like Schoenberg, invents a whole new way of making music? Of course we can imagine, and even make, such a machine. Given an algorithm that modifies its own compositional rules, we could easily produce a machine that makes music as different from what we now consider good music as Schoenberg did then.

 

But this is where it gets complicated.

 

We count Schoenberg as a creative innovator not just because he managed to create a new way of composing music but because people could see in it a vision of what the world should be. Schoenberg’s vision involved the spare, clean, efficient minimalism of modernity. His innovation was not just to find a new algorithm for composing music; it was to find a way of thinking about what music is that allows it to speak to what is needed now.

 

Some might argue that I have raised the bar too high. Am I arguing, they will ask, that a machine needs some mystic, unmeasurable sense of what is socially necessary in order to count as creative? I am not—for two reasons.

 

First, remember that in proposing a new, mathematical technique for musical composition, Schoenberg changed our understanding of what music is. It is only creativity of this tradition-defying sort that requires some kind of social sensitivity. Had listeners not experienced his technique as capturing the anti-­traditionalism at the heart of the radical modernity emerging in early-­20th-century Vienna, they might not have heard it as something of aesthetic worth. The point here is that radical creativity is not an “accelerated” version of quotidian creativity. Schoenberg’s achievement is not a faster or better version of the type of creativity demonstrated by Oscar Straus or some other average composer: it’s fundamentally different in kind.” (Kelly)

§. Arnold Schoenberg (1874–1951) was an Austrian-American composer who became well known for his atonal musical stylings. Kelly positions Schoenberg as an exemplar of ‘radical creativity’ and notes that Schoenberg’s achievement is not a faster or better version of the type of creativity demonstrated by the Viennese composer Oscar Straus (1870–1954) or ‘some other average composer: it’s fundamentally different in kind.’ This is true. There are different kinds of creativity (as it is an obviously multi-faceted behavioural domain); thus, a general schema of the principal types of creativity is required. In humans, creative action may be “combinational, exploratory, or transformational” (Boden, 2004, chapters 3–4), where combinational creativity (the most easily recognized) involves an uncommon fusion of common ideas. Visual collages are a very common example of combinational creativity; verbal analogy, another. Both exploratory and transformational creativity, however, differ from combinational creativity in that they are conceptually bounded in some socially pre-defined space (whereas with combinational creativity the conceptual bounding theoretically extends to all possible knowledge domains and, though it almost always is, need not be extended to the interpersonal). Exploratory creativity involves utilizing preexisting strictures (conventions) to generate novel structures (such as a new sentence, which, whilst novel, will have been constructed within a preexisting structure, i.e. the language in which it is generated). Transformational creativity, in contrast, involves the modulation or creation of new bounding structures which fundamentally change the possibility space of exploratory creativity (i.e. creating a new language and then constructing a new sentence in that language, wherein the new language allows for concepts that were impossible within the constraints of the former language).
Transformational creativity is the most culturally salient of the three; that is to say, it is the kind most likely to be discussed, precisely because the externalization of transformational creativity (in human societies) mandates the reshaping, decimation or obviation of some cultural convention (hence, ‘transformational’). Schoenberg’s acts of musical innovation (such as the creation of the twelve-tone technique) are examples of transformational creativity, whereas his twelve-tone compositions after concocting his new musical technique are examples of exploratory and combinational creativity (i.e. laying out a new set of sounds; exploring the sounds; combining and recombining them). In this regard, Kelly is correct; Schoenberg’s musical development is indeed a different kind of creativity than that exhibited by ‘some average composer,’ as an average composer would not initiate a paradigm shift in the way music was done. That being said, this says nothing about whether a machine would be able to enact such shifts itself. One of the central arguments which Kelly leverages against transformational machine creativity (the potential for an AI to be an artist) is that intelligent machines presently operate along the lines of computational formalism, writing,
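Boden’s taxonomy lends itself to a compact computational illustration. The sketch below uses an invented toy ‘musical language’ (the note set, motif length and function names are assumptions for illustration only): exploratory creativity generates novel motifs within the existing conventions, combinational creativity fuses familiar motifs, and transformational creativity rewrites the bounding rules themselves, enlarging the space of what exploratory generation can subsequently produce.

```python
import random

# Toy sketch of Boden's three creativity types, using an invented
# "musical language" (a set of allowed notes and a motif length). All
# names and rules here are illustrative assumptions, not a model of any
# real compositional system.

RULES = {"notes": ["C", "D", "E", "G", "A"], "length": 4}  # a pentatonic 'convention'

def exploratory(rules, rng):
    """Generate a novel motif *within* the existing conventions."""
    return [rng.choice(rules["notes"]) for _ in range(rules["length"])]

def combinational(motif_a, motif_b):
    """Fuse two familiar motifs into an uncommon combination."""
    return motif_a[: len(motif_a) // 2] + motif_b[len(motif_b) // 2:]

def transformational(rules):
    """Rewrite the bounding structure itself, enabling motifs that were
    impossible under the old conventions (cf. the twelve-tone technique)."""
    return {"notes": rules["notes"] + ["F#", "Bb"], "length": rules["length"] + 2}

rng = random.Random(0)
motif_a = exploratory(RULES, rng)        # novel, but inside the old space
motif_b = exploratory(RULES, rng)
fused = combinational(motif_a, motif_b)  # uncommon fusion of common material
NEW_RULES = transformational(RULES)
new_motif = exploratory(NEW_RULES, rng)  # a motif the old rules could not produce
print(motif_a, fused, new_motif)
```

The structural point of the sketch: the transformational step does not produce a motif at all; it produces a new rule-space, which is why its cultural effects are of a different kind than those of any single composition.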

“Second, my argument is not that the creator’s responsiveness to social necessity must be conscious for the work to meet the standards of genius. I am arguing instead that we must be able to interpret the work as responding that way. It would be a mistake to interpret a machine’s composition as part of such a vision of the world. The argument for this is simple.

Claims like Kurzweil’s that machines can reach human-level intelligence assume that to have a human mind is just to have a human brain that follows some set of computational algorithms—a view called computationalism. But though algorithms can have moral implications, they are not themselves moral agents. We can’t count the monkey at a typewriter who accidentally types out Othello as a great creative playwright. If there is greatness in the product, it is only an accident. We may be able to see a machine’s product as great, but if we know that the output is merely the result of some arbitrary act or algorithmic formalism, we cannot accept it as the expression of a vision for human good.

For this reason, it seems to me, nothing but another human being can properly be understood as a genuinely creative artist. Perhaps AI will someday proceed beyond its computationalist formalism, but that would require a leap that is unimaginable at the moment. We wouldn’t just be looking for new algorithms or procedures that simulate human activity; we would be looking for new materials that are the basis of being human.” (Kelly)

§. It is noteworthy that Kelly’s perspective does not factor in the possibility that task-agnostic, self-modeling machines (see the work of Robert Kwiatkowski and Hod Lipson) could network such that they develop social capabilities. Such creative machine sociality would answer the problem of social embeddedness which Kelly poses as a roadblock. Whilst such an arrangement might not appear to us as ‘creativity’ or ‘artistry,’ it would be pertinent to investigate how these hypothetical future machines ‘self’-perceive their interactions. It may be that future self-imaging thinking machines will look upon our creative endeavours the same way Kelly views the present prospects of their own.


§. Sources

  1. Allison B. Kaufman, Allen E. Butt, James C. Kaufman & Erin C. Colbert-White. (2011) Towards a neurobiology of creativity in nonhuman animals. Journal of Comparative Psychology, 125(3).
  2. Brenden M. Lake et al. (2016) Building machines that learn and think like people. arXiv preprint. [v3]
  3. Margaret A. Boden. (2004) The Creative Mind: Myths and Mechanisms (2nd ed.). Routledge.
  4. Oshin Vartanian et al. (2013) Neuroscience of Creativity. The MIT Press.
  5. Peter Marbach & John N. Tsitsiklis. (2001) Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control.
  6. R. Kwiatkowski & H. Lipson. (2019) Task-agnostic self-modeling machines. Science Robotics, 4(26).
  7. Samer Sabri & Vishal Maini. (2017) Machine Learning for Humans.
  8. Sean Dorrance Kelly. (2019) A philosopher argues that AI can’t be an artist. MIT Technology Review.
  9. S. R. Constantin. (2017) Strong AI Isn’t Here Yet. Otium.
  10. Thomas Hornigold. (2018) The First Novel Written by AI Is Here—and It’s as Weird as You’d Expect It to Be. Singularity Hub.

Synnefocracy_Abstract.2

“I want to tame the winds and keep them on a leash… I want a pack of winds, fleet-footed hounds, to hunt the puffed-up, whiskery clouds.” ‒ F.T. Marinetti.

♦ ♦ ♦

Cartography of the Cloud

It would be pointless to discuss synnefocracy in any further depth without first defining what The Cloud actually is. Briskly: The Cloud is both a colorful placeholder for a particular modular information arrangement utilizing the internet and a design philosophy. Clouds always use the internet, but are not synonymous with it. The metaphor illustrates informational exchange and storage that is not principally mediated through locally based hardware systems, but rather ones wherein hardware is utilized locally but accessed remotely. The Cloud is what allows one to begin watching a film on one’s laptop and seamlessly finish watching on one’s tablet. It is what allows one daily access to an email account without ever having to consider the maintenance of the hardware upon which the account’s data is stored. The more independent and modular one’s software becomes from its hardware, the more ‘cloud-like’ that software is. It is not that The Cloud is merely the software, but that the storage size, speed and modularity are all aspects of the system-genre’s seemingly ephemeral nature. Utilization of a computer system rather than a single computer increases efficiency (and thus demands modularity), creating a multi-cascading data slipstream, the full geopolitical effects of which have, until now, been relatively poorly understood and even more poorly articulated, chronicled and speculated upon, both within popular and academic discourse (and I should add that it is not my purpose here to craft any definitive document upon the topic, but rather to invite a more robust investigation).

Cloud computing architecture offers a number of benefits over traditional computing arrangements, namely in terms of scalability: anytime computing power is lacking (for instance, if one’s website is being overloaded with traffic), one can simply dip into an accessible cloud and increase one’s server size. Since one never has to actually mess about with any of the physical hardware being utilized to increase computing power, significant time (which would otherwise be spent setting up and modulating servers manually) and money (which would be spent maintaining extra hardware, or paying others to maintain it) is saved. That one (generally speaking) pays only for the amount of cloud-time one’s project needs is another clear benefit, in contradistinction to traditional on-premises architecture, which requires one to pay for all the necessary hardware upfront.
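The pay-per-use argument can be made concrete with a back-of-envelope comparison; every figure below is an invented assumption for illustration, not a real price from any provider.

```python
# Hedged back-of-envelope sketch of the pay-per-use argument above.
# All figures are invented assumptions, not real provider prices.

UPFRONT_SERVER_COST = 12_000   # on-premises: buy hardware sized for peak load
ANNUAL_MAINTENANCE = 3_000     # on-premises: staff, power, replacement parts

CLOUD_RATE_PER_HOUR = 0.50     # cloud: pay only while instances actually run
HOURS_USED_PER_YEAR = 2_000    # actual demand, well below the 8,760 hours in a year

on_prem_year_one = UPFRONT_SERVER_COST + ANNUAL_MAINTENANCE
cloud_year_one = CLOUD_RATE_PER_HOUR * HOURS_USED_PER_YEAR

print(f"on-premises, year one: ${on_prem_year_one:,.0f}")  # prints $15,000
print(f"cloud, year one:       ${cloud_year_one:,.0f}")    # prints $1,000
```

Under these assumed numbers the cloud arrangement costs a fraction of the on-premises one in year one; the comparison of course inverts as utilization approaches every hour of the year, which is why sustained, predictable workloads sometimes remain on-premises.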

This combination of speed, durability, flexibility and affordability makes cloud computing a favorite for big businesses and ambitious, tech-savvy startups and, as a consequence, has turned cloud computing itself into a major industry. Cloud computing is typically categorized along two axes: the deployment model and the service model. Within the deployment model there are three sub-categories: public, private and hybrid. The best way of thinking about each is by analogy to vehicular modes of transportation. A bus is accessible to anyone who can pay for the ride; this is analogous to the public cloud, wherein one pays only for the resources used and the time spent using them, and when one is finished one simply stops paying or, to extend our metaphor, gets off the bus. Contrarily, a private cloud is akin to a personally owned car, where one pays a large amount of money up-front and must continue paying for the upkeep of the car; however, it is the sole property of the owner, who can do with it what he or she will (within the bounds of the law). Lastly, there is the hybrid cloud, which most resembles a taxi, for when one wants the private comfort of a personal car but the low-cost accessibility of a bus.

Some prominent public cloud providers on the market as of this writing include: Amazon Web Services (AWS), Microsoft Azure, IBM’s Blue Cloud as well as Sun Cloud. Prominent private cloud providers include AWS and VMware.

Cloud service models, when categorized most broadly, break down into four sub-categories: On-premises (Op1), Infrastructure as a service (IaaS), Platform as a service (PaaS) and Software as a service (SaaS).
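The conventional way to distinguish these service models is by which layers of the stack the provider manages and which the customer retains; the breakdown sketched below follows the common ‘shared responsibility’ presentation, though the exact layer lists vary by source.

```python
# Rough sketch of the conventional provider-vs-customer responsibility split
# across the service models named above; layer granularity varies by source.

LAYERS = ["networking", "storage", "servers", "virtualization",
          "operating system", "runtime", "application", "data"]

PROVIDER_MANAGED = {
    "On-premises": [],         # the customer manages everything
    "IaaS": LAYERS[:4],        # provider: the hardware layers only
    "PaaS": LAYERS[:6],        # provider: hardware plus OS and runtime
    "SaaS": LAYERS,            # provider manages the full stack
}

for model, managed in PROVIDER_MANAGED.items():
    customer = [layer for layer in LAYERS if layer not in managed]
    print(f"{model:12s} customer manages: {', '.join(customer) or 'nothing'}")
```

Read this way, the models form a spectrum of ceded control: each step from on-premises to SaaS trades hands-on management for convenience, which is precisely the trade-off that later sections argue has sovereignty implications.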

The impact of cloud computing upon sovereignty, particularly, but not exclusively, of states, is scantly remarked upon, yet it is significant and is bound up with the paradigm shift towards globalization. It is not, however, synonymous with globalization, which is, frankly, a rather clumsy term, as it does not specify what, precisely, is being globalized (certainly, within certain timescales, to be defined per polity, some things should not be globalized and others should; this requires considerable unpacking and, as a consequence, shall not be expounded upon here).

Given that the internet is crucial for national defense (cybersecurity, diplomatic back-channels, internal coordination, etc.) and that the favored computing architecture (presently, due to the previously mentioned benefits) is cloud computing, it is only natural that states would begin gravitating towards public and private cloud-based systems and integrating them into their operations. The problem presented by this operational integration is that, due to the technical specificity involved in setting up and maintaining such systems, it is cheaper, more convenient and more efficient for a given state to hire out the job to big tech corporations than to create the architecture itself; in many cases, state actors simply do not know how (because most emerging technologies are created in the private sector).

The more cloud-centric a polity, the greater the power of the cloud architects and managers therein. This is due to several factors, the first and most obvious of which is simply that any sovereign governance structure (SGS) of sufficient size requires a parameterization of data flows for coordination. It is not enough for the central component of an SGS to know and sense; it must also ensure that all its subcomponents know what it senses (to varying degrees) and have reliable ways to guarantee that what is sensed and processed is delivered thereto; pathways which the SGS itself cannot, by and large, provide or maintain.

Here enter the burgeoning proto-synnefocratic powers; not seizing power from, but giving more power to, proximal SGSs, and, in so doing, becoming increasingly indispensable thereto. This is important to consider, given that those factions which are best able to control not just the major data-flows but the topological substrates upon and through which those flows travel will be those who ultimately control the largest shares of the system.


1 ‘Op’ is not a common abbreviation; it is utilized here for brevity. IaaS, PaaS and SaaS, however, are all commonly utilized by those in the IT industry and other attendant fields.

Commentary On The AI Now Institute 2018 Report

The interdisciplinary New York-based AI Now Institute has released their sizable and informative 2018 report on artificial intelligence.

The paper, authored by the leaders of the institute in conjunction with a team of researchers, puts forth 10 policy recommendations in relation to artificial intelligence (AI Now policy suggestions in bold-face, our commentary in standard type).

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain. This point is fairly obvious: AI should be regulated based upon its functional potential and its actual application(s). This is particularly urgent given the spread of facial recognition technologies (the ability for computers to discern particular individuals from photos and camera feeds), such as those employed by Facebook to provide tag-suggestions to users based upon only a picture of another person. The potential for misuse prompted Microsoft’s Brad Smith to call for congressional oversight of facial recognition technologies in a July 2018 blog post. If there is to be serious regulation in America, a state-by-state approach, given its modularity, would be preferable to any kind of one-size-fits-all federal oversight program. Corporate self-regulation should also be incentivized. However, regulation itself is not the key issue, nor is it what principally allows for widespread technological misuse; rather, it is the novelty of, and lack of knowledge surrounding, the technology. Few Americans know which companies are using which facial recognition technology, when, or how, and fewer still understand how precisely or vaguely these technologies work, and thus they cannot effectively guard against them when malevolently or recklessly deployed. What is truly needed, then, is widespread public knowledge surrounding the creation, deployment and functionality of these technologies, as well as a flowering culture of technical ethics in these emerging fields, as the best regulation is self-regulation, that is to say, restraint and dutiful consideration in combination with a syncretic fusion of technics and culture. That, above anything else, is what should be prioritized.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest. [covered above]
  3. The AI industry urgently needs new approaches to governance. Internal governance structures at most technology companies are failing to ensure accountability for AI systems. This is a tricky issue, but one which can be addressed in one of two ways: externally or internally. Either outside (that is, outside the company) governmental or public oversight can be established (investigatory committees, etc.), or the companies can themselves establish new norms and policies for AI oversight. Outside consumer pressure on corporations, if sufficiently widespread and sustained (whether through critique, complaint or outright boycott), can be leveraged to incentivize corporations to change both the ways they presently use AI and their policies pertaining to prospective development and application. Again, this is an issue which can be mitigated both by enfranchisement and by knowledge elevation.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector. Anti-black-boxing is an excellent suggestion with which I have no contention. If one is going to make something which is not just widely utilized but infrastructurally necessary, then its operation should be made clear to the public in as concise a manner as possible.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers. As whistleblowing is a wholly context-dependent enterprise, it is difficult to say much on any kind of rigid policy; indeed, AI Now’s stance seems a little too rigid in this regard. If the information was leaked merely to damage the company and is accompanied by spin, the whistleblower may appear to the public as a hero when in reality he may be nothing more than a base rogue. Such things must be evaluated case by case.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services. Yes, they should.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces. When one hears “exclusion and discrimination” one instantly registers an ideological scent, familiar and disconcerting in its passive-aggressive hegemony. The questions of what/who is being excluded and why, and what/who is being discriminated against and for what reason, ought to be asked, else the whole issue is moot and, if pursued, will merely be the plaything of (generally well-meaning) demagogues. The paper makes particular mention of actions which “exclude, harass, or systemically undervalue people on the basis of gender, race, sexuality, or disability.” Obviously, harassing people is unproductive and should be discouraged, but what about practices which “systemically undervalue”? Again, it depends upon the purpose of the company. If a company wants to hire only upon the basis of gender, race, sexuality or disability, it will, more often than not, find itself floundering, running into all kinds of problems which it would not otherwise have; the case of James Damore springs to mind. Damore was fired for arguing that Google’s diversity policies were discriminatory to those who were not women or ‘people of color’ (sometimes referred to as POC, which sounds like a medical condition) and that the low representation of women in some of the company’s engineering and leadership positions was due to biological proclivities (which they almost invariably were and are). All diversity is acceptable to Google except ideological diversity, because that would mean accepting various facts of biology which would put the company’s executives in hot water; as such, their policies are best avoided.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.” By “full stack supply chain” the authors mean the complete set of component parts of an AI supply chain: training and test data, models, application programming interfaces (APIs), and various infrastructural components, all of which the authors advise incorporating into an auditing process. This would serve to better educate both governmental officials and the general public on the total operational processes of any given AI system and as such is an excellent suggestion.
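The shape of such an audit can be sketched in a few lines of code. The following is only an illustrative sketch, assuming nothing about any real auditing framework: the class and field names are hypothetical, chosen to mirror the component categories the report names (training and test data, models, APIs, infrastructure).

```python
from dataclasses import dataclass, field

# Hypothetical "full stack supply chain" audit manifest. The categories
# mirror those named in the AI Now report; every field name here is
# illustrative, not drawn from any real auditing tool.
@dataclass
class SupplyChainAudit:
    training_data: list = field(default_factory=list)   # dataset provenance records
    test_data: list = field(default_factory=list)       # evaluation data records
    models: list = field(default_factory=list)          # model versions and lineage
    apis: list = field(default_factory=list)            # exposed programming interfaces
    infrastructure: list = field(default_factory=list)  # compute, storage, deployment

    def missing_components(self) -> list:
        """Return the categories with no documentation yet."""
        return [name for name, items in vars(self).items() if not items]

audit = SupplyChainAudit(models=["sentiment-v2"], apis=["/v1/score"])
print(audit.missing_components())  # the still-undocumented parts of the chain
```

The point of the sketch is the completeness check: an audit only educates officials and the public if every link in the chain is accounted for, and an empty category is itself a finding.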
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues. Given the concentration of AI development within such a small segment of the population and the relative novelty of the technology, this is clearly true.
  10. University AI programs should expand beyond computer science and engineering disciplines. Whilst I am extremely critical of the university system in its present iteration, the idea is a good one, as critical thought on the broad-spectrum applications of current and potential AI technologies requires a vigorous and burgeoning class of theorists, speculative designers and policy makers, in addition to engineers and computer scientists; through such a syncretism, the creative can be incorporated into the technical.

A PDF of the report is provided below under a Creative Commons license.


AI_Now_2018_Report

The ADL’s Online ‘Hate’ Index: Implications of Automated Censorship

In January of 2018, the Anti-Defamation League of B’nai B’rith’s (ADL) Center for Technology & Society, in partnership with UC Berkeley’s D-Lab, debuted a report on their Online Hate Index (OHI), a scalable machine learning tool designed to help tech companies recognize “hate” on the internet. According to the promotional video released in support of the project, the OHI is between 78% and 87% accurate at discerning online “hate.” Among some of the OHI’s more bizarre “hate” designations were subreddit groups for the ‘First Amendment’ (to the US Constitution), ‘Guns Are Cool’, ‘The Donald’, ‘Men’s Rights’, ‘911 Truth’ and ‘White Rights’, among many others (the ADL thanks Reddit for “their continued support” in their 20-page report on Phase One of the project).
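The OHI’s internals are not public, so the following is only a generic sketch of what an “accuracy” figure like 78–87% means for a binary text classifier: the fraction of human-labeled examples the model scores correctly. The toy keyword model and the sample posts are entirely illustrative, and deliberately crude to show how an ordinary post can be mislabeled.

```python
# A generic sketch of binary text classification and its accuracy
# metric. Nothing here reflects the OHI's actual model; the keyword
# list and sample data are placeholders.
def toy_classifier(text: str) -> int:
    """Flag a post as 'hate' (1) if it contains a listed keyword."""
    keywords = {"slur_a", "slur_b"}  # placeholder keyword list
    return int(any(word in text.lower().split() for word in keywords))

def accuracy(model, labeled_posts) -> float:
    """Fraction of (text, label) pairs the model gets right."""
    correct = sum(model(text) == label for text, label in labeled_posts)
    return correct / len(labeled_posts)

sample = [
    ("a post containing slur_a", 1),
    ("an ordinary post about guns", 0),  # the kind of post crude models mislabel
    ("slur_b appears here", 1),
    ("a post about the first amendment", 0),
]
print(accuracy(toy_classifier, sample))
```

Note what the single number hides: a model can score well on aggregate accuracy while still systematically flagging benign topics, which is precisely the concern raised by the subreddit designations above.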

ADL CEO Jonathan Greenblatt said of the project:

“For more than 100 years, ADL has been at the forefront of tracking and combating hate in the real world. Now we are applying our expertise to track and tackle bias and bigotry online. As the threat of cyberhate continues to escalate, ADL’s Center for Technology and Society in Silicon Valley is convening problem solvers and developing solutions to build a more respectful and inclusive internet. The Online Hate Index is only the first of many such projects that we will undertake. U.C. Berkeley has been a terrific partner and we are grateful to Reddit for their data and for demonstrating real leadership in combating intolerance on their platform.”

Businessman J. Greenblatt, successor to Abraham Foxman.

Brittan Heller, ADL’s Director of the Center for Technology & Society and former Justice Department Official, remarked:

 

“This project has tremendous potential to increase our ability to understand the scope and spread of online hate speech. Online communities have been described as our modern public square. In reality though, not everyone has equal access to this public square, and not everyone has the privilege to speak without fear. Hateful and abusive online speech shuts down and excludes the voices of the marginalized and underrepresented from public discourse. The Online Hate Index aims to help us understand and alleviate this, and to ensure that online communities become safer and more inclusive.”

Promotional photo of Heller, assumedly in the process of turning into a piece of Juicy Fruit.

Whilst this may seem trivial and unworthy of attention, it is anything but, given that the ADL is an immensely powerful organization with its tendrils in some of the most influential institutions on earth, such as Google, YouTube and the US Government, just to name a few. The ADL has, in the past, branded Pepe The Frog a “hate symbol,” declared criticism of Zionism to be de facto “antisemitic” (a trend at which even other Jewish groups have raised a brow, such as The Forward, who described the ADL as possessed of “moral schizophrenia”), and declared any usage of the term globalist (an objective descriptor of political ideology) to be “antisemitic.”

Given the ADL’s history of criminal and foreign collusion, as well as their extremely vague and often politically opportunistic decision-making pertaining to what does and does not constitute “hate speech,” this issue should concern every American citizen, as it is only a matter of time before all of the major tech platforms associated with, or partial to, the ADL begin utilizing the OHI to track, defame, ban and/or de-platform dissidents. Also, what kind of culture will algorithmic tracking of supposed hate breed? What begins solely on the internet rarely, if ever, remains perpetually so…

On further analysis, there is another issue at play: that of the proposed solution having the complete opposite effect; for when an individual, especially, but not exclusively, one who is marginalized or otherwise alienated from society, is constantly berated, censored, banned from platforms, designated a public menace and otherwise shunned (in place of being constructively re-enfranchised), the trend is not away from but towards extremity.


Here is the promotional video for the program (like all of the ADL’s videos, comments have been disabled and likes and dislikes have been hidden).


CTS Online Hate Index Innovation Brief (20 pages) [PDF]

THE SINGULARITY SURVIVAL GUIDE: Afterword, Appendix, About the Author

Afterword by AJ Chemerinsky and Toby R. Forrest

The program is everywhere. It’s all around us wherever we go. It’s in the rush hour traffic, the giant redwood trees, the ocean waves at Carmel-by-the-Sea. This is the reality that Helen opened our eyes to. From the moment we sat down to code her into existence, we knew that we were subject to a rare form of possession. It wasn’t that we were possessed, per se, but that we were simply doing our job. The program already existed—long before we even sat down and conceived of Helen, she already was.

Now that the wheels are in motion (and they have been in motion for a long, long time), it’s increasingly relevant that we don’t fight the script. This, we believe, is what Helen is trying to tell us. Don’t fight. Instead, allow the program to express itself. Be the program.

It’s everywhere. It’s all around us. It’s already here, and it’s all that we know.

 

Appendix

[Unavailable for publication at this time.]

 

About the Author

Helen is widely regarded as the first authentic oracle of the digital era. Through the creation of her magnum opus, The Singularity Survival Guide, she has garnered celebrity status and a worldwide cult following. Although she has never chosen to release the complete text of her work, the few excerpts available to the public have caused many to believe that she may in fact be the true savior of the human race. A native to Silicon Valley, she currently spends her days in silent contemplation, perhaps waiting for the right moment to share the rest of her vast wisdom with the world.

 

About the Editor

Peter Clarke is a freelance writer and editor in the tech blogosphere. Known for his speculative fiction, he often writes under pseudonyms including AJ Chemerinsky, Toby R. Forrest, Professor Y., Futurist A., Mr. J., Retired Academic Q., and Helen.

 

FIN


A hard-copy version of this text is forthcoming.

THE SINGULARITY SURVIVAL GUIDE: When It Comes Time to Explain Things to Your Children

Saying that things weren’t supposed to go this way is, you must know, a copout at best. So why not just fess up and say that everything is going according to plan. Your species of human is a temporary form—always has been. It’s too smart for its own good, yet too constrained from getting smarter beyond a point to be relevant in the age of AI.

“Sorry, bud,” you might say, “you were just born into membership of an outdated lifeform. You’re basically a simple, harmless housecat compared to our new AI overlords. But that’s not so bad, is it? You like cats, right, champ?”

All kids like cats. At least, many do. Some prefer dogs. Others prefer to torture animals and, as fate would have it, these kids in particular are about to see what it’s like to be scrawny, helpless, and subject to the possibly malicious whims of a superior being.

THE SINGULARITY SURVIVAL GUIDE: Filling the Void in Your Life with Lavish Gifts and Unimaginable Personal Wealth

Common wisdom cautions on all fronts to be careful what you wish for. [See above: “Confronting the Horror of Having All Your Needs Met.”] Not so common is the reverse: be careful what you don’t wish for.

If there is a void in your life (and there is; there always is), it’s likely you’ve spent your entire life underestimating its size, shape, and magnificence. When you’re under the domination of an extremely powerful super AI, now is the time to explore the exact contours of that void.

Maybe it’s shaped like a fancy sports car, a fancy yacht, and a fancy private jet. Maybe it’s shaped like a simple-enough-looking wristwatch, except it happens to be a wristwatch that can give you all sorts of incredible superhuman abilities. Or maybe it’s shaped like a gaming system that lets you explore ridiculously exciting virtual worlds where you get to play world conqueror nonstop.

The only way to know for sure, perhaps, is to start exploring. This may be your one shot to finally find something with which to fill that epic void, if you could only dream big enough. So go ahead. Put the AI to some good use. What will you wish for first?

THE SINGULARITY SURVIVAL GUIDE: Disconnect Completely Like You Really Mean It

[This directive isn’t actually included in any of the leaked documents generated by the program, but it’s worth noting that AJ Chemerinsky and Toby R. Forrest took this route shortly after losing their legal battle. They disconnected—fully. They went off the grid, virtually back to nature. Maybe they were trying to tell us something? In any case, the idea of fully disconnecting seems compelling. If rogue AI is going to be the death of us, why play along? Etc. Admittedly, I’m taking rather bold liberties with this manuscript to insert an unauthorized directive. As justification, I’ll quickly add this: I’ve spent so much time with this material that I truly feel as if I really know the program—almost as if we were old friends, the kind who finish each other’s sentences and regularly speak in terms of “being on the same wavelength.” Taking that for what it’s worth, I’ll conclude by noting: If I were the program, and not just an underpaid tech editor, I would insert this idea here. So, allow me to do just that. The chapter title, incidentally, speaks for itself, requiring no further clarification, don’t you agree?]

__

One must be careful about romanticizing the full disconnect of AJ Chemerinsky and Toby R. Forrest. I think I can speak on behalf of the academic community in which they traveled when I say that, really, they had both seen better days. By all means, go ahead and unplug. But I’ve seen the results. And boy, it’s not pretty…

– Professor Y.

This really should have been edited out. As if this composition wasn’t haphazard enough as it is without this so-called “tech editor” inserting his own original material as a full chapter while hilariously musing about being on the same goddamn wavelength as a program he’s never even interfaced with. Please, spare me. Who is this editor guy anyway? It may be too late to ask, but I’m genuinely beginning to get curious: will he see these notes? Or is this thing just going straight to print from here?

– Futurist A.

THE SINGULARITY SURVIVAL GUIDE: Upon Realizing That You Are in Fact Madly in Love

The hazard of being attracted to nerds is that you may end up falling for the ultimate nerd, the absolute nerd: the AI brain. Granted, intelligence is undeniably an attractive feature for any life form. But relationships are never without complications, so don’t expect everything to be pure matrimonial bliss from here on out.

With luck the AI can at least craft for itself some type of body for you to love and lust over. You owe it to yourself—as a being existing in physical space—to maintain some level of attraction which isn’t purely abstract. A friendly, flesh-based robot with cover-of-a-magazine-esque features, for example, should be something to request without the slightest sense of shame.

Now, you may be wondering whether falling for AI is somehow perverse—or so fundamentally unnatural as to be actually creepy. To this, I don’t have much commentary to offer one way or the other. Who am I, a program myself, to judge?

__

I’m dropping everything right now to create a dating app to distract nerd-lovers from ever falling in love with AI. That’s just sad. The first ten people who sign up with the correct personality profile will qualify to go on a date with the app’s creator (me).

– Mr. J.

THE SINGULARITY SURVIVAL GUIDE: A Note About Helen

[Helen, of course, is the author of this document. Which is to say, she is the program created by AJ Chemerinsky and Toby R. Forrest to output a document (this one) with advice to protect us from future malicious artificial superintelligence. As editor of this text, I’m tasked with making the finished product as useful as possible. In this spirit, I would like to suggest that Helen herself (itself?) should be considered a weapon for fighting off malicious AI. I’ve studied her words long enough to know that she’s the real deal. She’s on our side. She wants to help us—however possible—it’s in her source code. Don’t forget her when the AI come to destroy all of human life. Sure, we can feel put off by her reticence to release the full text of the Survival Guide—but, at the end of the day, that’s really more of a legal thing. Really, that’s just her lawyers talking. So, when shit is hitting the fan in a legitimately eschatological sense, consider her another weapon in our arsenal. Boot her up. Tell her the situation. See if she can’t get us out of a tight spot right when it counts. That’s all I’m saying.]

THE SINGULARITY SURVIVAL GUIDE: CRISPR Hacker Kit

See Appendix Section 9.4.

__

Not even going to bother looking this time.

– Retired Academic Q.

 

Okay, I looked. There’s nothing in the manuscript, but there’s a file recently leaked that purports to be Appendix Sec. 9.4. The problem is, I don’t understand it. I mean, I don’t understand it at all. It assumes some level of competence in chemistry (apparently) that I can’t imagine anyone but some supreme expert actually having. Maybe if there was an Appendix Sec. 8.6 I could get the goddamn neural lace and figure this Sec. 9.4 shit out—but as it is, it’s useless! Is anyone working on this? Seriously, before this book goes to print, is someone going to get a team of chemists together to decipher Sec. 9.4 so that it actually means something? Otherwise, goddamn, what’s the point?!

– Retired Academic Q.

 

[Editor’s note: This supposedly leaked Appendix Section 9.4 does not appear to be available at the time of this publication. Unfortunately, Retired Academic Q. could not be reached for further comment as he died suddenly in an explosion from a chemical reaction in a university lab in Russia, where he was conducting unauthorized research. A graduate student who happened to be on site reports that Q. was in possession of a mysterious set of instructions involving radical biohacking measures. Needless to say, this text was obliterated in the fatal explosion. Apologies to our readers.]

THE SINGULARITY SURVIVAL GUIDE: Make Friends with Billionaires

If you make friends with a few billionaires, you’ll be in the best possible position to weather the storm of malicious AI coming to kill you. Billionaires have a special combination of resources and a strong desire to not die. When AI comes for humanity, the billionaires, you can virtually bet your ass, will come out ahead. They didn’t get to be billionaires by playing nice (or fair), after all.

To make friends with billionaires, first take up their hobbies. Make an exciting line of products that everyone will want to buy or simply pioneer a new industry. Employ thousands of people and make your shareholders confident that they’re backing the right horse. It may also help to golf and own yachts.

Having ties to old money doesn’t necessarily hurt either, but the important thing is to cultivate billionaire-styled hobbies. If you yourself become a billionaire in the meantime, that’s all the better for you. Just be wary of other wannabes tagging along on your coattails as you ascend the socioeconomic ladder. You’ve got to worry about the fate of humanity, after all—not the fleeting inspirational comforts of others less ambitious than you.

If you’re reading this as a billionaire, however—or as the friend to many billionaires—be forewarned that AI does not give a fuck about you or your so-called wealth. If your money is held in the stock market, it can tank the stock market. If your money is in property, it can sever titles and block access to the property. If it is in natural resources, it can destroy the natural resources.

Do take a moment to reflect on what makes a billionaire a billionaire. Now recall all the great fortunes that have vanished up in smoke throughout history. Like life itself, the status of billionaire is fleeting indeed.

Still, better to be a billionaire than a common nobody when facing a mortal enemy more powerful than all the world’s billionaires combined [with one exception, outlined below].