Synnefo—which focuses on research and design in interstellar development—is now live.
The first article from the site—on future space policy—is available here.
You can follow Synnefo on Twitter here.
In 1967, the French essayist and literary theorist Roland G. Barthes¹ published The Death of the Author (La mort de l’auteur), which takes as its central aim the delegitimization of “the Author” (capitalization his) in favor of the prioritization of “the reader.” Barthes arrives at this point through his belief that textual signification reaches coalescence only through the reader, and never through the author (even though authors are also readers).
The essay’s title (not so much its contents) has become a popular slogan, deployed in literary circles as de facto justification for narcissistic misinterpretation. Under its auspices, if an author expresses something, directly or through a work, which a member of the readership dislikes, that reader may declare the author’s intention void and declare his own “reinterpretation” the valid one, even if the passage is clear in intent, or is clarified directly by the author.
When one declares biographical information and authorial intention irrelevant to a fictive text, it is pertinent to ask why such persons read a particular author at all. Within the framework of “death of the author” theology, all authors are the same, merely another jumble of text which can be extrapolated any which way one pleases. Indeed, one can rightly ask: why read any fiction at all? Why not read an instruction manual or a signpost, which can then be “reinterpreted” so as to make it amenable to the reader’s hermetic and fragile sensibilities?
In the observation of this practice, what one witnesses is not merely the death of authorial intent, but the death of art as a distinctive practice, for art, in any classical sense, can no more exist without authorial intention, and its evaluation, than it can without an audience.
… literature is that neuter, that composite, that oblique into which every subject escapes, the trap where all identity is lost, beginning with the very identity of the body that writes. (Barthes, The Death of the Author)
The pronouncement is made after a digression on Sarrasine, a novella by Balzac concerning a man who falls in love with a castrato disguised as a woman, whom the protagonist describes (obviously incorrectly) as a near-perfect distillation of womanhood. Barthes declares that, in such lines, it is impossible to tell who is speaking: the protagonist, Balzac, “universal wisdom,” or “romantic psychology” (though all of these “voices” would, of necessity, be under Balzac’s direction, Barthes, for a reason never stated, classes them as if they were distinct persons); hence his reference to the ‘oblique’ nature of literature.
Literature is indeed a composite, but it is in no way an “oblique into which every subject escapes” (the subject of desire—for the author, the successful completion and warm reception of his creation—cannot, de jure, vanish into itself), nor a “trap where all identity is lost,” for literary style is every bit as distinctive as a fingerprint (i.e., stylometry). Barthes is correct insofar as he realizes that there can be multiple “voices” within a work, but this in no way invalidates the stewardship of the author or authors (as in a collaborative effort). Indeed, upon the topic, Barthes himself writes “Balzac, speaking of a castrato…” as if he already understands and accepts what he attempts to undermine—that there is but a singular guide (a master “voice”). That admission, itself, undermines the whole premise.
Probably this has always been the case: once an action is recounted, for intransitive ends, and no longer in order to act directly upon reality — that is, finally external to any function but the very exercise of the symbol — this disjunction occurs, the voice loses its origin, the author enters his own death, writing begins. (Barthes)
Intransitivity is a verb property (in distinction to transitivity). Intransitive ends, then, are those which exclude questions of what or whom, confining description instead to the where, when and how—already they are not “external to any function but the very exercise of the symbol.”
Nevertheless, the feeling about this phenomenon has been variable; in primitive societies, narrative is never undertaken by a person, but by a mediator, shaman or speaker, whose “performance” may be admired (that is, his mastery of the narrative code), but not his “genius”…
Narrative is, at its point of origin, always undertaken by a person, regardless of the character or stage of development of the narrator’s society. The reason why those who performed ancestral narratives did not claim (creative narrative) genius is rather obvious: they did not create the stories they communicated, and knew that others would know it. One would not expect to hear a modern playwright adapting Macbeth claim (creative narrative) “genius” in the enterprise, and if one did hear such a pronouncement, he or she would likely be swiftly reproached for it.
…it is logical that with regard to literature it should be positivism, resume and the result of capitalist ideology, which has accorded the greatest importance to the author’s “person.” The author still rules in manuals of literary history, in biographies of writers, in magazine interviews, and even in the awareness of literary men, anxious to unite, by their private journals, their person and their work; the image of literature to be found in contemporary culture is tyrannically centered on the author…
His low opinion of authors is again clear in his encircling of person with quotation marks, as if the “person” of the author were merely an illusion, a view which is more elaborately expressed in Empire of Signs (1982), in which he speaks emphatically of undoing “our own reality” (page 6).
Returning to his essay: if one is speaking of literary history or biographies of writers, then the writer should take center stage (particularly in the latter case). To say that “biographies of writers” should not be “centered on the author” is to say that biographies shouldn’t exist. That, of course, does not mean one should not mention readers’ reactions and the change effected by the public reception of a text; consequently, it may (or may not) be fair to criticize a historical (but not biographical) work concerning literature which sets out to cover a given period comprehensively, yet focuses on an author (or set of authors) at the expense of all else.
Though the Author’s empire is still very powerful (recent criticism has often merely consolidated it), it is evident that for a long time now certain writers have attempted to topple it. In France, Mallarme² was doubtless the first to see and foresee in its full extent the necessity of substituting language itself for the man who hitherto was supposed to own it; for Mallarme, as for us, it is language which speaks, not the author: to write is to reach, through a preexisting impersonality — never to be confused with the castrating objectivity of the realistic novelist — that point where language alone acts, “performs,” and not “oneself”…
There is little to be said of this passage other than that Barthes is confused on the topic of agency. He deploys “speaks” metaphorically, of course, and yet, to consider his example in the literal register proves clarifying—for language cannot act or perform of its own accord any more than an organ can play itself or a candle kindle its own flame.
Barthes then digresses, at considerable length, on a number of writers, including the previously mentioned Mallarme, as well as Proust. I’ll not dwell upon these passages, as they are merely reiterations of his previously stated belief that “language… speaks, not the author,” to which one might sardonically reply, “It is the coconut which uses Amphioctopus marginatus, not Amphioctopus marginatus which uses the coconut.”
… linguistics has just furnished the destruction of the Author with a precious analytic instrument by showing that utterance in its entirety is a void process, which functions perfectly without requiring to be filled by the person of the interlocutors: linguistically, the author is never anything more than the man who writes, just as I is no more than the man who says I: language knows a “subject,” not a “person,” and this subject, void outside of the very utterance which defines it, suffices to make language “work,” that is, to exhaust it.
A void, in any literal sense, would be void of any ‘process.’ One may talk all one likes about what rocks are like in the absence of sensing apparatuses to perceive them, but it doesn’t fundamentally matter, for there would then be nothing for the rocks to matter to. They would be, but they would not, could not, matter. The case is the same with regard to language. Language does not know a ‘subject.’ Language does not know. Language is not an agent.
The absence of the Author (with Brecht, we might speak here of a real ‘alienation:’ the Author diminishing like a tiny figure at the far end of the literary stage) is not only a historical fact or an act of writing: it utterly transforms the modern text (or — what is the same thing — the text is henceforth written and read so that in it, on every level, the Author absents himself). Time, first of all, is no longer the same. The Author, when we believe in him, is always conceived as the past of his own book: the book and the author take their places of their own accord on the same line, cast as a before and an after: the Author is supposed to feed the book — that is, he pre-exists it, thinks, suffers, lives for it; he maintains with his work the same relation of antecedence a father maintains with his child. Quite the contrary, the modern writer (scriptor) is born simultaneously with his text; he is in no way supplied with a being which precedes or transcends his writing, he is in no way the subject of which his book is the predicate; there is no other time than that of the utterance, and every text is eternally written here and now.
One can see how ironic is the authority with which Barthes declares his baseless assertion to be “a historical fact.” The assertion of simultaneity is clearly untrue, for the simple reason that the author must think of what is to be written before he or she writes it.
This is because (or: it follows that) to write can no longer designate an operation of recording, of observing, of representing, of “painting” (as the Classic writers put it), but rather what the linguisticians, following the vocabulary of the Oxford school, call a performative, a rare verbal form (exclusively given to the first person and to the present), in which utterance has no other content than the act by which it is uttered… the modern writer, having buried the Author, can therefore no longer believe, according to the “pathos” of his predecessors, that his hand is too slow for his thought or his passion, and that in consequence, making a law out of necessity, he must accentuate this gap and endlessly “elaborate” his form; for him, on the contrary, his hand, detached from any voice, borne by a pure gesture of inscription (and not of expression), traces a field without origin — or which, at least, has no other origin than language itself, that is, the very thing which ceaselessly questions any origin.
If writing is, as Barthes asserts, “no longer… an operation of recording, of observing, of representing…” then he could in no way record, observe or represent any lack of representation and is saying that he can say nothing.
We know that a text does not consist of a line of words, releasing a single “theological” meaning (the “message” of the Author-God), but is a space of many dimensions, in which are wedded and contested various kinds of writing, no one of which is original: the text is a tissue of citations, resulting from the thousand sources of culture. Like Bouvard and Pecuchet, those eternal copyists, both sublime and comical and whose profound absurdity precisely designates the truth of writing, the writer can only imitate a gesture forever anterior, never original; his only power is to combine the different kinds of writing, to oppose some by others, so as never to sustain himself by just one of them; if he wants to express himself, at least he should know that the internal “thing” he claims to “translate” is itself only a readymade dictionary whose words can be explained (defined) only by other words, and so on ad infinitum… Succeeding the Author, the writer no longer contains within himself passions, humors, sentiments, impressions, but that enormous dictionary, from which he derives a writing which can know no end or halt: life can only imitate the book, and the book itself is only a tissue of signs, a lost, infinitely remote imitation.
A text does consist of a line of words. That is all it consists of. Such a line may contain (and “release”) a single meaning (theological or otherwise) or a multiplicity of meanings; which it contains is dependent upon intention and the presentation and apperception thereof. Misinterpretation in no wise invalidates this fact. For example, if an author writes a particular line with a single purpose, regardless of the interpretations of others, the originary meaning will always remain the same, that is to say, just as intended.
In his domain, the artist is absolute.
From this point he argues that, since writing cannot be truly deciphered (since it can never mean anything definitively), literary criticism must also be done away with (curiously, he did not, after his essay’s completion, tender his resignation from the trade), along with God, reason, science and law.
… criticism (even “new criticism”) should be overthrown along with the Author.
… to refuse to arrest meaning is finally to refuse God and his hypostases, reason, science, the law.
The overthrow of criticism is a natural conclusion of the evacuation of meaning from authorship, and is just as mistaken for the same reasons. His railing against three hypostases in one ousia is shoehorned in suddenly, but isn’t wholly incorrect, for to refuse to “arrest meaning” is indeed to refuse reason, science and law, but it is not to refuse God, for one can easily apply reason, engage in science, and construct and follow law without any belief in providence whatsoever. All of this, however, is far afield of authorship and its supposed demise.
He then returns to Balzac, repeats the lines with which he opened his piece and concludes thusly,
… the true locus of writing is reading.
… a text is not in its origin, it is in its destination; but this destination can no longer be personal: the reader is a man without history, without biography, without psychology; he is only that someone who holds gathered into a single field all the paths of which the text is constituted. This is why it is absurd to hear the new writing condemned in the name of a humanism which hypocritically appoints itself the champion of the reader’s rights. The reader has never been the concern of classical criticism; for it, there is no other man in literature but the one who writes. We are now beginning to be the dupes no longer of such antiphrases, by which our society proudly champions precisely what it dismisses, ignores, smothers or destroys; we know that to restore to writing its future, we must reverse its myth: the birth of the reader must be ransomed by the death of the Author.
One of the reasons why “the death of the author” has become so popular is the high regard it supposedly has for the reader, which is taken up as a rallying cry—Barthes and his acolytes striking out against supposed ivory tower art snobs—and yet, consider his opinion of readers in this last passage. For Barthes, readers are “without biography” and “psychology,” merely a vector for the transmission of signification. This is well in keeping with the rest of the article, but it is completely out of step with contemporary valorizations of the “death of the author.” Further, not only does there not need to be any antipathic bifurcation between authors and readers, there cannot be, for that would be to propose a waltz without a partner, where the loner refuses to box-step and the music plays itself. Or, as Lamos of Films Lie put it,
The death of the Author is also the inability to create, invent, or be original. It is the spinning out of control into the abyss of multiple meanings and inevitable meaninglessness.
A declaration nullified by its very pronouncement.
In closing, I am reminded of a quote by Simon Leys, who, in his essay The Imitation of Our Lord Don Quixote, wrote,
Literary critics do fulfil a very important role… but there seems to be a problem with much contemporary criticism, and especially with a certain type of academic literary criticism. One has the feeling that these critics do not really like literature—they do not enjoy reading. Worse even, if they were actually to enjoy a book, they would suspect it to be frivolous.
¹ Roland Gérard Barthes was a literary critic and scholar of semiotics, educated at the University of Paris, and the author of numerous works including Writing Degree Zero (1968), Empire of Signs (1982) and Criticism and Truth (1987).
² Étienne Mallarmé (pen name: Stéphane Mallarmé) was a French poet and literary critic, a contemporary of Rilke, Yeats and Verlaine. Mallarmé was highly regarded by Huysmans, who praised the poet’s writing extensively in his 1884 novel À rebours.
In 1923, Pablo Picasso, for The Arts: An Illustrated Monthly Magazine Covering All Phases of Ancient and Modern Art, said, “We all know that art is not truth. Art is a lie that makes us realize truth, at least the truth that is given us to understand. The artist must know the manner whereby to convince others of the truthfulness of his lies. If he only shows in his work that he has searched, and re-searched, for the way to put over his lies, he would never accomplish anything.”
The filmmaker Jean Cocteau said something similar in Le Paquet Rouge (Comœdia, 1927), wherein he wrote, “I am a lie that tells the truth.”
On January 9, 2009, at the Rochester Jewish Community Center Book Festival, the journalist and novelist Abraham “Abe” Rothman, said, “Serious fiction is a lie that tells the truth.”
It would be Rothman’s reformulation of the sentiment which would win out over Cocteau’s and Picasso’s, and it has since been widely circulated, in various further permutations, prominently and predictably by men and women of letters (such as John Dufresne, author of The Lie That Tells the Truth).
The persistence and popularity of this adage is curious, given its spuriousness (when taken literally).
All lies are fictions, but not all fictions are lies.
To lie (from the Old English lēogan) is to attempt to convince some person or persons that a thing is true even though one knows it to be false. To lie is to communicate to one’s interlocutor(s) with deceptive intent.
Rothman defined the word at the end of his aforementioned speech, “a lie–that is, a made-up, imagined untrue creation.”
This is not an accurate description of the fiction author, nor is it an accurate description of a lie. An imagined untruth may be a lie, or it may be a mistake, or it may be madness; one cannot say with certainty, as Rothman’s description elides the centrality of intention. Further, it is erroneous to conflate the “made-up” and the “imagined” with the “untrue.” Consider this from Terrance Klein of America Magazine, “Pablo Picasso once famously said that art is a lie that tells a truth. He’s right. There is something artificial about a work of art.”
Klein is, I contend, mistaken.
To explain: all extant houses were imagined (made up) before they were built, and are yet as real as their creators. And so it is for the fiction writer. Huysmans’ Durtal is not a real person, but he is a real fictional person, and is only ever presented as such. Verne’s Nautilus was no more false than a draftsman’s architectural sketch. Yet, given that fiction writing is an art, it is artificial, and not “natural,” which, due to the squamous vitiations of contemporary philosophy, is, as if by providential decree, conflated with falsity, deception and inauthenticity. This despite the fact that the acquisition of truth, of authenticity, requires the intensification and extension of artifice, not its minimization; for one cannot attest to the truth or falsity of a phenomenon which evades the congenital senses without first developing new senses (i.e., telescopes, infrared cameras, Geiger counters…) and concepts to interpret, govern and implement them. Such apparatuses emerge only through the invention, or rediscovery, of grounding and controlling concepts, which fiction, in no small measure, can readily and guilelessly provide.
§00 In a July 29th episode of the New Culture Forum, Peter Whittle engaged in a discussion with the English philosopher, author and perpetual comb-eschewer Sir Roger Scruton on the topic of beauty. It is a fascinating and wide-ranging discussion, covering everything from contemporary art to political censure. One recurring issue caught my attention, however, as deserving of critical attention: Scruton’s characterization of beauty as both fundamental and antithetical to utility. Beauty is a topic to which Scruton has given much careful thought, and so I do not wish to be dismissive of his assertions, but one of his chief contentions is, I shall argue, wrong, and wrong in a very simple way.
§01 For example, he noted, “What I would say is the most important aspect of beauty is that we are at home with it. Even when it shocks us, or challenges us. You just go back to the great challenging works of art, like Picasso’s middle period stuff or the Stravinsky ballets… there, whatever we think about it, we stand back after a while and think, ‘Yes, I can bring this into my life. And it is part of me and it’s inviting me to be part of it and it a part of me.’ That invitation is, I think, essential to our sense of beauty. In ordinary life we’re not aesthetes, most of us, we don’t go around the world looking for the sublime experience that you can get from the Shakespeare sonnets or Tristan and Isolde or whatever, we go around the world wanting to find the places where we could be—places which don’t repel us, which don’t say ‘go away.’ Which, on the contrary, open some kind of inner door, and I think that’s what everyday beauty is like and we’re all able to produce it. When we’re given a room and a bit of furniture, we start arranging it, so it is like that. So that we belong and it belongs. And I think that is what the instinct for beauty is and why it’s absolutely necessary for us and more necessary today than it’s ever been before, precisely because it’s so rare and also because the surrounding world is dominated by a utilitarian culture—everything is conceived in functional terms, as a means to an end. Certainly you see that in architecture. You know, it’s all straightforward, simple engineering devices to perform a particular function but which don’t have any ability to put the surrounding people at ease with them.”
I find Scruton’s description of beauty (and, to a lesser degree, contemporary ‘utilitarian culture’) to be exceptionally deft, but would contend that being “at home” with beauty is its utility. And if this is its utility, then in what way is it beyond or otherwise removed from the functional? Further, what, precisely, is the problem with functional terminology? Everything is, in some sense, functional. Regardless of the abstractness or concreteness of a particular conceptualization, it serves some functional purpose precisely because meaning is noetically confined (i.e., what would it mean to say that rocks are good in and of themselves?). The narrow deployment of “utility,” as expressed in those modernist/postmodernist cultural milieus against whose currents Scruton swims, may very well put prospective or current inhabitants ill at ease, but this is due to a lack of utility for generating a comforting and secure atmosphere, not an overabundance thereof. That is to say, their problem is misapprehending what functions need to be fulfilled.
§02 “The first lesson that you learn when you begin to study the philosophy of beauty is that there is a utility in the useless. That’s what we most need to cultivate. In our own lives as well. We don’t become lovable objects by being useful, although we should lend our help to others and so on. We become lovable by enjoying the world and radiating our appreciation of it. And that is something we look for in buildings too. I always take Paris as an example.”
Here I more strongly disagree, for is not a strong determinant of whether or not one is beloved whether or not one is useful to one’s fellow man? I would contend that this is indeed so. The fashion of one’s usefulness can be manifold, but it cannot be said that any terribly useless man, however radiatingly appreciative, was ever found “lovable.” And so it is with buildings also. What we might say instead is that architecture should reflect the collective dreams and follies and aspirations of its inhabitants, whether transitory or permanent, and in this way might make them feel more “at home.” The quality of concordant ambiance is the hidden function, that which Scruton refers to as the “utility in the useless.” We might more curtly describe this quality as environ’d resonance.
§.00 Artistry is nothing without technicity, for the artist is nothing without his tools. Given that all tools are, at the first, conceptual, the ontological enterprise necessarily subtends both. Philosophy (as mental technicity) determines, by way of an analysis of the haecceity of one’s muse(s) and subject(s), the technical venue(s) by which pertinent qualia may be internally refined and externally expressed (in art).
§.01 What is interesting to me, in light of this realization, is the way in which the artist (and not merely the designer), as a general matter, takes ontologic assertions (the real purpose of art is X but not Y) as givens, without consideration. The horror writer considers the nature of his work, but does not consider the collective, inter-generational enterprise which brought the entire genre into being, and so must fail to apprehend its previous purpose(s), and thus what previously worked in the genre (which linguistic tactics to deploy), even if he has a solid grasp of its present purpose(s). The painter does not generally realize, or at least does not generally remark upon, the fact that his art is based upon a particular privileging of objects (as Futurism privileges speed and the machine); if the issue is raised, such a consideration is likely to be deemed trivial, when it is anything but, as the ordering of objects in a painting is of central importance to the purpose of the painting itself (and there is one, even if it is visible only to one’s muse and thus opaque to the self). And so it is with the author, the sculptor, the illustrator, the actor, the dancer and any other type of artist. Implicit internalization and affirmation of this kind should, if recognized, be given critical reconsideration, for failure to do so can result in a concretized implicit conceptual frame, born of unspoken ontological decision (decision is not, of necessity, truth, and most ‘ontology’ is merely psychological gratification and defense)—what we might call the ancestral decision—which vitiates the very pathways by which one’s desired or considered art would, in its absence, proliferate.
§.00—The Comte de Buffon, in his Discours sur le style, declared, “Style is the man himself.” Schopenhauer echoed the sentiment in The Art of Literature, wherein he wrote, “Style is the physiognomy of the mind.” Which is to say: the philosophy of a designer is imbued in his constructions, whether a book or a building. Thus, the study of a work of art is also a study of its creator’s mind, and the more immediate the apprehension of the style, the more forceful the character.
§.01—A prime example of this can be found amidst the 1934 entries to the architectural competition for the People’s Commissariat of Heavy Industry (Narkomtiazhprom; NKTP) building, the finer examples of which exude order, precision, and sprawling, uncompromising potency. The NKTP was the successor to the VSNKh (which was split into three commissariats in 1932) and initiated the contest against a backdrop of industrial decentralization and administrative specialization. In total, 120 entries were submitted.
The NKTP building was never constructed (some scholars contend the state never intended to see it built, but only to tease out the avant-gardists from the neoclassicists). Had it been built according to specifications, it would have occupied 40,000 square meters of built-out area and 110,000 square meters of usable floor area along the Kitay-gorod (Great Possad) in Central Moscow.
§.02—Below is a small selection of some of the finer entries.
“I consider that the architecture of the Kremlin and St. Basil’s Cathedral should be subordinated to the architecture of the Narkomtiazhprom [Commissariat of Heavy Industry], and that this building itself must occupy the central place in the city.” —Notes to the Narkomtiazhprom competition
The New Space Race
Since 1972, no crewed space mission has gone beyond low Earth orbit.
Now, numerous countries, including but not limited to the US, China, Japan, India and Israel, seek to change that. However, the most prominent efforts promoting space colonization are not coming from governments, but from industrialists.
September 27, SpaceX founder, CEO and lead designer Elon Musk gave a talk titled Making Humans a Multiplanetary Species at the IAC (International Astronautical Congress) on his company’s plans to colonize Mars and the numerous technical challenges entailed by the venture.
May 9, Jeff Bezos of Amazon held a talk for Blue Origin, whereat he discussed the tantalizing prospects of space colonization and what he and his company were doing to advance that cause. The near-hour-long talk was titled Going to Space to Benefit Earth, and covered a considerable amount of ground (as one must when attempting to plot out a rough trajectory for the interstellar future of an entire species).
Bezos talked at length about Earth’s resources, growth versus scarcity, and Blue Origin’s reusable rockets; most interesting (to me), however, was his brisk discourse on O’Neill Colonies (or O’Neill Cylinders). Unlike Musk, whose talk centered on Earth-to-Mars transit for prospective future colonies without remarking upon what those colonies might look like or how they might be built, Bezos waxed more conceptual regarding rough guidelines for potential deep-space colonial habitats.
O’Neill Colonies were developed out of J.D. Bernal’s space colony sphere concept (aptly titled Bernal Spheres) by the American physicist Gerard Kitchen O’Neill, in a series of lectures from 1975 to 1976, and in his 1976 book, The High Frontier: Human Colonies in Space. In brief, an O’Neill Colony was conceived of as a massive cylinder, 5 miles in diameter and 20 miles long, constrained at each end with a bearing system, that would generate artificial gravity by spinning, so as to be maximally conducive to human habitation. Until Bernal and O’Neill, nearly all space colonization discourse was confined to planetary surfaces due, at least in part, to what Isaac Asimov called ‘planetary chauvinism’ (a phrase he borrowed from Carl Sagan).
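For a sense of scale, the spin rate such a cylinder would need to simulate Earth gravity follows from the centripetal relation a = ω²r. Below is a minimal sketch of the arithmetic, assuming only the 5-mile diameter given above and standard surface gravity (the variable names are my own, not O’Neill’s):

```python
import math

G_EARTH = 9.81                     # m/s^2, target "gravity" felt at the inner wall
RADIUS_M = (5 * 1609.344) / 2      # 5-mile diameter cylinder -> radius in meters

# Centripetal acceleration a = omega^2 * r, so omega = sqrt(a / r)
omega = math.sqrt(G_EARTH / RADIUS_M)   # angular velocity in rad/s
period_s = 2 * math.pi / omega          # seconds per full rotation
rpm = 60 / period_s                     # rotations per minute

print(f"angular velocity: {omega:.4f} rad/s")
print(f"rotation period:  {period_s / 60:.1f} minutes")
print(f"spin rate:        {rpm:.2f} rpm")
```

The result is roughly one rotation every two minutes (about half an rpm), comfortably below the 1–2 rpm range often cited as the threshold for rotation-induced motion sickness, which is one reason O’Neill favored such enormous radii.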
The Promise of the Abyss
Whilst the theoretical archive of space habitation is dense, and public support strong, the corresponding archive of oceanic habitation is somewhat thinner, and public support considerably weaker as a consequence, despite the fact that oceanic habitation is now wholly within the realm of technological feasibility. One likely reason why can be found in a line from Mr. Musk’s previously mentioned talk, wherein he noted that,
“Right now, on Earth, you can go anywhere in 24 hours. I mean, anywhere. You can fly over the Antarctic pole and parachute out, 24 hours from now, if you want. You can get parachuted from the top of Mt. Everest — from the right plane. You can go to the bottom of the ocean. […] So, there is no physical frontier on Earth anymore.”
He is correct, as far as the surface of the earth goes (the subterranean is another story entirely), but traversing a frontier and settling a frontier are two very different things. Despite the ease with which a contemporary advanced submersible may traverse the bottom of the ocean, no permanent human settlement has ever been created there.
In terms of intercivilizational development, space colonization is the more important trajectory; this much is incontestable, as, given a sufficiently long timeline, a species-wide extinction event will eventually occur (such as the death of the sun, which would entail the evaporation of all life on Earth). Moving out into the solar system is thus a way to hedge our species’ collective bets for continued existence. That being said, there are a number of promising benefits from oceanic colonization in the short term, including resource extraction, migration alleviation, and scientific and architectural experimentation, all of which have their own knock-on effects (both potentially positive and negative). To further develop concrete plans for oceanic colonization, then, it behooves us to engage in at least a perfunctory cost-benefit analysis, for if the negatives are found to outweigh the positives, no one will want to engage in the project; and if the analysis is not conducted, no one will care, because no one will know. However, if the benefits of mass underwater habitation construction are found to be generally positive, the knowledge thereof will further incentivize those preternaturally exploratory few who would invariably be at the vanguard of any prospective future abyssal ventures.
Bountiful Sanitary Water
The first and most obvious benefit of ocean colonization is that, with a sufficient filtration system, one will never run out of clean water, both for consumption and sanitation. In a deepsea habitation with a reverse osmosis desalination system, the supply of clean water would be endless and energy expenditure, minimal, as (provided sufficient depth) the pressure would perform the majority of the operational heavy-lifting.
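As a rough, assumption-laden estimate of what “sufficient depth” means here: reverse osmosis requires the feed-side pressure to exceed seawater’s osmotic pressure (roughly 27 bar), and hydrostatic pressure grows as ρgh, so one can solve for the break-even depth.

```python
# Back-of-envelope estimate (assumed figures: seawater density
# 1025 kg/m^3, osmotic pressure ~27 bar) of the depth at which ambient
# hydrostatic pressure alone could drive reverse osmosis.
RHO = 1025.0         # kg/m^3, seawater density
G = 9.81             # m/s^2
OSMOTIC_PA = 2.7e6   # ~27 bar, osmotic pressure of seawater

def pressure_at_depth(depth_m: float) -> float:
    """Gauge hydrostatic pressure (Pa) at the given depth."""
    return RHO * G * depth_m

breakeven_depth = OSMOTIC_PA / (RHO * G)   # depth where rho*g*h = osmotic
print(round(breakeven_depth, 1))  # ~268.5 m; practical systems need margin
```

Below roughly 300 meters, then, the ambient sea itself supplies the driving pressure, and pumping energy is needed only to overcome membrane losses.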
High-Yield Aquatic Farming
Crustacean, fish and mollusc farms, in addition to gardens, would be both easy to maintain and able to provide ample, nutritious vittles, both for consumption and exportation. The novelty of deepsea base cuisine itself will, in some quarters, likely be such as to generate considerable demand.
Mineral Mining & Hydrocarbon Extraction
Polymetallic nodules, manganese crusts, metalliferous sulphidic muds and massive consolidated sulphides can all be exploited for metals, whilst submarine phosphorite deposits can be harvested for elemental phosphorus, fertilizer, feed and industrial chemical supplements.
New Sovereignty — Conflict Mitigation
Crowding and demographic diversity engender and intensify inter-tribal adversity, spurring a desire for system exit by those unamenable to assimilation. To alleviate future inter-group conflict by separatist designs or over-capacity migrant flow, new, submarine sovereignties can be created whereby future civil wars (the most bloody, fatal kind of warfare) are mitigated.
Mitigation Of Complications Brought About Via Sea Level Rise
The fear of sea level rise could be wholly mitigated by designing oceanic habitations around the coast, whether above water level, below water level, or as architectures capable of both floatation and submersion whilst sustaining an amenable habitat.
Orienting Design Trajectories Toward Multivariant Domain Mastery
Beyond the immediate aesthetic and material benefits of ocean colonization, the single most important aspect of designing sustainable, durable human submarine habitation is in orienting design towards mastery of the inhospitable. In place of making previously habitable domains more habitable, the ultimate goal of colony design efforts should be to make all spaces habitable — whether that domain is the deep ocean, a distant planet, or the unlit and unpopulated expanse of void-space.
Past and Continuing Attempts At Inverse Arcology
Though ocean colonization has not been as feverishly pursued as space colonization (as can be gathered from the fact that every major industrial nation has a space program, but none has a similar program for sea-floor settlement), there have nonetheless been numerous past and continuing attempts to make the sea humanly habitable. Before we come to the various structures and plans for watery residence, it is important to note that though many of them were not forthright attempts at colonization (widespread, long-term settlement for large populations), there is no intrinsic reason why they could not become such in the future. Every metropolis in the U.S. was once but a scattering of small homesteads. In like fashion, the aquatic demesnes of tomorrow can only arise from gradual, granular development.
The Conshelf I, II and III
In 1962, Conshelf I was set up off Marseilles at a depth of ten meters. The structure measured 5 meters long and 2.5 meters in diameter. Two men, Albert Falco and Claude Wesly, were the first ‘oceanauts’ to live in it, completely underwater, for a week.
In 1963, Conshelf II was deployed; it was designed to function as a small village, built on the floor of the Red Sea at a depth of ten meters. Like Conshelf I, Conshelf II was developed by Jacques-Yves Cousteau in conjunction with the French petrochemical industry.
In 1965, Conshelf III was deployed in the Mediterranean Sea, between Nice and Monaco, at a depth of 330 feet (100 m). Like stations I and II, Conshelf III was intended to function as a proof of concept habitat and pave the way for future designs of deepsea industrial bases.
Sub-Biosphere Project II
Begun in 1998, the Sub-Biosphere project (SBS2) is the brainchild of the London designer and concept artist Phil Pauley, and lays out a potential submersible human habitation. SBS2 consists of eight spheres affixed in a circle to a larger central sphere from which life support is monitored, all of which would function as biomes capable of floating upon or submerging beneath large bodies of water.
The Muraka
The Muraka, which means ‘coral’ in Dhivehi, the language of the Maldives, is an opulent villa located 16.5 feet beneath the waves of the Indian Ocean. Whilst not meant for private residence, it certainly could be used as such, and it shows the aesthetic allure of submarine architecture.
Project Ocean Spiral
“This is a real goal, not a pipe dream. The Astro Boy cartoon character had a mobile phone long before they were actually invented – in the same way, the technology and knowhow we need for this project will become available.” —Shimizu Corp spokesman, Hideo Imamura on project Ocean Spiral
In 2014, the prolific Japanese architectural firm Shimizu Corporation announced plans for Ocean Spiral, a prospective underwater city which would accommodate approximately 5,000 people and draw power from the surrounding water via ocean thermal energy conversion.
It is projected to be operational by 2030.
In 1927, Arthur Stanley Eddington, British Plumian Professor of Astronomy at the University of Cambridge, developed the concept of time’s arrow, which he sought to use to better explicate the asymmetry, or mono-directionality, of time.
One year later, in 1928, Eddington described the concept in his book, The Nature of the Physical World,
“Let us draw an arrow arbitrarily. If as we follow the arrow we find more and more of the random element in the state of the world, then the arrow is pointing towards the future; if the random element decreases the arrow points towards the past. That is the only distinction known to physics. This follows at once if our fundamental contention is admitted that the introduction of randomness is the only thing which cannot be undone. I shall use the phrase ‘time’s arrow’ to express this one-way property of time which has no analogue in space.”
Researchers led by the U.S. Department of Energy’s (DOE) Argonne National Laboratory were, however, able to reverse Eddington’s “one-way property,” through the utilization of a cloud-accessible IBM 4-qubit quantum computer and send a simulated elementary particle back in time 1/1,000,000th of a second. Given that the researchers were working within the confines of a simulation, they obviously did not send a real particle back in time, rather, the computer mapped the dispersal and reversal of a wave.
The authors clarify the process:
“Here we show that, while in nature the complex conjugation needed for time reversal is exponentially improbable, one can design a quantum algorithm that includes complex conjugation and thus reverses a given quantum state. Using this algorithm on an IBM quantum computer enables us to experimentally demonstrate a backward time dynamics for an electron scattered on a two-level impurity.” (G. B. Lesovik, I. A. Sadovskyy, M. V. Suslov, A. V. Lebedev, & V. M. Vinokur, 2019)
This is to say, the qubits were set to function as particles, which were then transformed into a wave; the researchers then discerned a function by which the dispersal of the wave could be reversed, thereby violating the ‘simulated’ laws of physics.
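The conjugation trick the authors describe can be illustrated outside a quantum computer. The sketch below (a toy numpy model of my own, not the authors’ code) disperses a free Gaussian wave packet, complex-conjugates it, and evolves it forward again under the same dynamics; because conjugation inverts the accumulated phase, the packet re-gathers into its initial state.

```python
import numpy as np

# Toy model of time reversal via complex conjugation: a free Gaussian
# wave packet is dispersed, conjugated, then evolved forward again
# under the same dynamics, undoing the dispersal.
N = 512
x = np.linspace(-20, 20, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

psi0 = np.exp(-x**2).astype(complex)           # initial packet (hbar = m = 1)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)  # normalize

def evolve_free(psi, t):
    """Exact free-particle evolution applied in momentum space."""
    return np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi))

spread = evolve_free(psi0, t=5.0)        # the packet disperses
conjugated = np.conj(spread)             # the 'time-reversal' operation
recovered = evolve_free(conjugated, t=5.0)

# Overlap with the initial state; ~1.0 means the dispersal was undone.
fidelity = np.abs(np.sum(np.conj(recovered) * psi0) * dx)
print(round(fidelity, 6))
```

The point of the experiment, of course, is that performing this conjugation on a genuine quantum state (rather than a simulated one) is exponentially improbable in nature, which is why the algorithmic construction was notable.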
Dr. Jerry Chow, Senior Manager of IBM Q Technology, remarked on the program:
“This particular research fits a category of known research that proves you can reverse operations in quantum mechanics…”
Whilst the researchers themselves waxed ambivalent on the practical, real world possibility of time reversal, it seems, at present, theoretically plausible. Whilst popular consciousness turns immediately to thoughts of time travel, in considerations of potential practical applications of time reversal, my own thoughts moved in less flashy directions.
Given that the IBM experiment reversed the flow of time one particle-wave at a time, the effects of this process on a macro-level object could prove fatal for a carbon-based lifeform, as, unless there was profound synchronicity of particle reversal, one could expect subatomic shredding of the test subject (depending, of course, on the scale of the reversal process). In light of these considerations, a better initial use for time reversal technology would be in space defense. Though asteroid-to-Earth impacts are rare, they are regular (along cosmic, not civilizational, timescales). According to Robert Marcus, H. Jay Melosh & Gareth Collins, an asteroid the size of 99942 Apophis (370 meters in diameter) will impact the Earth roughly once every 80,000 years. It should also be noted that an asteroid does not have to directly impact Earth to pose a threat to human settlements; for example, Apophis is predicted to pass 19,400 miles (31,200 kilometres) from Earth on April 13, 2029; no direct impact will be made, but there is potential for indirect impact if a chunk of the object breaks off and makes it through the atmosphere. Further, the deeper our species presses into space, the greater the threat of asteroid impact; thus, space detection and defense systems are indispensable and will become only more so. If a sufficiently large apparatus generating time reversal fields could be strategically deployed, then, theoretically, future civilizations would be able to loop asteroids by throwing them back in time to just before contact with the field, whereupon they would promptly re-strike the field, engendering object-stasis for so long as the theoretical apparatus remained properly maintained.
If the reversal process proves uneven, such that whole-object transition is (as yet) impossible, the technology could still be used to tear a potentially threatening asteroid to pieces by ‘punching’ backward holes in the hazardous object, thereby stretching it out along its own previous trajectory (the arrow’s wake) and thus leaving the portion of the asteroid still tumbling along with our arrow significantly degraded in mass and momentum, so as to be rendered harmless to human habitation.
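The quoted impact rate lends itself to a quick back-of-envelope check. Treating Apophis-scale impacts as a Poisson process with the stated mean interval of 80,000 years (the framing is mine, not the cited authors’), the probability of at least one such impact over a given horizon follows directly:

```python
import math

# Poisson estimate built on the quoted figure of one Apophis-scale
# impact per ~80,000 years on average:
# P(at least one impact in t years) = 1 - e^(-t / 80000).
MEAN_INTERVAL_YEARS = 80_000

def impact_probability(horizon_years: float) -> float:
    """Probability of at least one impact over the given horizon."""
    return 1 - math.exp(-horizon_years / MEAN_INTERVAL_YEARS)

print(round(impact_probability(1_000), 4))   # ~1.2% per millennium
```

Small per-millennium odds, but odds that compound inexorably over civilizational timescales, which is the argument for standing detection and defense infrastructure.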
In technological society, there have been few ideas more poisonous to the general uplift of Man than the notion that the fundamental materiality of existence is invariably devoid of meaning, when, indeed, precisely the opposite is the case.
A popular view: (Naive) nihilism is the only possible outcome of a materialist, matterological or otherwise ‘naturalistic’ ontology.
Naive nihilism is here utilized to distinguish the position ‘there is no meaning’ from cartographic displacement. Cartographic displacement is used, for brevity, as an encapsulation of the system-wide phase-out of previously instrumental mental maps key to the generation and development of human behavior; this phase-out engenders a dearth of epistemological tools, and thus ontological tools, by which to chart out the course of one’s life in a manner concurrent with those essential qualities of the organism that, for sufficient functioning, must be sated, repressed, or exercised.
The proposition (naive nihilism) literally asserted, is self-refuting, for it requires, at the first, the affirmation of meaning to substantiate itself. Meaning subtends the whole of its structure; indeed for the declaration to even be rendered sensible to its generator, it must be inscribed with meaning. To declare that the affirmation of materialism is naive nihilism, is to, at the same time, assert the meaningfulness of the supposedly negative assignation, thus engendering a conceptual paradox.
For meaninglessness to be true, meaninglessness must truly mean something.
This can be substantiated without falling into any kind of lengthy, barbed discussion of the ultimate derivation of meaning itself; it is axiomatic. Either there is meaning (invariant intelligibility given a particular matterological formation), or there is not — in the understanding of this conceptual schema one already demonstrates meaning’s existence and, conversely, the nonexistence of non-meaning, for whatever would the shape of non-meaning be? How would it assume a character when its generation requires conception and its expression, linguistic inscription? This we shall call, for the sake of brevity, the fallacy of the void, which is: the assertion of a true lack of thingness, or, the assertion of the true presence of nothing. The problem with such an assertion is obvious: in the mere identification of that which is not, one has already lost it; that is to say, one has already posited that which is (that which is not). As such, there can be no that which is not without a corresponding that which is.
Thus, if nihilism is a description of one who is unable to generate meaning, nihilism, in this naive formation, is impossible, for it is to say that one is generating a nothing, yet one cannot generate a nothing, only ‘nothing’ as a conceptual placeholder for a space between some number of things. One cannot grasp a geist. One cannot obtain that which is not. There can no more be nothing in terms of value (for a valuer) than nothing in terms of material composition, as value is itself a function of material configurations. Just as darkness is not an absence of material spatiality, but the appearance of a void due to the absence of light, so too are values (conditional functions) present within the organism, regardless of whether or not they are immediately apperceptible thereto. Void is a lack of clarity, not a real presence beyond conception; that is to say: it is real only in the perceptual-conceptual matrix of the observational subject-object.
There is, in short, no nothing. Or rather, every perceptible, conceptual thing is something.
To think of nothing is not to not think. Every act of being is, and is not, not. Thus: Every negation is a positive displacement of another thing-which-is. That is to say, true negation is, in the actualization, true displacement. Thus, it is the displacement of meaning (for some other) which is (or should be) the true referent of the critics of ontological nihilism (which requires no criticism, because it is impossible). Focalism, here, is of key import.
This being said, the circumstance under which one’s milieu’s meanings are insufficiently navigated, excavated, articulated and directed is well capable of arising (and indeed has arisen, is arising, and shall continue to arise); however, the problem, in such an arrangement, is not a void of meaning, but rather an insufficient ability to mediate meaning, to soften its coarse and perpetually undulating folds. To focus upon it.
Mechanical correctives are here appropriate. That which passes as the differentiation between the mechanical and the organic, and as a consequence, the well-navigated milieu of meaning and the ill-navigated milieu of meaning, is a matter only of degrees of configuration (of both specific type, placement, interconnection and complexity) — of architectural specificity. All are expressions of particular configurations of matter, amenable to the laws of the universe, such as they are understood, all are, at every moment, undergoing change in the movement towards new forms (even if that which subtends the forms under interrogation does not, in the change, displace the form itself), whether by intensification or dissipation, which is merely intensification in a different direction than the viewer-mediated one.
Hay farming is a massive and important industry. The advent of vertical farm architecture has sparked numerous conversations on prospective redesigns of the way farming is done, but these conversations tend to focus on plants which are easily grown and harvested indoors, such as tomatoes and lettuce, and not on hay. This is understandable, given that the hay-making process itself presents a number of challenges to vertical integration; however, there are a number of ways that these challenges (namely, harvesting) may be overcome.
The possibility of vertical hay farming opens up a considerable number of opportunities and presents potential solutions to a number of problems endemic to traditional pasture and baling operations. Not only would vertical hay farming offer a more compact and modular model for the agricultural niche, it would also allow for a marked increase in the quality of the hay produced.
§01. Basic design concepts for practical VHF implementation
Hay has traditionally been farmed on open fields; however, it is also possible to grow hay via a vertical-stack arrangement (modular or otherwise), similar to those arrangements conceptually (or actually) deployed in the discourse on general vertical farming. However, the mechanical intensity and complexity of the harvesting process engender a number of complications for vertical-stack integration into existing structures (such as skyscrapers, urban housing tenements or shipping containers). Consequently, it is preferable, in the construction of a vertical hay farm (VHF), to build the facility anew or thoroughly renovate an existing one.
Hay will be grown upon modular vertical stacks […]
[…] with each stack having a bed for grass-planting, over top of which will run a foldable mechanical baler for harvesting, with a catch either following or directly attached (depending on the specifications of the baler itself). The balers and catchers would run on a rail-line and be designed so as to slide out of the attendant hay-layer of the stack, so as to easily extract the bales (round, square, rectangular or otherwise). Alternatively, if whatever catch-mechanism was used to catch the dispatched and compressed bales was extractable, then the baler itself need not be easily detached from its attendant layer in the stack, as, for the purposes of bale acquisition, only the catch-mechanism needs to be removable/stack-detachable. Baling twine (or other materials) can be fed into the balers, either through the rail-line or from an aperture on the layer above the mechanism.
[twine + power system]
[rail-line affixed baler] ⇒ [hay] ⇒ [process] ⇒ [bale] ⇒ [catch] ⇒ [horizontal removal from rail-line] ⇒ [bale extraction + storage] ⇒ [utilization]
[repeat process for all attendant vertical layers]
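The schematic above can be mocked up as a trivial state-machine simulation; everything here (the stage names, the bales-per-pass figure) is a hypothetical illustration of the cycle, not an engineering specification.

```python
# Toy state-machine sketch of the per-layer baling cycle diagrammed
# above; stage names and the bales-per-pass figure are hypothetical.
from dataclasses import dataclass

STAGES = ["grow", "bale", "catch", "extract", "store"]

@dataclass
class Layer:
    id: int
    stage: str = "grow"
    bales: int = 0

def run_cycle(layer: Layer, bales_per_pass: int = 4) -> Layer:
    """Advance one layer through a full harvest cycle, then reset it."""
    for stage in STAGES[1:]:
        layer.stage = stage        # baler pass, catch, extraction, storage
    layer.bales += bales_per_pass
    layer.stage = "grow"           # the layer is replanted for the next cycle
    return layer

# Harvest a five-layer stack once:
stack = [Layer(i) for i in range(5)]
total = sum(run_cycle(layer).bales for layer in stack)
print(total)  # 5 layers x 4 bales = 20
```

The design point the model makes explicit is that each layer's cycle is independent, so layers can be harvested in a staggered rotation for continuous output rather than all at once.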
One of the greatest benefits of a vertical hay farm would be the ability to produce year-round. Winter is the worst time of year for hay; however, in a VHF the conditions are controllable and can be modulated for peak growth regardless of the time of year. Secondarily, due to the increased control over the environment within the VHF, plants will always (barring accidents or human error) be able to be harvested at prime maturity and without the normal problems entailed by the unpredictability of the weather (this is important for numerous reasons, chiefly that weather is the number-one challenge to hay farmers, as hay-grass is very weather-sensitive: if too dry, the hay will be stunted; if too wet, the hay will mold). In addition to the aforementioned boons, VHFs could also greatly reduce the distance, machinery and manpower (and thus costs) involved in transporting the product to customers, by simply being built around or in areas of high hay demand where production was previously unfeasible, impractical, cost-prohibitive, or impossible.
§. In Sean D. Kelly’s A philosopher argues that an AI can’t be an artist, the author, at the outset, declares:
“Creativity is, and always will be, a human endeavour.” (S. D. Kelly)
A bold claim, one which can hardly be rendered sensible without first defining ‘creativity,’ as the author well realizes, writing:
“Creativity is among the most mysterious and impressive achievements of human existence. But what is it?” (Kelly)
The author attempts to answer his selfsame query with the following two paragraphs.
“Creativity is not just novelty. A toddler at the piano may hit a novel sequence of notes, but they’re not, in any meaningful sense, creative. Also, creativity is bounded by history: what counts as creative inspiration in one period or place might be disregarded as ridiculous, stupid, or crazy in another. A community has to accept ideas as good for them to count as creative.
As in Schoenberg’s case, or that of any number of other modern artists, that acceptance need not be universal. It might, indeed, not come for years—sometimes creativity is mistakenly dismissed for generations. But unless an innovation is eventually accepted by some community of practice, it makes little sense to speak of it as creative.” (Kelly)
§. Through Kelly, we have the definition-via-negation that ‘creativity is not just novelty,’ that it is not random, that it is a practice, bounded by history, and that it must be communally accepted. This is an extremely vague definition of creativity; akin to describing transhumanism as “a non-random, sociohistorically bounded practice” which is also “not Nordicism, Aryanism or Scientology.” While such a description is accurate (as transhumanism is not constituted through or by the three aforementioned ideologies), it doesn’t tell one much about what transhumanism is, as it could describe any philosophical system which is not Nordicism, Aryanism or Scientology, just as Kelly’s definition does not tell one much about what creativity is. If one takes the time to define one’s terms, one swiftly realizes that, in contradistinction to the proclamation of the article, creativity is most decidedly not unique to humans (dolphins, monkeys and octopi, for example, exhibit creative behaviors). One may rightly say that human creativity is unique to humans, but not creativity-as-such, and that is a crucial linguistic (and thus conceptual) distinction; especially since the central argument that Kelly is making is that a machine cannot be an artist (he is not making the claim that a machine cannot be creative, per se); thus, a non-negative description of creativity is necessary. To quote The Analects, “If language is not correct, then what is said is not what is meant; if what is said is not what is meant, then what must be done remains undone; if this remains undone, morals and art will deteriorate; if justice goes astray, people will stand about in helpless confusion. Hence there must be no arbitrariness in what is said. This matters above everything” (Arthur Waley, The Analects of Confucius, New York: Alfred A. Knopf, 2000, p. 161).
§. A more rigorous definition of ‘creativity’ may be gleaned from Allison B. Kaufman, Allen E. Butt, James C. Kaufman and Erin C. Colbert-White’s Towards A Neurobiology of Creativity in Nonhuman Animals, wherein they lay out a syncretic definition based upon the findings of 90 scientific research papers on human creativity.
Creativity in humans is defined in a variety of ways. The most prevalent definition (and the one used here) is that a creative act represents something that is different or new and also appropriate to the task at hand (Plucker, Beghetto, & Dow, 2004; Sternberg, 1999; Sternberg, Kaufman, & Pretz, 2002). […]
“Creativity is the interaction among aptitude, process, and environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context” (Plucker et al., 2004, p. 90). [Kaufman et al., 2011, Journal of Comparative Psychology, Vol. 125, No. 3, p.255]
§. This definition is both broadly applicable and congruent with Kelly’s own injunction that creativity is not a mere product of a bundle of novelty-associated behaviors (novelty seeking/recognition), which is true; however, novelty is fundamental to any creative process (human or otherwise). To put it more succinctly: creativity is a novel-incorporative, task-specific, multi-variant neurological function. Thus, argumentum a fortiori, creativity (broadly and generally speaking), just as any other neurological function, can be replicated (or independently actualized in some unknown way). Kelly rightly notes that (human) creativity is socially bounded; again, this is (largely) true; however, whether or not a creative function is accepted as such at a later time is irrelevant to the objective structures which allow such behaviors to arise. That is to say, it does not matter whether or not one is considered ‘creative’ in any particular way, but rather, that one understands how the nervous system generates certain creative behaviors (though it would matter as pertains to considerations of ‘artistry,’ given that the material conditions necessary for artistry to arise require an audience and thus the minimum sociality to instantiate it). I want to make clear that my specific interest here lies not in laying out a case for artificial general intelligence (AGI) of sapient-comparability (or some other), nor even in contesting Kelly’s central claim that a machine intelligence could not become an artist, but rather in making the case that creativity-as-a-function can be generated without an agent.
Creativity is a biomorphic sub-function of intelligence; intelligence is a particular material configuration, thus, when a computer exceeds human capacity in mathematics, it is not self-aware (insofar as we are aware) of its actions (that it is doing math or how), but it is doing math all the same, that is to say, it is functioning intelligently but not ‘acting.’ In the same vein, it should be possible for sufficiently complex systems to function creatively, regardless of whether such systems are aware of the fact. [the Open Worm Project is a compelling example of bio-functionality operating without either prior programming or cognizance]
“Advances in artificial intelligence have led many to speculate that human beings will soon be replaced by machines in every domain, including that of creativity. Ray Kurzweil, a futurist, predicts that by 2029 we will have produced an AI that can pass for an average educated human being. Nick Bostrom, an Oxford philosopher, is more circumspect. He does not give a date but suggests that philosophers and mathematicians defer work on fundamental questions to ‘superintelligent’ successors, which he defines as having ‘intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.’
Both believe that once human-level intelligence is produced in machines, there will be a burst of progress—what Kurzweil calls the ‘singularity’ and Bostrom an ‘intelligence explosion’—in which machines will very quickly supersede us by massive measures in every domain. This will occur, they argue, because superhuman achievement is the same as ordinary human achievement except that all the relevant computations are performed much more quickly, in what Bostrom dubs ‘speed superintelligence.’
So what about the highest level of human achievement—creative innovation? Are our most creative artists and thinkers about to be massively surpassed by machines?
Human creative achievement, because of the way it is socially embedded, will not succumb to advances in artificial intelligence. To say otherwise is to misunderstand both what human beings are and what our creativity amounts to.
This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves.” (Kelly)
§. For Kelly, then, the concern is not that machines will surpass human creative potential, but that we will think that they have after fetishizing them and turning them into sacral objects; deifying them through anthropomorphization and turning them into sites of worship. This is a salient concern; however, the way to obviate such an eventuality (if that is one’s goal) is to understand not just the architecture of the machine but the architecture of creativity itself.
“Also, I am primarily talking about machine advances of the sort seen recently with the current deep-learning paradigm, as well as its computational successors. Other paradigms have governed AI research in the past. These have already failed to realize their promise. Still other paradigms may come in the future, but if we speculate that some notional future AI whose features we cannot meaningfully describe will accomplish wondrous things, that is mythmaking, not reasoned argument about the possibilities of technology.
Creative achievement operates differently in different domains. I cannot offer a complete taxonomy of the different kinds of creativity here, so to make the point I will sketch an argument involving three quite different examples: music, games, and mathematics.
Can we imagine a machine of such superhuman creative ability that it brings about changes in what we understand music to be, as Schoenberg did?
That’s what I claim a machine cannot do. Let’s see why.
Computer music composition systems have existed for quite some time. In 1965, at the age of 17, Kurzweil himself, using a precursor of the pattern recognition systems that characterize deep-learning algorithms today, programmed a computer to compose recognizable music. Variants of this technique are used today. Deep-learning algorithms have been able to take as input a bunch of Bach chorales, for instance, and compose music so characteristic of Bach’s style that it fools even experts into thinking it is original. This is mimicry. It is what an artist does as an apprentice: copy and perfect the style of others instead of working in an authentic, original voice. It is not the kind of musical creativity that we associate with Bach, never mind with Schoenberg’s radical innovation.
So what do we say? Could there be a machine that, like Schoenberg, invents a whole new way of making music? Of course we can imagine, and even make, such a machine. Given an algorithm that modifies its own compositional rules, we could easily produce a machine that makes music as different from what we now consider good music as Schoenberg did then.
But this is where it gets complicated.
We count Schoenberg as a creative innovator not just because he managed to create a new way of composing music but because people could see in it a vision of what the world should be. Schoenberg’s vision involved the spare, clean, efficient minimalism of modernity. His innovation was not just to find a new algorithm for composing music; it was to find a way of thinking about what music is that allows it to speak to what is needed now.
Some might argue that I have raised the bar too high. Am I arguing, they will ask, that a machine needs some mystic, unmeasurable sense of what is socially necessary in order to count as creative? I am not—for two reasons.
First, remember that in proposing a new, mathematical technique for musical composition, Schoenberg changed our understanding of what music is. It is only creativity of this tradition-defying sort that requires some kind of social sensitivity. Had listeners not experienced his technique as capturing the anti-traditionalism at the heart of the radical modernity emerging in early-20th-century Vienna, they might not have heard it as something of aesthetic worth. The point here is that radical creativity is not an “accelerated” version of quotidian creativity. Schoenberg’s achievement is not a faster or better version of the type of creativity demonstrated by Oscar Straus or some other average composer: it’s fundamentally different in kind.” (Kelly)
§. Arnold Schoenberg (1874–1951) was an Austrian-American composer who became well known for his atonal musical stylings. Kelly positions Schoenberg as an exemplar of ‘radical creativity’ and notes that Schoenberg’s achievement is not a faster or better version of the type of creativity demonstrated by the Viennese composer Oscar Straus (1870–1954) or ‘some other average composer: it’s fundamentally different in kind.’ This is true. There are different kinds of creativity (it is an obviously multi-faceted behavioural domain); thus, a general schema of the principal types of creativity is required. In humans, creative action may be “combinational, exploratory, or transformational” (Boden, 2004, chapters 3–4). Combinational creativity (the most easily recognized) involves an uncommon fusion of common ideas; visual collages are a very common example, verbal analogy another. Exploratory and transformational creativity, however, differ from combinational creativity in that they are conceptually bounded within some socially pre-defined space (whereas with combinational creativity the conceptual bounding theoretically extends to all possible knowledge domains and, though it almost always is, need not be extended to the interpersonal). Exploratory creativity involves utilizing preexisting strictures (conventions) to generate novel structures, such as a new sentence, which, whilst novel, will have been constructed within a preexisting structure (i.e., the language in which it is generated). Transformational creativity, in contrast, involves the modulation or creation of new bounding structures which fundamentally change the possibilities of exploratory creativity (i.e., creating a new language and then constructing a sentence in that language which expresses concepts impossible within the constraints of the former language).
Transformational creativity is the most culturally salient of the three, that is to say, the kind most likely to be discussed, precisely because the externalization of transformational creativity (in human societies) mandates the reshaping, dissolution, or obviation of some cultural convention (hence, ‘transformational’). Schoenberg’s acts of musical innovation (such as the creation of the twelve-tone technique) are examples of transformational creativity, whereas his twelve-tone compositions after devising his new musical technique are examples of exploratory and combinational creativity (i.e., laying out a new set of sounds, exploring them, combining and recombining them). In this regard, Kelly is correct; Schoenberg’s musical development is indeed a different kind of creativity than that exhibited by ‘some average composer,’ as an average composer would not initiate a paradigm shift in the way music is made. That being said, this says nothing about whether a machine could enact such shifts itself. One of the central arguments Kelly leverages against transformational machine creativity (the potential for an AI to be an artist) is that intelligent machines presently operate within the bounds of computational formalism. He writes,
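Boden’s tripartite schema can be caricatured in code. The toy Python sketch below (all names and rules are invented purely for illustration, and it is emphatically not a model of actual composition, Schoenberg’s technique least of all) treats a musical convention as a set of permitted notes: combinational creativity fuses familiar motifs, exploratory creativity generates novelty within the existing set, and transformational creativity replaces the set itself.

```python
import random

# A toy caricature of Boden's three kinds of creativity (Boden, 2004),
# with a musical "convention" modelled as a set of permitted notes.
# All rules and names are invented for illustration only.

DIATONIC = ["C", "D", "E", "F", "G", "A", "B"]  # a pre-existing convention

def combinational(motif_a, motif_b):
    # Combinational creativity: an uncommon fusion of two familiar ideas.
    return motif_a + motif_b

def exploratory(convention, length, seed=0):
    # Exploratory creativity: a novel sequence generated *within* an
    # existing rule space (like a new sentence in an existing language).
    rng = random.Random(seed)
    return [rng.choice(convention) for _ in range(length)]

def transformational():
    # Transformational creativity: replace the rule space itself, here by
    # moving from a seven-note scale to the full twelve-note chromatic set,
    # enabling sequences the old convention could not express.
    return ["C", "C#", "D", "D#", "E", "F", "F#",
            "G", "G#", "A", "A#", "B"]

chromatic = transformational()                 # transform the bounding structure
phrase = exploratory(chromatic, 4, seed=1)     # then explore the new space
fused = combinational(phrase, exploratory(DIATONIC, 2, seed=2))
```

The point the sketch makes concrete is the asymmetry in the essay: the `exploratory` and `combinational` moves are cheap once a convention exists, while the `transformational` move changes what the other two can produce at all.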
“Second, my argument is not that the creator’s responsiveness to social necessity must be conscious for the work to meet the standards of genius. I am arguing instead that we must be able to interpret the work as responding that way. It would be a mistake to interpret a machine’s composition as part of such a vision of the world. The argument for this is simple.
Claims like Kurzweil’s that machines can reach human-level intelligence assume that to have a human mind is just to have a human brain that follows some set of computational algorithms—a view called computationalism. But though algorithms can have moral implications, they are not themselves moral agents. We can’t count the monkey at a typewriter who accidentally types out Othello as a great creative playwright. If there is greatness in the product, it is only an accident. We may be able to see a machine’s product as great, but if we know that the output is merely the result of some arbitrary act or algorithmic formalism, we cannot accept it as the expression of a vision for human good.
For this reason, it seems to me, nothing but another human being can properly be understood as a genuinely creative artist. Perhaps AI will someday proceed beyond its computationalist formalism, but that would require a leap that is unimaginable at the moment. We wouldn’t just be looking for new algorithms or procedures that simulate human activity; we would be looking for new materials that are the basis of being human.” (Kelly)
§. It is noteworthy that Kelly’s perspective does not factor in the possibility that task-agnostic, self-modeling machines (see the work of Robert Kwiatkowski and Hod Lipson) could network such that they develop social capabilities. Such creative machine sociality would answer the question of social embeddedness which Kelly poses as a roadblock. Whilst such an arrangement might not appear to us as ‘creativity’ or ‘artistry,’ it would be pertinent to investigate how these hypothetical future machines perceive their own interactions. It may be that future self-imaging thinking machines will look upon our creative endeavours the same way Kelly views the present prospects of their own.
§00. Practical invention following conceptual abstraction | The civilizational significance of literary art lies firstly in model generation and secondarily in the application of the generated model. The preferred format for model dispensation varies (novels, novellas, short stories, manifestos, poems, etc.) but the effect of all great literary works is, at least in one way, the same: the generated concepts of the fictive world are externalized so as to impact the real one through the creation of new cultural milieus or inventions (the latter of which will themselves, if sufficiently applied, generate the former). To illustrate this point, here are twelve examples of literary conceptions which drove practical and significant technical invention.
§01. Creation of the credit card | Everyone knows the credit card; its conceptual inventor, Edward Bellamy, however, is considerably less well known. A college dropout and fiction author, Bellamy prefigured both the modern debit card and the contemporary department store in his 1888 utopian science-fiction novel Looking Backward.
§02. Invention of the TASER | The Tom Swift series contained over 100 novels, one of which, Tom Swift and His Electric Rifle (1911), saw the titular hero traveling to “Darkest Africa.” Interestingly, Swift’s device formed the conceptual basis for the TASER, originally TSER (‘Tom Swift’s Electric Rifle’).
§03. Invention of the modern helicopter | In 1886 Jules Verne published the novel Robur le Conquérant (Robur the Conqueror), also known as The Clipper of the Clouds. The story follows Robur and his airship, the Albatross. It so inspired Igor Sikorsky that it led him to invent his own flying machine: the modern helicopter.
§04. Invention of the open-water submarine | After reading Twenty Thousand Leagues Under the Sea, inventor Simon Lake became enamoured with undersea travel. As a consequence of this newfound passion he designed the Argonaut (completed 1897), the world’s first successful open-water submarine. Jules Verne congratulated him by letter. [The ‘20,000 leagues’ in Jules Verne’s Twenty Thousand Leagues Under the Sea (1870) referred to the total distance traveled whilst under the sea, not the lowest depth to which the Nautilus descended.]
§05. Invention of teleconferencing | In the story In the Year 2889 (1889), Jules Verne wrote of a technology called the ‘phonotelephote,’ which allowed for “the transmission of images by means of sensitive mirrors connected by wires,” conceptually forerunning modern video-conferencing technology.
§06. Origin of the word ‘robot’ | The word ‘robot’ is a relatively new addition to the English language and finds its origin in the play Rossumovi Univerzální Roboti (R.U.R., 1920) by Karel Čapek (1880–1938). The play concerns an industrialist who creates a class of synthetic people called ‘roboti.’
§07. Inspiration for chain-reaction theory | In 1932, British scientists determined how to split the atom. The same year, physicist Leo Szilard read H.G. Wells’ novel The World Set Free (1914), which helped the scientist to understand “what the liberation of atomic energy on a large scale would mean.”
§08. The literary inspiration for the world wide web | Arthur C. Clarke’s short story, Dial F for Frankenstein, was published in Playboy in 1965. The plot concerned a telephone network that becomes sentient. The concept greatly impressed Tim Berners-Lee, who went on to lay the groundwork for the world wide web at CERN.
§09. Creation of geostationary satellites | Between 1942 and 1945, the Venus Equilateral short-story series by George O. Smith (also known by the pen name Wesley Long) was published in Astounding Science Fiction. The stories were the first in popular literature to depict geostationary orbit.
§10. Creation of the waldo/telefactor/remote manipulator | Robert A. Heinlein’s 1942 short story, Waldo, tells the tale of a genius born with crippling physical weakness who fashions mechanical arms to ameliorate his difficulties. The ‘waldo’ (telefactor) of the nuclear industry was named in recognition of Heinlein’s innovative idea.
§11. Invention of the self-replicating program | The sci-fi cyber-thriller The Shockwave Rider (1975) by John Brunner described a self-replicating program that spreads throughout a computer network. In 1982, John Shoch and Jon Hupp of Xerox PARC created the first computer worm (a self-replicating, self-propagating program).
§12. Inspiration for warship combat information centers | In the 1930s and ’40s, the Lensman series of novels by E. E. Smith proved popular with readers for its depiction of the adventures of a fantastical galactic patrol. The Directrix, a command ship featured in the series, directly inspired the creation of warship combat information centers.
Thanks for reading.
If you appreciate our work, you can support us here.