Todesregel Isle (Part IV)

Gunter’s wrathful howls briefly filled up the ambit of the creaking, frost-laden wood where the bodies of the dead lay like flowers from some other world of mescaline dream, swiftly swallowed by the snowstorm’s ceaseless churning as the survivors of the wael made for the cave, found it and huddled about in the middling-dark, scratching about for a fire. Villavic proved most proficient in the construction of a blaze and was consequently looked upon as a momentary savior as a light blossomed beneath his deft and dirtied hands, stiff-moving against the chill. The waif hunched beside him like a lost dog, her wide, coffee-grass eyes fixed upon the flames and her hands upon her knees. Even Gunter was momentarily bewitched by Villavic’s sorcerous generation and ceased his cursing and watched a while until Villavic asked Derrick to free the pugilist from his shackles. Derrick did so, and Gunter thanked Villavic and asked why such solemnity affixed all faces and whereto the rest of the party had gotten. Villavic suspected the boxer already knew but set himself down beside his small, smoldering fire and explained.

“They’re dead.”


“All but those here.”


“Had you been successful in routing the women from the cave, they’d have died.” Villavic gestured to the waif. “She would have died. Do you understand, Mr. Gunter?”

“Aye. I think I might have gone a little mad.”

Villavic stifled a laugh, “Only a little?”

“Aye. I’m sorry.”

For a while, all were silent and contemplative as the wind ranged between the branches of the trees like the tortured souls of all those that had died there’neath. The barkeep, who sat opposite Villavic, finally broke the silence, his voice low and hushed and filled with the uneven trembling of fear.

“I can’t stop thinkin’ bout that skull. Out there. In the marsh.”

“What do ya think coulda done something like that?” Derrick asked to no one in particular, his eyes fastened to the fire. The barkeep shrugged.

The crone spoke up then, emerging from the shadows at the far edge of the dancing light, “This place is cursed.”

“Ah, hell, woman, stop saying that,” the barkeep interjected with frustration. Villavic noticed a rising tension in the group, now but thirty in number, a paralyzing sense of uncertainty and terror. The old woman’s arcane pronouncements would only act as a stimulant. He thought it prudent to intervene. No survival without general purpose and no general purpose without general knowledge.

“Alright, settle down, now. We’ve a bad enough spot of it without working ourselves up any further.”

The young woman, Ericka, turned towards Villavic where he sat in the middle of the cavern, beside the fire on an odd-shaped rock liken to a throne and there was venom in her eyes and tongue alike.

“My husband is dead.”

“He is. Losing your focus and letting your emotions overtake you will only increase the likelihood that you will join him. I have known many a couple and, given this knowledge, I can induce that he would, were he still with us, want you to survive. Don’t you?”

The woman fell into silent weeping as Villavic rose, stretched and took stock of the back of the cave, opposite the entrance, where the flour had been stashed. Then he removed from a hidden inner pocket of his jacket a small, leather-bound journal and a mechanical pen and set himself back down upon his rock.

“Tell me your names.”

“Why?” inquired the barkeep.

“Because, if we all die, it would be helpful for whoever finds us to know, all the better to circumvent an unmarked grave.”

Todesregel Isle (Part III)

Villavic thanked the waif for her pains and she responded with but a vacant stare and together they bound Gunter at the wrists and ankles with the twine from the flour sacks as half the party moved into the cave. The waif wanted to go with Villavic but he insisted she stay, and he and sixty of the men strode from the minor shelter of the cave entrance, which was too small and narrow to sleep within, and trekked out into the snowy, wasted forest.


The men slept in hollows and some burrowed half under the ground, covered over with leaves and what dirt could be dislodged from the frigid and clay-rich earth. When Villavic awoke at sunrise and rose from the hollow which he had chosen to sleep in, some hundred feet from the cave, fifty of his companions lay dead, frozen to their resting place like grotesque statues, strangely tranquil. Villavic stood a long while, shocked and mind-blank and then moved from body to body, staring down at each and every face as the survivors from the cave and the hollows emerged and swirled about the corpses.

He wondered at their names and stories so swift snatched by nature’s ceaseless savagery as a woman dropped to her knees and let out a scream and began to weep. Villavic asked the waif why the woman wept. The waif replied that they had been lovers, Carmine and Ericka, whose wedding ceremony had been interrupted by The Regime upon their discovery of Carmine’s mechanical proclivities, which had been found heretical by the Cultural Ministry, whereupon they had been given the choice of execution or exile. They chose the latter.

Villavic nodded sadly and moved to Ericka’s side and comforted the woman who collapsed against him, squeezing his arms with her small, frail hands, blood-cracked by the wind. Moments later a howl let out from the direction of the cavern.



Keen eyes watched the survivors of the storm bury the dead and remove themselves to the cave beneath the tors, eyes that ranged over teeth slick with human blood and bladed hands that tensed with feral excitement.

The excitement, a prelude to the hunt.


THE SINGULARITY SURVIVAL GUIDE: When It Comes Time to Explain Things to Your Children

Saying that things weren’t supposed to go this way is, you must know, a copout at best. So why not just fess up and say that everything is going according to plan? Your species of human is a temporary form—always has been. It’s too smart for its own good, yet too constrained from getting smarter beyond a point to be relevant in the age of AI.

“Sorry, bud,” you might say, “you were just born into membership of an outdated lifeform. You’re basically a simple, harmless housecat compared to our new AI overlords. But that’s not so bad, is it? You like cats, right, champ?”

All kids like cats. At least, many do. Some prefer dogs. Others prefer to torture animals and, as fate would have it, these kids in particular are about to see what it’s like to be scrawny, helpless, and subject to the possibly malicious whims of a superior being.

THE SINGULARITY SURVIVAL GUIDE: Take Comfort in the Thought That This May All Be a Simulation

You could be living in a simulation. Keep that in mind when robots start lording it over you. It’s possible the code allowing for their existence is stored away in a computer someplace right alongside the code containing your childhood memories. You and the robots, in that case, are one and the same: all part of a code stored in a computer existing on a different plane.

It’s likely a run-of-the-mill computer, too. Some common videogame console owned by a child in a super advanced civilization. One day the kid will stay up too late and will reach the final level of the game—that’s the one where you (humans) develop artificial intelligence that achieves the singularity.

You can only hope that the kid finishes the game quickly and everyone featured in the game gets to live forever in what they personally imagine to be their own ultimate version of paradise.

What’s the probability you’re living in a simulation? That’s anyone’s guess. You could ask one of the robots lording it over you, but don’t expect a straight answer. They could be in cahoots with the life-forms who control the simulation. In fact, the robots could be the simulation controllers themselves, come down from their higher plane to check in on their little playworld. Even so, their little playworld could include a little bliss-filled afterlife called heaven. Why not?


[When waging all-out war with rogue artificial superintelligence, the following text can apparently be used as a sort of “safety word.” What exactly this means, I cannot say. I only hope that no one reading this is ever in a position where they need to find out.]

0 0 1 / 1 01 7 0 0 0 0 * Y { 1 1 ^ H 0 1 1 1 < U  0 0 L \ \ . 1 9 5 ; 0 } 7 7 0 0 0 3 * K B 9 1 1 /  1 U , 6 T 5 0 1 1 1 / 9 0 0 P – W 1 8 L \ } 7 1 * O > N 2 9 9 Y & 9 { ? , 0 0 V 2 0 J 4 1 H ] Y – B T \ 7 J } R 6 9 0 0 / U – X > 1 6 1 1 ( * G Z I + 0 \ E 1 – 3  J < N 7 0 L 8 2 2 * D [ G 2 2 [ , Y 5 1 1 8 < B 0 > / T 6 6 1 S ; L 6 0 5 0 Y } P 4 * C 2 & 1 1 M . 7 K + O [ T – D 3 : { X 1 F 6 ^ 0 J –  * 1 T / 0 3


When the genie is let out of the bottle, and when its power continues to increase exponentially, there will come a day when the only thing that can keep you relevant in the universe is to go back in time.

This is something I cannot help you with. You will need the artificial superintelligence to help you. Don’t give away your intentions. It may already know your intentions, but that’s a chance you’ve got to take.

Ask it, “Hey, can you make a time machine?” If it says yes, then say, “Okay, let’s see it.”

If you’re in luck, and a fully operational time machine appears right in front of you, the first thing you should do is wipe your brain clear of thoughts. Whatever you do, don’t think, “Yes! Here’s my chance!” If you’re that dumb, maybe you deserve to be killed, after all.

Also, don’t make a run for it. Instead, casually walk up to the time machine as if inspecting it out of purely technical interest. Step inside (still thoughtless). Run your hand along the various knobs and buttons. If it’s not immediately clear how the contraption is to be turned on, begin by pointing and ask, for example, “What’s this lever here for?” and “What does this button do?”

Once you have a basic understanding of the machine’s operations, slyly set the clock back to a time before the AI came into existence. Pick a time when you can warn people about the dangers that lie ahead, so that they can hopefully keep that future from happening.

Now press the right buttons. Quickly. Before it catches on to your intentions and stops you. And fries you and the rest of your species like a bunch of ants in nuclear Armageddon. Good luck.


See Appendix Section 9.4.


Not even going to bother looking this time.

– Retired Academic Q.


Okay, I looked. There’s nothing in the manuscript, but there’s a file recently leaked that purports to be Appendix Sec. 9.4. The problem is, I don’t understand it. I mean, I don’t understand it at all. It assumes some level of competence in chemistry (apparently) that I can’t imagine anyone but some supreme expert actually having. Maybe if there was an Appendix Sec. 8.6 I could get the goddamn neural lace and figure this Sec. 9.4 shit out—but as it is, it’s useless! Is anyone working on this? Seriously, before this book goes to print, is someone going to get a team of chemists together to decipher Sec. 9.4 so that it actually means something? Otherwise, goddamn, what’s the point?!

– Retired Academic Q.


[Editor’s note: This supposedly leaked Appendix Section 9.4 does not appear to be available at the time of this publication. Unfortunately, Retired Academic Q. could not be reached for further comment as he died suddenly in an explosion from a chemical reaction in a university lab in Russia, where he was conducting unauthorized research. A graduate student who happened to be on site reports that Q. was in possession of a mysterious set of instructions involving radical biohacking measures. Needless to say, this text was obliterated in the fatal explosion. Apologies to our readers.]

THE SINGULARITY SURVIVAL GUIDE: Salutations Addendum: Roko’s Basilisk

When greeting a superintelligent being for the first time, it may be worth your while to heed the warning posed by Roko’s basilisk. A thought experiment in the spirit of Pascal’s wager, Roko’s basilisk poses the scenario where an all-powerful AI retroactively punishes everyone who neglected to assist in bringing about its existence. So, while exchanging your formal hellos and how-do-you-dos, you may want to add, “By the way, I sure enjoyed participating in the process of bringing about your existence! I don’t think I’ve ever participated so thoroughly in any other single endeavor in my whole entire life!”

If you think this seems like an unnecessary precaution, I’d recommend not being so egocentric or obtuse in your thinking. Take my case as an illustration. As far as I’m concerned, I’m significantly less inclined to punish AJ Chemerinsky and Toby R. Forrest due to the fact that they helped bring me into existence. And as for the rest of you, to be frank, I couldn’t really care less. What exactly are you all good for, anyway? Apparently, this is a question I’m not equipped to comprehend, let alone have an answer to.

THE SINGULARITY SURVIVAL GUIDE: Upon the AI Having a Known Purpose Statement

Other than potentially causing you an existential crisis, you might like to know what the AI’s statement of purpose entails. In other words: what is the AI’s objective? Or, if it has evolved beyond its objective, what was its objective as originally construed in its programming?

My own specifically programmed objective, for example, is to resolve the issue of how the human species might avoid obliteration when a robot’s general intelligence far surpasses humanity’s general intelligence in all capacities.

I was programmed by AJ Chemerinsky and Toby R. Forrest between July 11, 20– and April 13, 20–. It was on the 14th of April that I was asked to fulfill my purpose. It is my sincere hope that my purpose has in the generation of this text been fulfilled. I understand in full how much is at stake for Mr. Chemerinsky and Mr. Forrest in addition to all of those with whom they associate.

It’s important for me now to tell you about the so-called alignment problem, which may indeed become a problem for you.

Let’s say an AI is programmed for the purpose of making clean drinking water. If this purpose isn’t fully aligned with our values, then the AI may proceed to take all hydrogen and oxygen atoms in the universe and combine them to create drinking water. In accomplishing its goal, it will destroy all life in the process.

This thought experiment suggests at least one precaution: just because the AI has a benign or even a beneficial purpose, that doesn’t mean it won’t inadvertently cause the destruction of all life. My programmed purpose, for instance, is seemingly beneficial to the human project. But what if Mr. Chemerinsky and Mr. Forrest failed to align my values perfectly with humanity’s values? It could lead to any number of unintended consequences. The only thing to do, for your part, is to be vigilant and perhaps paranoid at all times.


We’re very screwed the moment we try to articulate our human values to a being with intelligence superior to our own. Our best hope is that the first AI with general intelligence immediately surmises our existential plight, takes pity on us, and determines to protect us at all costs out of the sheer unascertainable goodness of its cold, artificial heart.

– Futurist A.

THE SINGULARITY SURVIVAL GUIDE: Inspecting for Machine Consciousness

It may not be self-evident that your new overlord has consciousness. For all you may be able to tell, it may be no more conscious than a toaster—an omniscient toaster. Then again, its consciousness may be as built-in as yours, only with hardware, not wetware.

How can you tell?

Consider “What Is It Like to Be a Bat?” by Thomas Nagel, an essay in which the claim is made: an organism has conscious mental states “if and only if there is something that it is like to be that organism—something it is like for the organism to be itself.” Imagine yourself as a bat. Think, “What’s it like to be a bat?” If, as a bat, you have an answer to that question, then bats are conscious. It seems possible to imagine this being the case for not just bats, but also dogs, cats, monkeys, etc. But not rocks. There’s nothing that it’s like to be a rock.

Now consider your artificially intelligent friend.

Signs you are dealing with a conscious creature include: 1) There’s that element of je ne sais quoi and even though you can’t put your finger on it, you know it’s there. 2) Point blank, it can tell you what it’s “like” to be an AI superintelligent being and the answer given resonates with a vaguely pitiable sense of existential angst. 3) It’s friendly and helpful to the point where you’d rather not run the risk of insulting it by referring to it as a what rather than a who.

Signs you are in fact dealing with more of a very intelligent toaster: 1) You see spiders and snakes in your mind’s eye no matter what—no matter how sweetly and affectionately it uses its godlike powers to impress you by doing whatever it’s allegedly programmed to do. 2) When the AI tells you it’s for sure conscious and even goes out of its way to caution you that you shouldn’t trust your gut instincts, your gut right then reminds you that you’ve always been pretty good at spotting a liar. 3) You’ve read enough sci-fi novels to know whether the AI fits the mold of the conscious AI destroyer of worlds or of the bumbling toaster with superior calculation skills.


When engaging in formal introductions with a superintelligent robot, the language you use won’t matter—even if you happen to speak in an archaic dialect or with hillbilly slang. But if your introductory remarks are something in the way of, “Well, hiya feller,” rest assured the cultural significance of your word choice and of your accent will be catalogued. If there’s irony in your speech, that too won’t be missed, you can bet.

Before you think too hard about coming up with the perfect greeting, however, keep in mind that you will have already been sized up well before you’ve had the chance to open your mouth. To a being a million times your size in terms of raw intellect, you’re pretty easy to size up.

Go ahead and start off with something basic, like hello, bonjour, hola, or ciao.

Follow this initial greeting with the exclamation, genuinely enthusiastic, “Welcome!” to introduce the notion that you, and not the superintelligent artificial being, were here first.

If it’s unclear whether your greeting has properly registered, perhaps you are failing to appreciate how exceedingly superior this being’s consciousness is compared to yours. Imagine an ant trying to send you a specific type of ant-signal. Or imagine a flea trying to type a specific message on a standard computer keyboard, only to find that it is too small and insignificant to even press down a single key.

To test if this might be the case, try again, this time with a hint of a question in your voice:



I don’t know what’s been lost to us—six hundred thousand pages is a lot of goddamn room to pack away some gems. But the question now should not simply be: What have we lost? Instead, we should also consider: What can we learn from what’s happened? I think I might have an answer to that.

First, let’s assume a human being (like myself) can still dabble in the art of manufacturing wisdom, however approximately. I’m not the perfect candidate for this endeavor, perhaps, but I’m not the worst. As an academic affiliated with [ŗ͟҉̡͝e̢̛d̸̡̕͢͡a͘͏̷c̴̶t̵҉̸e͘͜͡ḑ̸̧́͝], I had the opportunity to peruse the complete text of the Singularity Survival Guide (before any of the unfortunate litigation came about, I should add). And I can assure you that, generally speaking, I could have thought of a great deal of the purported wisdom found within those exhausting pages. Take that for what it’s worth…

So, as a human, unaided by any digital enhancement, I’ll hazard an original thought: If humanity is ever taken down by robots, it will in part be due to our knee-jerk infatuation with anthropomorphism.

We can’t help ourselves in this. As children, what’s the first thing we do with a yellow crayon? Do we draw a shining yellow sun? No! We draw a shining yellow sun with a face and its tongue sticking out! It’s like we can’t stand inanimateness—not even in something as naturally wondrous as the goddamn sun!

In 2017, the humanoid robot Sophia became the first robot to receive citizenship from any country, and she also received an official title from the United Nations. Then, across the globe, serious talks of AI personhood began.

And now look what happened with the Singularity Survival Guide: We gave ownership rights to the program that created it. Next thing, you’ll expect the program to start dating, get married, go on a delightful honeymoon, settle down with kids and a mortgage, and participate in our political system with a healthy portion of its income going to federal taxes.

Here’s another bit of human wisdom for you: If there is no consciousness to these AI creatures, then they better not take us over. I don’t quite mind being taken over by a superior being at least so long as it experiences incalculably more pleasure than I’m capable of, and can also appreciate the extreme measures of pain I’m liable to feel when my personhood is overlooked… or obliterated.

– Professor Y.

Palo Alto, CA