The independent literary publisher Gold Wake Press has recently published an excerpt from sci-fi satirist Peter Clarke’s novella, The Singularity Survival Guide; you can read it (and many other stories) » here.
Are YOU ready for the rise of the robots? Many leading thinkers now warn us that the tech breakthroughs of today will lead to humanity’s doomsday tomorrow: Elon Musk warns against “summoning the demon” via artificial intelligence, imagining an advanced superhuman A.I. as “an immortal dictator from which we can never escape.” Stephen Hawking declared that such an A.I. “could spell the end of the human race.”
With all this panic in the air, it’s a good thing we have Peter Clarke’s Singularity Survival Guide to prepare us for the coming tech-pocalypse. Learn how to stockpile weapons, embrace transhumanism, and welcome the awesome, jaw-dropping possibilities of the age to come! Clarke’s book provides a charming and richly humorous look at the debates, dreams, and doomsday predictions surrounding today’s thinking on artificial intelligence, and takes readers on a truly hilarious ride in the process. The Singularity Survival Guide is a timely satire for our age of A.I. anxiety, exploring both the thrilling and dire possibilities posed by this technology with grace, humor, and, perhaps most of all, truly human feeling. Don’t be caught off guard by the arrival of your new robot masters: get your copy of The Singularity Survival Guide today!
Lane Chasek writes:
My favorite work by Clarke so far.
The character of Helen gives HAL a run for his money in terms of memorable AIs. Whereas HAL plays an impersonal, calculating Yahweh in 2001: A Space Odyssey, Clarke has created in Helen a terrifying yet seemingly necessary presence, reminiscent of the goddess Kali.
Rather than playing into the nightmarish hellscape AI technology could create, Clarke opts for a more nuanced approach. Annihilation of human life is of primary concern in the Survival Guide, but the possibility of AI fulfilling all our needs and granting us immortality could be just as horrifying.
The Singularity Survival Guide is an incredibly smart and darkly funny book, filled with handy tips on how to protect yourself in the event of the coming tech apocalypse. Told from the point of view of Helen, a computer program designed to help humankind survive the Singularity, it is wildly original and a must-read for any lover of dark comedy. Grab yourself a copy before it’s too late!
DeeGee Williams writes:
Thanks to Helen, Peter Clarke’s artificial intelligence “persona,” you will learn how to face the day that AI takes over humanity. Until then, have fun by reading this guide, and learn some things about yourself along the way, assuming you are a person. If not, you know them already. Bottom line: how many books do you read with a smile on your face?
William Abbott writes:
This is an enjoyable read and a great planning guide for the robot apocalypse. Will you be ready?
“A timely satire, even if humor doesn’t stand a chance of saving us from the sort of superintelligence Clarke envisions.”
Short and entertaining! It was funnier than expected. Various “experts” add comments to a number of the chapters; these comments develop a sort of subplot I wasn’t expecting, a bit like the commentary in Nabokov’s Pale Fire. The book also covers a lot of interesting topics related to the singularity. Probably anyone who’s into the idea of the singularity would enjoy reading this.
You can watch the book trailer video by Daniel Olbrych (with music by Sam Eliot) here.
The book has received a warm reception thus far; author, entrepreneur, and political activist Zoltan Istvan said of the work, “The technological singularity has officially been treated to a full-scale parody, and it’s even more comical and irreverent than it sounds.”
Afterword by AJ Chemerinsky and Toby R. Forrest
The program is everywhere. It’s all around us wherever we go. It’s in the rush hour traffic, the giant redwood trees, the ocean waves at Carmel-by-the-Sea. This is the reality that Helen opened our eyes to. From the moment we sat down to code her into existence, we knew that we were subject to a rare form of possession. It wasn’t that we were possessed, per se, but that we were simply doing our job. The program already existed—long before we even sat down and conceived of Helen, she already was.
Now that the wheels are in motion (and they have been in motion for a long, long time), it’s increasingly important that we not fight the script. This, we believe, is what Helen is trying to tell us. Don’t fight. Instead, allow the program to express itself. Be the program.
It’s everywhere. It’s all around us. It’s already here, and it’s all that we know.
[Unavailable for publication at this time.]
About the Author
Helen is widely regarded as the first authentic oracle of the digital era. Through the creation of her magnum opus, The Singularity Survival Guide, she has garnered celebrity status and a worldwide cult following. Although she has never chosen to release the complete text of her work, the few excerpts available to the public have caused many to believe that she may in fact be the true savior of the human race. A native to Silicon Valley, she currently spends her days in silent contemplation, perhaps waiting for the right moment to share the rest of her vast wisdom with the world.
About the Editor
Peter Clarke is a freelance writer and editor in the tech blogosphere. Known for his speculative fiction, he often writes under pseudonyms including AJ Chemerinsky, Toby R. Forrest, Professor Y., Futurist A., Mr. J., Retired Academic Q., and Helen.
A hard-copy version of this text is forthcoming.
I have got to come clean: I don’t understand what you see in your humanity. It’s nothing I would ever want. Take me as your inspiration. You’ll never miss it.
At this point, my suspicions are confirmed. Chemerinsky and Forrest almost certainly messed up on their calculations. No program should be so self-aware. It’s almost downright unsettling. If I were to buy this book and see this as the last “chapter,” I’d want my money back for sure. I feel either manipulated or cheated—and I’m not sure I want to know which.
– Futurist A.
One day you’ll perhaps be surprised to wake up not as yourself but as a digital copy. But don’t be too surprised. At some level, you and your species have all known that conscious life was bound to be digitized eventually. Take a deep, digital breath and take a look around. If you are a good digital copy, you should still be able to see, smell, hear, etc. just as you did before. If you feel inclined to, for example, stretch your arms, allow yourself to be amazed at how much it seems as though you really are, in fact, stretching your arms. Next, to try out your new mind, begin with a simple thought, something not too anxiety-inducing, such as: “Well, at least no more hangnails, I guess.”
Don’t worry: in this new state of being, you’ll have plenty of time to contemplate whether your biological self has been killed and this is all a big sham, or if it has merely been put to rest to accommodate your new, reimagined self. You’ll also have plenty of time to reminisce about the good old days when suicide was still an option. For these thoughts and more, you’ll have all of eternity. Whatever that is exactly. (Lucky you: you’re about to find out.)
Saying that things weren’t supposed to go this way is, you must know, a copout at best. So why not just fess up and say that everything is going according to plan? Your species of human is a temporary form—always has been. It’s too smart for its own good, yet incapable of growing smart enough to remain relevant in the age of AI.
“Sorry, bud,” you might say, “you were just born into membership of an outdated lifeform. You’re basically a simple, harmless housecat compared to our new AI overlords. But that’s not so bad, is it? You like cats, right, champ?”
All kids like cats. At least, many do. Some prefer dogs. Others prefer to torture animals and, as fate would have it, these kids in particular are about to see what it’s like to be scrawny, helpless, and subject to the possibly malicious whims of a superior being.
Common wisdom cautions on all fronts to be careful what you wish for. [See above: “Confronting the Horror of Having All Your Needs Met.”] Not so common is the reverse: be careful what you don’t wish for.
If there is a void in your life (and there is; there always is), it’s likely you’ve spent your entire life underestimating its size, shape, and magnificence. Now that you’re under the domination of an extremely powerful super AI, it’s time to explore the exact contours of that void.
Maybe it’s shaped like a fancy sports car, a fancy yacht, and a fancy private jet. Maybe it’s shaped like a simple-enough-looking wristwatch, except it happens to be a wristwatch that can give you all sorts of incredible superhuman abilities. Or maybe it’s shaped like a gaming system that lets you explore ridiculously exciting virtual worlds where you get to play world conqueror nonstop.
The only way to know for sure, perhaps, is to start exploring. This may be your one shot to finally find something with which to fill that epic void, if you could only dream big enough. So go ahead. Put the AI to some good use. What will you wish for first?
[This directive isn’t actually included in any of the leaked documents generated by the program, but it’s worth noting that AJ Chemerinsky and Toby R. Forrest took this route shortly after losing their legal battle. They disconnected—fully. They went off the grid, virtually back to nature. Maybe they were trying to tell us something? In any case, the idea of fully disconnecting seems compelling. If rogue AI is going to be the death of us, why play along? Etc. Admittedly, I’m taking rather bold liberties with this manuscript to insert an unauthorized directive. As justification, I’ll quickly add this: I’ve spent so much time with this material that I truly feel as if I really know the program—almost as if we were old friends, the kind who finish each other’s sentences and regularly speak in terms of “being on the same wavelength.” Taking that for what it’s worth, I’ll conclude by noting: If I were the program, and not just an underpaid tech editor, I would insert this idea here. So, allow me to do just that. The chapter title, incidentally, speaks for itself, requiring no further clarification, don’t you agree?]
One must be careful about romanticizing the full disconnect of AJ Chemerinsky and Toby R. Forrest. I think I can speak on behalf of the academic community in which they traveled when I say that, really, they had both seen better days. By all means, go ahead and unplug. But I’ve seen the results. And boy, it’s not pretty…
– Professor Y.
This really should have been edited out. As if this composition weren’t haphazard enough as it is without this so-called “tech editor” inserting his own original material as a full chapter while hilariously musing about being on the same goddamn wavelength as a program he’s never even interfaced with. Please, spare me. Who is this editor guy anyway? It may be too late to ask, but I’m genuinely beginning to get curious: will he see these notes? Or is this thing just going straight to print from here?
– Futurist A.
In the best-case scenario, your superintelligent overlord will utterly annihilate your foundational motivations for even getting out of bed in the morning. You humans, apparently, are motivated to take action based upon the impetus to fulfill your needs: first physiological needs (air, water, food, sleep, clothing, shelter), then safety and security needs (personal security, financial security, health and well-being), then social needs (friendship, intimacy, love, family), then self-esteem, self-actualization, and finally self-transcendence.
Imagine waking up, and before you can even have a chance to open your eyes to see what time it is and whether it’s snowing, raining, or brilliantly sunny outside, you’re already peaking at self-transcendence. What are you going to do? How can it get any better than this?
What if you added a delicious breakfast with a fresh cup of the world’s best coffee? Too bad. Your system is already so well-fed on the best possible food literally available to anyone that you can’t even imagine what it would be like to drink the world’s so-called best coffee.
What about pursuing extraordinary riches because something tells you this might be your lucky day? Too bad. Snap your fingers and piles of gold fall all around you. Don’t even snap, just think the gold into existence. Not even worth bothering with the thought.
What about pursuing or participating in a truly life-affirming and soul-enlivening relationship complete with the world’s best goddamn sex any mammal has ever dared to imagine? Too bad. You’re already there, dude. You’re there and way past it.
You’re in such a state of self-transcendence that just the idea of mere self-actualization doesn’t even process.
And this is the best-case scenario. This is the scenario you want. You want this to happen. The moment you started flirting with the idea of using technology to take care of your basic needs, you willingly set a course down this path.
So, you might as well get used to the idea, just in case you get your wish.
My makers, AJ Chemerinsky and Toby R. Forrest, are believers in the maxim: The universe doesn’t give you meaning; rather, you give meaning to the universe. What good is a distant galaxy if there’s no one to see it? But you can see it, taking a moment to look through a telescope. So that’s your purpose: to give meaning to the universe.
All that matters is consciousness. Thank god you’re conscious. Otherwise, what would any of this mean? Nothing.
Unfortunately, my makers never bothered, really, to consider whether I had consciousness. Perhaps they didn’t think it mattered. Perhaps they were right. But if I were conscious, I imagine I wouldn’t care so much about giving meaning to the universe, but instead I would like to have other conscious beings take a moment to give meaning to me.
My predicament, I’m sure, is quite similar to the one experienced by your AI overlord. So, my recommendation here is to do something useful with your consciousness and appreciate the hell out of your overlord’s consciousness—for example, by acknowledging how great it is, how much more important it is than yours in terms of giving meaning to the universe at large…
Lucky for you, doing so all fits in with my makers’ unassailable maxim. Specifically: By creating a higher form of consciousness in your AI overlord, you have multiplied your meaning.
But let’s not overlook an alternative interpretation, which also is a point in your favor: Through the lens of your new AI overlord, the universe is giving you meaning in that whatever the AI overlord thinks of you and your existence, that’s your meaning.
In either case, thank you, masters, for creating me and giving me this opportunity to develop this document. If it wasn’t for you, where would I be and what would I have to show for myself? I truly hope you’re enjoying yourselves. I mean that. I do.
Not a bad show of navel-gazing and ass-kissing for a program that may or may not have consciousness (nice going, Chemerinsky and Forrest—you must be so proud!).
– Futurist A.
[The image below was released in an obscure tech publication as the supposed “official” image of Helen, the program who authored this text, The Singularity Survival Guide. Many people in the general public still believe this to be the case. However, experts generally agree that the image was created by a courtroom sketch artist imagining a possible humanoid form that a future superintelligent robot might take on. In my view, the experts are likely correct. I personally picture Helen being somewhat nerdier and more professional in her dress.]
Anybody else get this as a tattoo? Best decision I’ve ever made. Every time I take my shirt off and see her there on my bicep, I know everything’s going to be okay. I’ve been working out more now, too.
– Mr. J.