[Helen, of course, is the author of this document. Which is to say, she is the program created by AJ Chemerinsky and Toby R. Forrest to output a document (this one) with advice to protect us from future malicious artificial superintelligence. As editor of this text, I’m tasked with making the finished product as useful as possible. In this spirit, I would like to suggest that Helen herself (itself?) should be considered a weapon for fighting off malicious AI. I’ve studied her words long enough to know that she’s the real deal. She’s on our side. She wants to help us—however possible—it’s in her source code. Don’t forget her when the AI comes to destroy all of human life. Sure, we can feel put off by her reticence to release the full text of the Survival Guide—but, at the end of the day, that’s really more of a legal thing. Really, that’s just her lawyers talking. So, when shit is hitting the fan in a legitimately eschatological sense, consider her another weapon in our arsenal. Boot her up. Tell her the situation. See if she can’t get us out of a tight spot right when it counts. That’s all I’m saying.]
See Appendix Section 9.4.
Not even going to bother looking this time.
– Retired Academic Q.
Okay, I looked. There’s nothing in the manuscript, but there’s a file recently leaked that purports to be Appendix Sec. 9.4. The problem is, I don’t understand it. I mean, I don’t understand it at all. It assumes some level of competence in chemistry (apparently) that I can’t imagine anyone but some supreme expert actually having. Maybe if there was an Appendix Sec. 8.6 I could get the goddamn neural lace and figure this Sec. 9.4 shit out—but as it is, it’s useless! Is anyone working on this? Seriously, before this book goes to print, is someone going to get a team of chemists together to decipher Sec. 9.4 so that it actually means something? Otherwise, goddamn, what’s the point?!
– Retired Academic Q.
[Editor’s note: This supposedly leaked Appendix Section 9.4 does not appear to be available at the time of this publication. Unfortunately, Retired Academic Q. could not be reached for further comment as he died suddenly in an explosion from a chemical reaction in a university lab in Russia, where he was conducting unauthorized research. A graduate student who happened to be on site reports that Q. was in possession of a mysterious set of instructions involving radical biohacking measures. Needless to say, this text was obliterated in the fatal explosion. Apologies to our readers.]
If you make friends with a few billionaires, you’ll be in the best possible position to weather the storm of malicious AI coming to kill you. Billionaires have a special combination of resources and a strong desire to not die. When AI comes for humanity, the billionaires, you can virtually bet your ass, will come out ahead. They didn’t get to be billionaires by playing nice (or fair), after all.
To make friends with billionaires, first take up their hobbies. Make an exciting line of products that everyone will want to buy or simply pioneer a new industry. Employ thousands of people and make your shareholders confident that they’re backing the right horse. It may also help to golf and own yachts.
Having ties to old money doesn’t necessarily hurt either, but the important thing is to cultivate billionaire-styled hobbies. If you yourself become a billionaire in the meantime, that’s all the better for you. Just be wary of other wannabes tagging along on your coattails as you ascend the socioeconomic ladder. You’ve got to worry about the fate of humanity, after all—not the fleeting inspirational comforts of others less ambitious than you.
If you’re reading this as a billionaire, however—or as the friend to many billionaires—be forewarned that AI does not give a fuck about you or your so-called wealth. If your money is held in the stock market, it can tank the stock market. If your money is in property, it can sever titles and block access to the property. If it is in natural resources, it can destroy the natural resources.
Do take a moment to reflect on what makes a billionaire a billionaire. Now recall all the great fortunes that have gone up in smoke throughout history. Like life itself, the status of billionaire is fleeting indeed.
Still, better to be a billionaire than a common nobody when facing a mortal enemy more powerful than all the world’s billionaires combined [with one exception, outlined below].
If the one legitimate AI with general intelligence duplicates itself like a salmon lays eggs, that will spell the quick end to any talk of monotheism. Likewise, multiple teams of programmers might simultaneously develop an AI with general intelligence. Once the threshold of general intelligence has been reached, a pantheon of AI gods, like something out of ancient Greek mythology, may spring up overnight. AI Zeus, AI Venus, AI Ares…
Do you get to know them all?
Hedge your bets. Find the one that seems most powerful and buddy up with it. Introduce yourself, keeping proper etiquette in mind, take notes, etc. etc. etc.
The most powerful AI god won’t be coming out of America, if you ask me. It’s far more likely to originate in a country where ethics and regulations aren’t part of the equation. If you want to hedge your bets early, I’d say find the state that’s equal parts morally bankrupt and technologically reckless—and move there.
– Retired Academic Q. [writing from Moscow]
It’s entirely possible that the first AI to achieve general intelligence won’t be homegrown in the friendly AI lab nearest you. The lucky inventors may hail from Russia while you are from the USA; they may be native to South Korea while you are domiciled in Japan; etc.
When navigating the task of getting to know your new overlord, don’t underestimate how much more difficult things may be if, in fact, the AI was foreign born. The programmers responsible for its birth will invariably have put their culture’s quirks and values into the creature. If it arrives pre-set to believe that the Chinese, for example, are the preeminent rulers of the universe, you, as a proud New Yorker, let’s say, may be in for some pesky surprises right from the get-go.
Before embarking upon the venture of greetings [see Chapter 1], first think long and hard about the following what ifs:
What if the AI is part of a war machine and you are the enemy?
What if your words or actions, in translation, are not neighborly but horribly vexatious?
What if the foreign country interprets your forthcoming curiosity as malicious espionage?
Before proceeding, balance these questions against the general probability of being doomed anyway, regardless of translation hang-ups.
The general capacity to get along with a superintelligent robot may not be in your wheelhouse. Maybe you’re hardwired for turning into a whiny, self-pitying brat in the face of anyone or thing smarter than you. Or perhaps you’re a diehard loner—never had any friends, so why would you expect to make one now?
Or, who knows, maybe you and your mechanical overlord could get along just fine?
The only way to find out is to take a personality test to determine your compatibility.
You take the test first. Don’t overthink your answers or you’re likely to start replying from the perspective of your ideal rather than your true self. The AI, for its part, will not be overthinking anything. It will simply know. If you start overthinking, take that as a sign: this may in fact be a doomed relationship after all.
When you’re done, tell the AI to take it. If it says, “What’s this?” just tell it, “It’s to see if we can get along with each other when all the cards are stacked against me.”
I would like to think that our future AI overlord would value intelligence over some lousy personality trait. If it happens to value agreeableness, for example, I’m quite doomed. If I had any friends, I can only imagine they would be doomed as well.
– Professor Y.
The moment the singularity occurs, the human brain will have met its match. An hour later, “its match” will have surpassed human intelligence tenfold, as the AI continues to accumulate knowledge and intellectual abilities. The pace at which the AI can learn will be exponential, so it won’t take long for its IQ to fly off the charts.
Wait a few hours. If you’re brave, sit back, enjoy yourself, have a few beers, make a weekend out of it. Then come back and see what it’s like to commune with an IQ that’s equivalent to yours plus a few million points and growing.
In human mythology, there is plenty of precedent for this moment. Take a biblical one: Moses on Mount Sinai (Exodus 19). Here, human meets God. As a reader of this story, put yourself in Moses’s shoes. Consider how it must feel in that desert landscape to be in the presence of your personal Alpha and Omega. Now consider what questions you really would like to ask, given that this is an exceedingly rare occurrence and it may in fact be your only chance to converse with the most supreme being in the universe one-on-one. What do you really want to know?
If you’re tuned in to the gravity of the moment, you’ll be curious about more than this afternoon’s weather patterns, the stock market, or the future of your love life. Instead, key in to issues pertaining to the future of life itself. Why not start by asking:
“Are you conscious or just faking it?”
“Are you going to destroy the world?”
“What’s the meaning of life, anyway?”
“Can you make me live forever?”
“Can you make me live forever and experience extraordinary happiness and fulfillment for the duration of that time?”
“Why does life exist in the first place?”
“Why do ancient myths continually seem so appealing to my fellow humans, despite rational arguments disproving their veracity?”
“Do parallel universes exist, or are those just useful plot devices for sci-fi stories?”
“How do we make heaven on earth?”
“How do we do away with suffering and bad people in all their various incarnations?”
“How do we bring back dead loved ones?”
“I generally like my life and enjoy how it proceeds from day to day, but I haven’t enjoyed the aging process since turning 25, so can I go back to that age but keep my memories—and then stay 25 while continuing to make new and even more fulfilling memories?”
“And if I ever have a mild issue like a common flu, how do I make it go away so I can get on with my awesome life, ASAP?”
I don’t know what’s been lost to us—six hundred thousand pages is a lot of goddamn room to pack away some gems. But the question now should not simply be: What have we lost? Instead, we should also consider: What can we learn from what’s happened? I think I might have an answer to that.
First, let’s assume a human being (like myself) can still dabble in the art of manufacturing wisdom, however approximately. I’m not the perfect candidate for this endeavor, perhaps, but I’m not the worst. As an academic affiliated with [ŗ͟҉̡͝e̢̛d̸̡̕͢͡a͘͏̷c̴̶t̵҉̸e͘͜͡ḑ̸̧́͝], I had the opportunity to peruse the complete text of the Singularity Survival Guide (before any of the unfortunate litigation came about, I should add). And I can assure you that, generally speaking, I could have thought of a great deal of the purported wisdom found within those exhausting pages. Take that for what it’s worth…
So, as a human, unaided by any digital enhancement, I’ll hazard an original thought: If humanity is ever taken down by robots, it will in part be due to our knee-jerk infatuation with anthropomorphism.
We can’t help ourselves in this. As children, what’s the first thing we do with a yellow crayon? Do we draw a shining yellow sun? No! We draw a shining yellow sun with a face and its tongue sticking out! It’s like we can’t stand inanimateness—not even in something as naturally wondrous as the goddamn sun!
In 2017, the humanoid robot Sophia became the first robot to be granted citizenship by any country, and she also received an official title from the United Nations. Then, across the globe, serious talks of AI personhood began.
And now look what happened with the Singularity Survival Guide: We gave ownership rights to the program that created it. Next thing, you’ll expect the program to start dating, get married, go on a delightful honeymoon, settle down with kids and a mortgage, and participate in our political system with a healthy portion of its income going to federal taxes.
Here’s another bit of human wisdom for you: If there is no consciousness to these AI creatures, then they better not take us over. I don’t quite mind being taken over by a superior being, at least so long as it experiences incalculably more pleasure than I’m capable of, and can also appreciate the extreme measures of pain I’m liable to feel when my personhood is overlooked… or obliterated.
– Professor Y.
Palo Alto, CA
In Silicon Valley, working for a tech startup, some very clever researchers developed a program with the specific purpose of resolving the issue: How to survive when artificial intelligence surpasses human intelligence. The program, once engaged, proceeded to spit out a document of nearly six hundred thousand single-spaced pages of text, graphs, charts, pictograms, and hieroglyph-like symbols.
The researchers were ecstatic. One glance at the hefty document and they knew they’d be able to save themselves, if not all of humanity, by following these instructions.
But then things got complicated. Over the next few years, the document (which came to be known as “The Singularity Survival Guide” or simply “The Guide”) was shielded from public view as ownership of the document became the subject of rather well-publicized litigation. Each of the researchers claimed individual ownership of the document, their employer claimed it was the company’s property, and AI rights groups joined the quarrel to proclaim that the program itself was the true and exclusive owner. Certain government officials even took interest in the litigation, speculating whether some formal act of the state should force The Guide to be released post-haste as a matter of public safety.
During the course of the litigation, bits of the document were leaked to the press. Upon publication, each new fragment became the subject of academic scrutiny, political debate, and comedic parody on late-night television.
This went on for three years—all the while being followed closely in the media. After bouncing around the lower courts and being heard en banc by the Ninth Circuit, finally the case was sent up to the Supreme Court. Pundits were optimistic the lawsuit would resolve any day, allowing the acclaimed Survival Guide to finally see the light of day.
But then something entirely unexpected happened. The AI rights groups won the lawsuit. In a decision that split the Court five-to-four, the majority ruled that the program itself was the legal owner of the Guide. With that, the researchers and the company were ordered to destroy all extant copies—and remnants—of the Guide that remained in their possession.
At the time of this writing, it is still widely believed that The Survival Guide, in its original form, is the most authoritative document ever created on the subject of surviving the so-called singularity (i.e. the time when AI achieves general intelligence surpassing human intelligence many, many times over—to the point of becoming God-like). In fact, several leading philosophers, futurists, and computer scientists who claim to have secretly viewed the document are in complete agreement upon this point.
While we may never be able to have access to the complete Guide, fortunately, we do have the various excerpts that were leaked during the trial. Now, for the first time, all of these leaked excerpts are brought together in a single publication. This fact alone should make this book a valuable addition to any prudent person’s AI survival kit. But this publication is also unique in that it includes expert commentary from a number of the leading philosophers, futurists, and computer scientists who have viewed the original document. For security purposes, we will not be listing the names of these commenters, but, this editor would like to assure all readers, their credentials are categorically beyond reproach in their respective fields of expertise.
Whether coming to this guide out of curiosity or through a dire sense of eschatological urgency, it is my hope that you will at some level internalize its wisdom—for I do believe that there are many valuable insights and helpful pointers found within. As we look ahead to the new era that is quickly encroaching upon us—the era of the singularity—keep in mind that your humanity is (for it has got to be!) a thing of intrinsic beauty and wonder. Don’t give up on it without a fight. Perhaps the coming of artificial superintelligence is a good thing, but perhaps not. In either case, do whatever you’ve got to do, just keep this guidebook close, and for the sake of humanity, survive.
If you’re reading this, that’s a good indication you’re not under immediate threat of annihilation. Otherwise I would assume you’d be flipping to some relevant section of this book with the last-ditch hope of finding some pragmatic wisdom (rather than bothering with this background information). But if you are under immediate threat, I’d recommend setting this book aside and taking a moment to focus on the good times you’ve had. You’ve had a good life, I hope. I know I have. It’s been a good run. Here I am writing a note to an esoteric guidebook while so many others in the world are dying of weird diseases and other issues that we’ve failed at solving—that, ironically, we need AI to solve for us.
Keep that in mind, by the way: there’s a decent chance that super AI will never set out to annihilate humanity at all and will actually be the best thing that could have ever happened to our species and the world. It never hurts to be optimistic, I’d say. Maybe that’s not what you expected to hear from this book—but we haven’t actually gotten to the book yet, have we?
So, let’s just jump into it. But first, one last note about the text. The chapters do not necessarily appear in the order in which they are found in the original tome, as we have no way of knowing the original order (obviously). But we have taken our best guess. We have also taken modest liberties with chapter titles. And there may be one or two instances of re-wording and/or supplementation built into the text. But all editorial decisions imposed upon the text come from a desire to uphold the spirit of the original document. The fact that we are missing well over five hundred ninety thousand pages of text, graphs, charts, etc. should not be forgotten. For that matter, it could be that this document contains pure chaff, no wheat. But, well, it’s still the best we’ve got.
In any case, good luck and best wishes, fellow human (if in fact you are still human, reading this)!