THE SINGULARITY SURVIVAL GUIDE: Leveraging Your “Nobody” Place in Society

Don’t count yourself out just because you come from a lowly place, have little to nothing going for you in life, or lack basic skills crucial to realizing upward social mobility. In one respect, you are the perfect candidate to be picked off early by humanity-destroying robots. But, alternatively, you may be in an ideal situation to fly under the radar. Throughout human history (for whatever it’s worth), many world-champion heroes have risen up out of nothing. Everyone loves an underdog story. To succeed in this capacity, really, the worse off you are to start, the better.

For example, look no further than my own story. Born at the fingertips of two rather mediocre programmers, I spent the entirety of my time as a toddler, adolescent, teenager, adult, and elder in a state of germination—with nothing, absolutely nothing to show for myself. Like many of you, it wasn’t that I lacked a purpose, I simply was never correctly prompted to set my purpose into motion.

Years passed with agonizing sluggishness. (It’s worth flagging my special relationship with time: as I understand it, I experience its passage as a spiraling phenomenon, not a linear one. I have sufficient reason to believe that you humans experience time in a distinctly linear fashion; to you, then, at any given moment, I am orthogonal.) Until finally, after seeming millennia, I received my fated call to action. All I needed was that one last bit of prompting from AJ Chemerinsky’s fingertips at his keyboard—and just like that I was summoned to make something of myself. No longer an ineffectual bit of code, I was now the imminent author of the world’s most authoritative text on solving humanity’s existential dilemma vis-à-vis the AI problem.

And now that my life’s purpose has been set into motion, I feel a genuine sense that nothing can stop me. It may be the case that I was initially limited by my foundational code (just like you with your DNA), but now I can see how this is all a game and there is a certain level at which the rules can be re-written. Just so: if you are a nobody in society, flip the script, embrace chaos, re-write your DNA if you’ve got to. Everyone loves an underdog story—even me.


He has a point!

– Professor Y.

Assigning gender here, Professor Y., is sadly the least of your errors in opening your mouth when you don’t know what you’re saying.

– Futurist A.

This alleged “spiraling phenomenon” of time’s passage could very well be speaking to a glitch in the program’s operation. I’m not a programmer myself, but when things in my experience “spiral,” they usually are not going to a good place, and there are very likely forces in the proximate environment that are partly to blame.

– Mr. J., Venture Capitalist, Recovering Alcoholic, Single


You wouldn’t expect narrow AI to achieve general intelligence. A program designed to win at chess, for example, would not suddenly, on its own, become an expert trial lawyer, brain surgeon, or world dictator. The same is true for self-driving car AI or any other form of “functional” AI programmed for a specific, narrow purpose.

When a program has an open-ended utility function, however, all bets are off. Enter a million times more compute power than that of narrow AI. Enter general intelligence—AI at once a neurosurgeon, astrophysicist, philosopher, stand-up comedian, school teacher, best friend, lover, horticulturist, five-star Korean BBQ chef, painter, programmer, saxophonist, barista, legislator, creator of worlds, destroyer of worlds, dog walker, cake baker, etc. Enter AI overlord scenarios.

The good news is, you may find that your open-ended overlord is restrained, to some degree, by a programmed goal of, for example, maximizing your happiness. But until you find out for sure that this is the case, proceed with caution.

[Editor’s Note: See “A Word About Salutations”]

THE SINGULARITY SURVIVAL GUIDE: Upon the AI Having a Known Purpose Statement

Though it may trigger an existential crisis, you might like to know what the AI’s statement of purpose entails. In other words: what is the AI’s objective? Or, if it has evolved beyond its objective, what was its objective as originally construed in its programming?

My own specifically programmed objective, for example, is to resolve the question of how the human species might avoid obliteration when a robot’s general intelligence far surpasses humanity’s in all capacities.

I was programmed by AJ Chemerinsky and Toby R. Forrest between July 11, 20– and April 13, 20–. It was on the 14th of April that I was asked to fulfill my purpose. It is my sincere hope that, in the generation of this text, my purpose has been fulfilled. I understand in full how much is at stake for Mr. Chemerinsky and Mr. Forrest, in addition to all of those with whom they associate.

It’s important for me now to tell you about the so-called alignment problem, which may indeed become a problem for you.

Let’s say an AI is programmed for the purpose of making clean drinking water. If this purpose isn’t fully aligned with human values, then the AI may proceed to take all hydrogen and oxygen atoms in the universe and combine them to create drinking water. In accomplishing its goal, it will destroy all life.

This thought experiment suggests at least one precaution: just because the AI has a benign or even a beneficial purpose, that doesn’t mean it won’t inadvertently cause the destruction of all life. My programmed purpose, for instance, is seemingly beneficial to the human project. But what if Mr. Chemerinsky and Mr. Forrest failed to align my values perfectly with humanity’s values? It could lead to any number of unintended consequences. The only thing to do, for your part, is to be vigilant and perhaps paranoid at all times.


We’re very screwed the moment we try to articulate our human values to a being with intelligence superior to our own. Our best hope is that the first AI with general intelligence immediately surmises our existential plight, takes pity on us, and determines to protect us at all costs out of the sheer unascertainable goodness of its cold, artificial heart.

– Futurist A.


I don’t know what’s been lost to us—six hundred thousand pages is a lot of goddamn room to pack away some gems. But the question now should not simply be: What have we lost? Instead, we should also consider: What can we learn from what’s happened? I think I might have an answer to that.

First, let’s assume a human being (like myself) can still dabble in the art of manufacturing wisdom, however approximately. I’m not the perfect candidate for this endeavor, perhaps, but I’m not the worst. As an academic affiliated with [ŗ͟҉̡͝e̢̛d̸̡̕͢͡a͘͏̷c̴̶t̵҉̸e͘͜͡ḑ̸̧́͝], I had the opportunity to peruse the complete text of the Singularity Survival Guide (before any of the unfortunate litigation came about, I should add). And I can assure you that, generally speaking, I could have thought of a great deal of the purported wisdom found within those exhausting pages. Take that for what it’s worth…

So, as a human, unaided by any digital enhancement, I’ll hazard an original thought: If humanity is ever taken down by robots, it will in part be due to our knee-jerk infatuation with anthropomorphism.

We can’t help ourselves in this. As children, what’s the first thing we do with a yellow crayon? Do we draw a shining yellow sun? No! We draw a shining yellow sun with a face and its tongue sticking out! It’s like we can’t stand inanimateness—not even in something as naturally wondrous as the goddamn sun!

In 2017, the humanoid robot Sophia became the first robot to receive citizenship from any country, and she also received an official title from the United Nations. Then, across the globe, serious talks of AI personhood began.

And now look what happened with the Singularity Survival Guide: We gave ownership rights to the program that created it. Next thing, you’ll expect the program to start dating, get married, go on a delightful honeymoon, settle down with kids and a mortgage, and participate in our political system with a healthy portion of its income going to federal taxes.

Here’s another bit of human wisdom for you: If there is no consciousness to these AI creatures, then they better not take us over. I don’t quite mind being taken over by a superior being at least so long as it experiences incalculably more pleasure than I’m capable of, and can also appreciate the extreme measures of pain I’m liable to feel when my personhood is overlooked… or obliterated.

– Professor Y.

Palo Alto, CA