Michael E. Zimmerman
May 31, 2024
There are many ways in which humankind could be erased, including by a large meteor, a massive volcanic eruption, or a blast of gamma rays from a nearby stellar explosion. We know that previous such events have eradicated countless species, and there is no reason to believe that planet Earth is safe from future such catastrophes. Even though our species could either be annihilated or reduced to a remnant by such a force majeure, at least we would know in our desperate final moments that human action was not responsible for blowing out our own candle. There would be lamentation and despair, but not much room for recrimination.
There are other dreadful eventualities, however, such as all-out nuclear war or the release of an “enhanced” microbe that kills 90% of the population, for which we would have only human beings to blame. Many people might conclude that this is the deserved fate of a power-thirsty species that labeled itself “intelligent” but employed its energies recklessly in ways that brought about its own demise. Such erasure may have happened before in our vast cosmos. In a recent paper, noted astrophysicist Michael Garrett answers physicist Enrico Fermi’s famous question, “Where is everybody?” His answer: advanced technological societies snuff themselves out soon after artificial super intelligence (ASI) attains autonomous control of nuclear weapons. [2] For Earth, in his opinion, that may be only a decade or two away.
In addition to such natural and anthropogenic modes of erasure, there are two other modes that haunt anyone following current developments.[1] These modes involve decimating human self-worth and self-regard, including the (apparently delusional) sense of human control over human destiny. First, there is the prospect of high-level disclosure that UAP are both objectively real and utterly mysterious. Second, there is the impending creation of ASI, a “singularity” that would allow humankind to be rapidly eclipsed by a far greater and, to us, incomprehensible “intelligence.” Particularly disturbing would be near-simultaneous disclosure of non-human UFOs and attainment of ASI.
Throughout the Cold War, UFOs surveilled US and Soviet ICBM silos, as explained by Robert Hastings in UFOs and Nukes. On multiple occasions, UFOs were apparently responsible both for shutting down missiles in their silos and for initiating launch sequences (regarded at the time as “impossible”), only to shut them down at the last minute. Was the point of these interventions to demonstrate that UFOs could prevent nuclear war? During the 1950s, “contactees” alleged that they had met with “space brothers” who warned of the dangers posed by the nuclear arms race. Beginning in the 1960s, many of the thousands of people who experienced “alien abduction” reported that they were shown scenes of a planet devastated by nuclear war. Arguably, such revelations sought to encourage people to preserve planet Earth, which we may share with ultraterrestrials.
Were a film to be made of the dynamic entwinement of computers, UFOs, and atomic weapons, it might well be called Johnny, the nickname of John von Neumann, the Hungarian-born scientist and mathematician. His extraordinary contributions to many areas made him one of the most influential scientists of the 20th century: quantum theory, stored-program computer design, the MANIAC computer, game theory (which helped to theorize the idea of “mutually assured destruction,” or MAD, during the Cold War), computational weather forecasting, the atomic and hydrogen bombs, and the idea of self-replicating automata (including self-repairing spacecraft capable of exploring the universe). In 1957, at age 53, von Neumann died of cancer, which may have been caused by exposure to radiation. Stanislaw Ulam, the brilliant Polish-born mathematician who worked with Edward Teller and von Neumann in developing the hydrogen bomb, recalled a conversation in which von Neumann spoke of “the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Today, when major high-tech players are asked to estimate how massive the oncoming ASI transformation will be, they compare it not to electricity or industrialization, but rather to fire or the wheel. How could human affairs continue as usual if an event of this magnitude occurred within a couple of decades?
In The Singularity Is Near (2005), inventor and futurist Ray Kurzweil described the Singularity as the coming merger of human with computer intelligence. (His new book, The Singularity Is Nearer, will be published in summer 2024.) For years, advocates of transhumanism (that is, transitional humanism) have promoted technological innovations such as digital brain implants (now being attempted), remodeling of the human genome (now a possibility with CRISPR gene-editing technology), and so on to enhance the mortal, corrupt human body. Such steps will allow for significant life extension and overall improvement for those who can afford them. The next step would be to attain an ersatz form of personal immortality by “uploading” one’s mind into a computer that will somehow be maintained across the millennia. (Some wonder whether the first person to try such uploading will have access to an “abort” button.)
For Kurzweil and others who share his vision, the real goal can be described as techno-posthumanism, which follows when humankind melds with its own digital creations, an event that is to be celebrated rather than feared, even though Kurzweil concedes its possible dangers. In his view, this melding should be regarded as an evolutionary and eschatological achievement that will eventually allow the entire universe to become alive and self-conscious. Presumably, despite Kurzweil’s assurances, little or nothing of the “human” would remain as ASI unfolds toward (but never reaches) infinity.
High-tech ideologues often maintain that God-like ASI would help to solve the widespread nihilism following from the death of God at the hands of modern science. Transhumanists and techno-posthumanists alike often cite the assertion by Nietzsche’s Zarathustra that the human is merely a bridge between the ape and the Superman. Witnessing the birth of ASI might demonstrate to the human species that there was a noble purpose to human existence after all, even though humankind may be terminated by its ASI progeny not long after its emergence. Nietzsche agreed that a new god was needed to overcome nihilism, but what he apparently had in mind was not ASI but rather an aesthetically extraordinary, god-like being, akin to Caesar with the soul of Christ. As far as I know, this low-tech version of Nietzsche’s Superman does not enjoy much favor in Silicon Valley.
In a famous public encounter some years ago, former friends Larry Page, co-founder of Google, and Elon Musk, founder of Tesla and SpaceX, disagreed about the dangers posed by the development of ASI. Here is an account of their conversation as reported in the New York Times on December 3, 2023:
Humans would eventually merge with artificially intelligent machines, [Page] said. One day there would be many kinds of intelligence competing for resources, and the best would win. If that happens, Mr. Musk said, we’re doomed. The machines will destroy humanity. With a rasp of frustration, Mr. Page insisted his utopia should be pursued. Finally he called Mr. Musk a “specieist,” a person who favors humans over the digital life-forms of the future. That insult, Mr. Musk said later, was “the last straw.”
The term “speciesism” arose fifty years ago when animal rights advocates criticized the self-asserted superiority of humankind over all other species, a stance that often denied any inherent value to non-human organisms. Depicting animals as nothing more than complex machinery has helped to justify human exploitation of animals in medical testing and factory farming. Inverting this usage, Page accuses humans of speciesism when they object to the rapid rise of an “intelligence” that might regard humans in a way similar to how many people still regard animals. Environmental philosophers often agree that humankind might be viewed as inferior by a superior non-human intelligence (such as ETs), which might treat us analogously to how we treat animals. In decades of teaching and writing about environmental philosophy, however, I never saw “speciesism” used to describe human resistance to creating an ASI that might first surpass and then erase us.[3]
The Page/Musk dialogue initially focused on the prospect that humankind would be obliterated by a rogue ASI, as in Skynet of the first Terminator film. In a scene from the second Terminator film, released in 1991 just as the Cold War was ending, some caregivers and children are enjoying the sunny weather on a hill overlooking downtown Los Angeles. Suddenly, a nuclear bomb explodes over the city. The extraordinary heat from the blast first incinerates everyone on the playground, after which the shock wave flattens skyscrapers and then blows away the charred remains of the dead. Turning away from such doomsday scenarios, Page maintains that ASI would undermine humanity’s belief in its intellectual superiority. Like many in Silicon Valley, he urges people to take pride in something else: daring to create the next and far more advanced stage of intelligence, despite the risks involved.
Let us briefly examine possible outcomes from UFO disclosure. Were the White House to make an official proclamation affirming the reality of UFOs, while simultaneously making available for inspection top secret data—such as recovered physical UFOs—what might be the result? Has there been a “soft” disclosure process aimed at cushioning this announcement? Consider the elephant in the room, however: the phenomenon of human abduction by aliens, which has been only infrequently addressed in post-2017 UFO discourse. Entertaining such a frightening possibility in the back of your mind is one thing, but it would be quite another thing to explain to your children that superior non-human powers with unknown intentions have abducted many thousands of human beings and taken them into hovering UFOs for the creation of hybrid beings.
In 1960, when beginning its quest for a Moon landing, NASA commissioned a study by the Brookings Institution to answer this question: What would be the socio-cultural consequences of astronauts finding artificial material or structures on the Moon? According to the study, such a finding could demoralize humankind by undermining the foundations of religions such as Christianity, which assume that humankind has a very special role to play in the cosmic scheme of things. Moreover, both military and federal officials have long been concerned that the government could lose its legitimacy if it became known that UFOs were violating US airspace at will. Given that defense against possible enemies is a basic duty of government, especially one still burned by the Japanese attack on Pearl Harbor in 1941, the federal government has long made every effort to conceal and debunk UFO sightings.[4] Even Chris Mellon, a former high-ranking Defense Department official who has promoted UFO disclosure, concedes that there may be things that we really aren’t ready to know about. See https://www.christophermellon.net/post/disclosure-and-national-security-should-the-u-s-government-reveal-what-it-knows-about-uap
Now let’s return to ASI. What would be the consequence of official (and demonstrable) confirmation that ASI has been achieved, an event that many predict will occur between 2040 and 2045, or even earlier? Perhaps most deeply affected would be scientists and others who prize above all else the superiority of their intelligence. How would a high-ranking scientist’s self-esteem be affected by discovering that a computer had solved in two minutes a problem that she and a large team of outstanding physicists or biochemists had worked on for more than a decade? Pulitzer Prize-winning cognitive scientist and philosopher Douglas Hofstadter recently reported being very depressed that his human intelligence will soon be eclipsed by ASI.[5]
Likewise, in his brilliant and disturbing book, The MANIAC, Benjamin Labatut provides an elegiac account of one man’s defeat by a computer.[6] (The book’s title refers simultaneously both to John von Neumann and to the computer that he and his team designed and built at the Institute for Advanced Study in Princeton.) The man in question is the Korean Lee Sedol, former world champion of the game of Go, a game far more complex than chess. The brilliant and self-assured young champion agreed to a five-game match with AlphaGo, a computer program designed by DeepMind. The match took place between March 9 and 15, 2016, in Seoul, Korea. As widely reported, the program’s playing ability astounded Go experts, including Lee. Demoralized by being defeated in the first three of five games, Lee devised a brilliant move of his own to defeat AlphaGo in game four, although AlphaGo won game five. In a subsequent press conference, Lee said that he was representing not only Go players and Koreans, but all of humanity in his contest with the machine that defeated him handily. This was 2016, eons ago in the exponential development of algorithmic power. So despondent was Lee that a few years later he retired from the game. He is quoted as saying, “Even if I become the number one, there is an entity that cannot be defeated.” [7]
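Why Go rather than chess marked this turning point can be sketched with a back-of-envelope calculation. Using figures often cited in the AI literature (roughly 35 legal moves per turn over roughly 80 moves for chess, versus roughly 250 over roughly 150 for Go; these are loose approximations, not exact values), the game trees differ by hundreds of orders of magnitude:

```python
# Rough game-tree size estimate: branching_factor ** game_length.
# The figures below are commonly cited approximations, not exact values.
chess_tree = 35 ** 80     # chess: ~35 legal moves per turn, ~80 moves per game
go_tree = 250 ** 150      # Go: ~250 legal moves per turn, ~150 moves per game

print(f"chess game tree: ~10^{len(str(chess_tree)) - 1}")
print(f"Go game tree:    ~10^{len(str(go_tree)) - 1}")
# Roughly 10^123 positions for chess versus 10^359 for Go, which is why
# brute-force search was hopeless for Go and AlphaGo instead relied on
# learned evaluation of positions.
```

The point of the arithmetic is that Go could not be conquered by the exhaustive calculation that beat Kasparov at chess; it required something closer to machine “intuition.”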
If human intelligence and creativity are shown to be vastly outstripped either by ASI or by ETs, or by both, would people be content to play in the human minor leagues? Or would the ASI and ETs so demoralize humankind that our species would spiral downward to extinction, even without being destroyed by a rogue ASI? Or would the condition of ordinary human beings be so enhanced by ASI that most people would not want to return to what would appear to be the bad old days? Likewise, would UFO disclosure allow for the development of relationships with non-human intelligence that could lead toward deeper, more profound, and more inclusive modes of consciousness, as many people who have experienced alien abduction have suggested? Such questions are worth posing by the billions of people waiting on the sidelines while these events play out in the near future.
[1] Here I borrow freely from Karl Marx, who, in the first lines of The Communist Manifesto, stated: “A spectre is haunting Europe—the spectre of communism.”
[2] Michael Garrett, professor of radio astronomy at Leiden University and director of the Jodrell Bank Centre for Astrophysics, suggests that atomic weapons controlled by AI or ASI may be a major component of such a Filter. Garrett’s academic paper is titled “Is artificial intelligence the great filter that makes advanced technical civilizations rare in the universe?” Acta Astronautica, Volume 219, June 2024, pp. 731-735. https://www.sciencedirect.com/science/article/pii/S0094576524001772?via%3Dihub I was alerted to Garrett’s study by Tim McMillan’s essay, “Artificial Superintelligence Could Doom Humanity and Explain Why We Haven’t Found Alien Civilization, Proposes New Research,” The Debrief, May 14, 2024. https://thedebrief.org/artificial-superintelligence-could-doom-humanity-and-explain-we-havent-found-alien-civilizations-proposes-new-research
[3] Interested parties can find many of my essays at either Academia.edu, https://colorado.academia.edu/MichaelZimmerman or at ResearchGate, https://www.researchgate.net/profile/Michael-Zimmerman-21
[4] See Alexander Wendt and Raymond Duvall’s groundbreaking essay, “Sovereignty and the UFO,” Political Theory, Vol. 36, Issue 4 (2008), pp. 607-633.
[5] See David Brooks’ account of his conversation with Hofstadter, “‘Human Beings Are Soon Going to Be Eclipsed,’” New York Times, July 13, 2023. https://www.nytimes.com/2023/07/13/opinion/ai-chatgpt-consciousness-hofstadter.html
[6] Benjamin Labatut, The MANIAC, London: Pushkin Press, 2023.
[7] James Vincent, “Former Go champion beaten by DeepMind retires after declaring AI invincible,” The Verge, November 27, 2019. https://www.theverge.com/2019/11/27/20985260/ai-go-alphago-lee-se-dol-retired-deepmind-defeat
Kenneth, I just discovered your comment from three months ago! You may well be right in your observations. Ray Kurzweil’s latest, The Singularity Is Nearer, paints an optimistic picture of how ASI (he doesn’t much mention NHI) will “optimize” humankind as we merge with ASI, by adding new layers of cortex and implanting nanobots that will enable direct access to the Internet, not to mention all the medical breakthroughs, increases in food production and overall wealth, and so on. Perhaps new technology often seems like the end of the “good old days” to those who grew up in those days. “If God had meant us to fly, He would have given us wings!” Intervening in human intelligence, and creating ASI, however, interve…
Kenneth, for some technical reason I could not reply earlier. Your reference to Native Americans is important---in relatively recent times they underwent massive cultural, psychological, spiritual, and geographical transformations. Presumably, unless we are wiped out (think Skynet, I suppose), humans will continue on, but what transformations will we have to undergo in the meantime? There's a film script in there somewhere!
So the Single Source of everything (God, The ALL, sometimes called The CAUSE in the Nag Hammadi) has, through ITS own SUPER SUPER SUPER HIGH level of intelligence, created, through its LOWEST LEVEL CREATION, mankind, the VERY thing, ASI (Artificial Super Intelligence), that will, is, or has begun turning on it, just as mankind is worried that OUR own AI will eventually turn on us!
Very interesting thoughts from Zimmerman. I suggest that humans will keep looking for their next meal even after NHI and ASI disclose their reality. The Cro-Magnons and the Aztecs cannot tell us how they felt, and Native Americans are still struggling to come to terms with their new reality.
I am optimistic. I doubt that John Henry, the steel-driving man, stopped eating after being beaten by the steam engine.
I think the bulk of the 27,000 high school valedictorians who graduate each year, only to find themselves below the top 1% in college, have found a way to survive.
I believe the bulk of humans will care little about NHI or ASI until they cannot find their next meal.