Michael E. Zimmerman
michaelz@colorado.edu
Revised on December 20, 2023
While I was working on an earlier version of this essay, the June 4, 2023 issue of The Debrief published Leslie Kean and Ralph Blumenthal's story about David Grusch, a former intelligence officer who claims that the US has retrieved UAP craft and holds them in its possession. The story may or may not confirm long-suspected military and corporate efforts to reverse-engineer technology from recovered non-human craft. In the months since Grusch testified under oath before a Congressional committee, his testimony has been scrutinized and his character and sanity questioned. Despite hopes that his testimony might prompt some measure of UAP disclosure, in December 2023 the House removed from the Defense Appropriations bill most of the language requiring such disclosure, leading people to ask: if there's no "there" there, what is the reason for declining to disclose what is there? As the French say, the more things change, the more they stay the same.
Modern UAP history began with the Foo Fighters reported by pilots in all theaters of World War II. In his recently published book, Strange Company: Military Encounters with UFOs in World War II, Keith Chester provides an exhaustive analysis of Foo Fighters, which remain unexplained. Starting in 1944, UFOs were frequently spotted over Los Alamos, Oak Ridge, Hanford, White Sands, and other key sites of atomic bomb design, testing, and production. The most famous UFO event occurred in 1947, when a flying saucer and its occupants allegedly crashed in the vicinity of the world's only atomic bomber squadron at the Army Air Force base in Roswell, New Mexico. The Kean/Blumenthal story gives some credence to those who claim that the US in fact recovered a UFO near Roswell, as well as several other non-terrestrial craft, although whether the occupants of such craft are extraterrestrials, ultraterrestrials, intraterrestrials, interdimensional beings, or even humankind from the future remains uncertain, at least to the general public.
The Kean/Blumenthal story does not explicitly state the ongoing role played by digital technology in reverse-engineering, nor does it mention the related context in which the story appeared, namely, the release of ChatGPT and other large language models (LLMs) that mark an inflection point on the road not just to artificial intelligence (AI) but to artificial general intelligence (AGI). For some high-tech leaders, witnessing the astonishing (and emergent) capacities of the LLMs that they themselves created and unleashed upon the world is in some ways akin to witnessing the explosion of the first atomic bomb in the high desert of New Mexico in 1945. After the blast, J. Robert Oppenheimer quoted from the Bhagavad Gita: "Now I am become Death, the destroyer of worlds." The atomic bomb ushered in a new moment in human history: from that point on, humankind has possessed the technology needed to destroy itself. Added to this concern is the possibility that AGI is making possible human self-erasure in ways that are only dimly foreseeable.
The director of The Bulletin of the Atomic Scientists, in a recently co-authored piece called "AI is the Atomic Bomb of the 21st Century," maintains that AI's exponential development constitutes an existential threat as important as nuclear weapons. One difference is that such weapons are typically controlled by a few governments, whereas AI is not yet under such supervision. Private corporations currently design AI systems and make them available without seeking anyone else's approval, although governments are starting to demand guardrails that prevent AI from escaping human control.
Because leading AI researchers were long aware of the threats (as well as the many promises) posed by AI/AGI, people have been asking: Why have those researchers waited until now to warn the rest of us? In a recent interview with David Remnick, AI pioneer Yoshua Bengio replies that it was “more psychologically comfortable to think that this is good for humanity than to think this could be really destructive. That’s part of the problem with humans. We’re not always rational.” Bengio’s remarks call to mind Oppenheimer’s comment about creating the atomic bomb. “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.” Many critics contend that AI’s existential threat arises in part precisely from its relentless rationality, which may conclude that exterminating humankind would be a logical step required to achieve some goal that humans may once have thought to be desirable.
A key figure in these and many other techno-scientific matters was the 20th century Hungarian-born polymath, John von Neumann, who among other things made important contributions to quantum theory, developed the architecture underlying modern computers, and in a 1958 interview described the "ever accelerating progress of technology and changes in the mode of human life, which give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, cannot continue." Von Neumann also played an important role in the Manhattan Project aimed at creating the atomic bomb during World War II, and also recommended a first-strike nuclear attack on the USSR. Even before the first atomic bomb was detonated, he and other scientists were hard at work on what became MANIAC, the aptly-named computer that was crucial for designing the first hydrogen bomb. (See Ananyo Bhattacharya, The Man from the Future: The Visionary Life of John von Neumann.) After the H-bomb’s first detonation in 1952, the “nuclear shadow” hanging over the future of humankind became even darker.
Given the vast reach of his interests, von Neumann knew that major sites related to the design, production, and testing of atomic bombs had often been overflown by UFOs. In view of how technologically advanced the UFOs were, von Neumann, like many others, speculated that they might be spaceships from other star systems, although he wondered how they could ever reach Earth across such vast distances. To answer this question, he envisioned vessels capable of repairing and replicating themselves over thousands of years, vessels that came to be known as "von Neumann probes." What about the pilots? They, too, would have to be highly intelligent machines capable of self-replication. Perhaps the small greys reportedly involved in "alien abductions" are instances of such mechanical or robotic beings. Their intelligence might be very different from our own, but nevertheless capable of studying and comprehending human behavior for ends that we do not understand. Such intelligence would not have to be conscious in the way human intelligence is.
This point was underscored recently by Geoffrey Hinton, sometimes known as the "godfather of AI," who resigned from Google so that he would be free to highlight the dangers posed by AI. In an interview with MIT's Technology Review, Hinton said: "These things [chatbots and other early forms of AI] are totally different from us. … Sometimes I think it's as if aliens had landed and people haven't realized because they speak very good English." Chatbots show that non-conscious, LLM versions of AI can speak very good English, indeed, and they're just getting started, with their successors threatening to overwhelm human institutions and human standing by the end of this decade.
In effect, AI could be similar to a zombie, a possibility that has become a fruitful topic among philosophers. (See https://plato.stanford.edu/entries/zombies/) Philosophers define zombies as beings that behave like humans but lack any subjectivity or interiority. Is it possible that Earth is being surveilled by von Neumann probes piloted by machine zombies or robots whose intelligence far outstrips anything we can currently even imagine? For such beings, learning about human behavior, motivations, intelligence, and so on would not necessarily be a great challenge—just witness what LLM versions of AI have already managed to do. Even if such robots were to exist, there might also be other non-human but sentient, conscious beings observing and even intervening in human affairs.
Not long after the invention of atomic weapons, growing numbers of people began reporting that they had experienced abduction by non-human beings, often resembling the classic greys but taking other forms as well. According to Karin Austin, director of the John E. Mack Institute, historical records suggest that abductions began in earnest in the late 1950s, reached peak intensity during the 1980s and 1990s, and then ceased around the turn of the century. (Personal communication.) A Pulitzer Prize winner and a Harvard psychiatry professor, Mack was a major intellectual leader of the anti-nuclear war movement during the 1980s, when atomic war between the US and USSR seemed so close that nuclear "metaphysicians" on both sides gave it 50/50 odds. Around 1990 Mack began receiving calls from people seeking his professional help for the unsettling aftermath of having (allegedly) been abducted by non-human aliens. In addition to the "ontological shock" they experienced by being floated by alien greys through windows and walls into hovering UAP, many of these experiencers claim to have had their sperm or their eggs removed for the sake of producing human-alien hybrids. Abduction experiencers often distinguish the small alien greys, which seem somehow mechanical in their behavior, from taller greys and from other aliens who seem conscious. Were the small greys robots designed to carry out the dangerous work of interacting with creatures as unpredictable and violent as human beings?
(On these matters, see John E. Mack, Abduction: Human Encounters with Aliens and Passport to the Cosmos. See also my essays on this topic "The Alien Abduction Phenomenon: Forbidden Knowledge of Hidden Events" and "Encountering Alien Otherness".)
The very possibility of human-alien hybrids suggests that aliens and humans are somehow related. But how are they related, and why are hybrids needed? People offer speculative answers to these questions. For example, one author has recently argued that "aliens" are what humankind will have become many years from now. Confronted with their deteriorating DNA, perhaps they have been using time-travel technology to extract healthy DNA from their predecessors. (See Michael P. Masters, Identified Flying Objects.) Aliens may also be seeking to preserve human DNA threatened by nuclear war, a threat that remains with us to this very day. Perhaps this is why so many abduction experiencers report being shown powerful visions of a planet Earth that has suffered enormous environmental damage, possibly from nuclear war or possibly from the slower-motion destruction wrought by today's massive, planet-wide industrialization. To be sure, nothing is certain about the origins, ontological status, or intentions of aliens, who only infrequently offer more explicit explanations for their uninvited intrusions.
Finally, a comment about the debates now raging regarding the possibly existential threats posed by AI. Many experts maintain that before rehearsing nightmare scenarios depicted in The Terminator and other sci-fi films, we should focus on the shorter-term threats posed by generative AI, such as amazingly high-quality fake videos of events and personages designed to mislead and confuse. Even among corporate leaders, there is an increasing call for governments to regulate AI development, since for-profit corporations are evidently incapable of doing so, given that being left behind in the race for AI could mean corporate extinction.
Focusing on short-term AI threats is important, but it is equally important to recall that AI development occurs not at a linear rate but at an exponential one. While it took several years for AI to move from defeating the world chess champion to defeating the champion of the far more complex game of Go, the move from an LLM such as ChatGPT to AGI will occur far more rapidly. Most of us are not privy to what corporate giants are about to unleash upon the world, but in comparison to it, today's LLMs will appear relatively primitive.
Currently, weapons designers, digital specialists, and manufacturers are engaged in a new sort of arms race: the quest to build autonomous weapons systems. (See Paul Scharre, Army of None: Autonomous Weapons and the Future of War.) That such systems will be developed seems inevitable, if the history of atomic weapons offers a precedent. Perhaps SKYNET is not as far off as most of us would like to think. And perhaps the credibility of abduction experiencers, to whom a grim human and planetary future has been revealed, grows as well. Perhaps the point of such revelations is to prompt a change in human behavior. Whether humankind can muster what it takes to avoid such a future remains to be seen, however.
I welcome comments on and recommendations for improving this post!
Post corrected by author on June 27, 2023. Thanks to Karin Austin for her suggestions.
We appear to be on the verge of confronting ontological shock from both the artificial intelligence we are creating ourselves and the disclosure of non-human intelligence associated with the UAP phenomenon.
There's plenty of online commentary about the disruption caused by each of these events, but I can't find much discussion about how we are going to cope with them simultaneously.
Lately I feel like I'm a passenger in a runaway train. Does SUAPS need to start partnering with groups currently focusing on AI?
Hi Michael,
Thanks so much for this piece. I think you give us important things to think about in terms of the looming AI threat and how that may relate to UAP issues. I read a recent article on AI by Naomi Klein that I thought was very insightful: https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein She argues that those trying to market and promote all the promises and benefits of AI are kidding themselves, because it's not just the AI technology, but the political and economic climate in which it is emerging. Given our current culture rooted in capitalism, consumerism, and greed, the chances of us using AI responsibly with adequate regulations are extremely slim.
In terms of the link between the hybrid program and some kind of…