
SETI, the Fermi Paradox and The Singularity: Why our search for extraterrestrial intelligence has failed

  • Introduction
  • Proposition
  • Can we avoid the Singularity?
  • Links
  • History
  • Footnotes
  • Rev: 30 May 2008.


    Introduction

    The Fermi Paradox was first stated by Enrico Fermi in 1950 during a lunchtime conversation. Fermi, a certified genius, used some straightforward math to show that if technological civilizations were common and moderately long-lived, then the galaxy ought to be fully inhabited [10]. The vast distances of interstellar space should not be a significant barrier to any such civilization -- assuming exponential population growth and plausible technology.
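
    The arithmetic behind Fermi's argument is easy to reproduce. Here is a rough sketch in Python; the figures (a 100,000 light-year galaxy, colony ships at 1% of light speed, a 500-year pause at each stop) are illustrative choices of mine, not Fermi's own numbers.

        # Rough, illustrative Fermi-style estimate (not Fermi's actual figures).
        # Assumption: a colonization wavefront hops between stars ~5 light-years
        # apart at 1% of light speed, pausing 500 years at each stop.

        GALAXY_DIAMETER_LY = 100_000   # illustrative galactic diameter
        HOP_LY = 5                     # typical distance between neighboring stars
        SHIP_SPEED_C = 0.01            # 1% of light speed
        PAUSE_YEARS = 500              # settling time before launching the next ships

        hops = GALAXY_DIAMETER_LY / HOP_LY
        years_per_hop = HOP_LY / SHIP_SPEED_C + PAUSE_YEARS
        crossing_time = hops * years_per_hop

        print(f"Crossing time: ~{crossing_time / 1e6:.0f} million years")
        # ~20 million years -- a tiny fraction of the galaxy's multi-billion-year
        # history, which is why even one expansionist civilization should already
        # be everywhere.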

    "Contact" should thus be completely inevitable; we ought to find unavoidable evidence of "little green men" all about us. Our Search for Extraterrestrial Intelligence (SETI) should have been quickly successful.

    We don't. It hasn't been. That's the paradox.

    This paradoxical failure is sometimes called "The Great Silence". The Great Silence suggests that space-traveling technological civilizations are extremely rare (or very discreet [8]). There have been a number of explanations for why such civilizations might be rare. I list four explanations below. You can choose the one you like; they are as close to destiny as we are likely to get.

    1. Technological civilizations may rarely form. We live in a very dangerous universe. One big gamma-ray burster can sterilize a galaxy. Supernovae are common, and they sterilize a pretty good chunk of space every time they blow. Intelligence might be hard for natural selection to produce, or perhaps multicellular organisms are hard to make. This thesis was well presented in a July 2000 Scientific American article, "Where Are They?", by Ian Crawford. Vernor Vinge, in his science fiction murder mystery Marooned in Realtime, includes "rare intelligence" among the several hypotheses he suggests.
    2. Technological civilizations may be very short-lived; they may universally fail. We've lived with nuclear weapons for a while, but our past challenges are dwarfed by the growing problem of "Affordable Anonymous Instruments of Mass Murder". The latter problem will afflict every technological civilization. This is the most common of the "universal failure" explanations. It is easy to see how this might be so for humanity, but need all sentient entities be as self-destructive as we are [11]?
    3. The universe we live in was designed so that we would be alone. There are a few variants on this idea, but they're fundamentally very similar. I list three here. In some ways the Fermi Paradox may be an even stronger "existence of God" argument than the usual "balance of physical parameters" argument.
      1. Some non-omnipotent entity created our universe (there are allegedly serious physicists who speculate about how one might create a universe) and deliberately tweaked certain parameters so that sentience would occur on average about once per galaxy. Maybe they lived in a crowded galaxy and thought an alternative would be interesting.
      2. God created the world in 7 days, and He made it for man's Dominion. He didn't want anyone else in our galaxy, maybe in the entire universe.
      3. Nick Bostrom makes a credible argument [9] that there's a reasonable likelihood that we exist in a simulation. If so, then perhaps the existence of non-human civilizations does not suit the purposes of the simulation. (This could be considered a special case of "God created the world...")
    4. All technological civilizations may lose interest in exploration quickly and comprehensively, in spite of whatever pre-singular predilections they might have had. That's the theme I'll explore below. It's a kind of variant of the "self-destruction" solution, but I think it's more likely to be universal and inevitable.

    What would cause all technological civilizations everywhere to lose interest in colonizing the universe -- despite whatever biologic programming they started with? The process would have to be inescapable, perhaps an inevitable consequence of any system in which natural selection operates [1]. Vinge and others suggest that this something is the "Singularity" (see below), a consequence of hyperexponential growth. In the set of "universal failure" solutions to Fermi's Paradox this is sometimes called the Transcendental solution or the Deification solution. The following Proposition outlines a candidate process for inevitable and universal disinterest.

    Although this web page focuses on the Transcendental solution, it's likely that the "Great Silence" is multi-factorial. It is a dangerous universe, and many civilizations may have been fried by gamma-ray bursters and supernovae. Perhaps intelligence is relatively rare, perhaps we are indeed "early arrivals", perhaps some societies do self-destruct, and perhaps (as proposed below) many "transcend" and lose interest in mere physicality. In a universe as large as ours, anything that can happen will happen.

    [BTW: I first put this page up in June 2000 and I thought I was being fairly clever then. Alas, I later discovered that most (ok, all) of the ideas presented here were earlier described by Vernor Vinge in "Marooned in Realtime" -- first published in 1986 and reissued in 2001.[3] John Smart tells me he started writing about this in 1972. Many science fiction writers explored these ideas in the 1990s, including Sawyer, Baxter, Brin, etc. So, there's nothing truly new here, but I've kept the page as a possibly useful introduction. My ongoing comments on the topic are published in Gordon's Notes and periodically collected here.]

    Proposition: The transcendental solution to the Fermi Paradox

    (Originally submitted as a letter to Scientific American, June 17, 2000 in response to Where Are They? July 2000)

    The simplistic response to Fermi's paradox is that industrial civilizations inevitably self-destruct. Ian Crawford points out that it is improbable that all civilizations would do this, and that even one persistent industrial civilization should provide us with evidence of its existence. There is, however, an answer to this paradox other than inevitable industrial self-destruction. It is a more plausible solution, but not necessarily a more pleasant one.

    It may be that once the drive to intelligence begins, it develops an irresistible dynamic[1]. Consider the time intervals required to produce multicellular organisms (quite long), then basic processing (insects), social processing (reptiles), social communication (mammals), spoken language (human primates), writing/reading[2], and then computing. At each step in this processing curve the time intervals to the next inflection shrink.

    There is no reason to assume that the curve stops with us. There may be only a few hundred years between industrial civilization and silicon/nanonic processing. Beyond that, speculation is impossible; non-organic minds would operate on qualitatively different timescales from ours. Vinge, Joy, Kurzweil and others describe this as the Singularity (see Links).
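
    To see why such a curve has a natural endpoint, note that if each interval to the next inflection is a fixed fraction of the one before it, the intervals form a geometric series that converges on a finite date. The toy numbers below are arbitrary choices of mine; only the shape of the curve matters.

        # Toy illustration: intervals that shrink geometrically converge on a finite date.
        # The starting interval and shrink factor are arbitrary, not measured values.

        interval = 1_000_000   # years to the first transition (arbitrary)
        ratio = 0.25           # each subsequent interval is a quarter as long (arbitrary)

        elapsed = 0.0
        for step in range(1, 11):
            elapsed += interval
            print(f"transition {step:2d}: +{interval:12,.0f} yr  (running total {elapsed:14,.1f} yr)")
            interval *= ratio

        # The running total approaches 1_000_000 / (1 - 0.25) ≈ 1.33 million years and
        # never exceeds it: an accelerating curve of this kind piles every later
        # transition up against a fixed point in time -- a "singularity".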

    It may be that the kinds of civilizations we might communicate with typically exist only for a few hundred years. During their short existence they produce only the kinds of radio output that we produce. In other words, they are very short-lived and the radio output is in the "dark" area of the SETI exploration space. With intense study we may detect one or two of our fellow organic civilizations in the short time that we have left -- perhaps within the next 50 to 100 years.

    We may even be able to see their radio emissions go silent, replaced by the uninterpretable communications of silicon minds -- shortly before our emissions do the same thing.

    [Figure: GDP/person of Western Europe, from The Economist's Millennium Issue (copyright Economist.com; 5 MB PDF version available).]

    Can we avoid the Singularity?

    What happens to those post-Singular civilizations? Assuming Singularity occurs, they would not likely be vulnerable to natural threats. Why then would they lack the ability or motivation to settle the galaxy? A bleak explanation is that no biological culture, programmed by evolution to expand and explore, survives Singularity. Many commentators have suggested humans, or human culture, might survive in some form if humans augment their biological capabilities by direct brain-computer interface.[7]

    I'm skeptical; I think it would be a bit like strapping a 747 engine to a Model T. After ignition there wouldn't be much left of the contraption. The time-signatures of a biological and an abiological system are too different; one is limited by chemistry and evolution, the other by the speed of light.

    If we're lucky it will turn out that there really is some fundamental obstacle to building sentient machines, and it will take centuries or millennia to build one. In that case there's presumably some other explanation for the Fermi paradox.

    Luck would be good, but is there anything else we can do? Bill Joy advocates banning work on sentient machines, but even before 9/11 basic market forces would make a ban untenable. Now we have security concerns, and projects like TIA Systems will move us along just a little bit faster.

    I think of humanity as a swimmer swept out to sea. The tide is too strong to oppose, but the swimmer suspects there's a spit of land they can grab on the way to the ocean. They swim with the current, at an angle to the full force of it, hoping to hit the last land before Tahiti.

    We can't really oppose the forces that are pushing us this way. Even if we could stop work in this area, we are in such peril from social, ecological, political and economic crises that we probably can't maintain our civilization without substantial scientific and technologic breakthroughs. We're addicted to progress, and it's too late for us to stop.

    Maybe on the way out to sea we can hit some land. We might assume we'll create our replacements, but seek to create them in a certain way -- merciful, mildly sentimental, and with a gentle sense of humor. [5]

    Links

    Vinge and Related

    Vernor Vinge retired from SDSU in August 2000. A Salon article has some 1999 notes on his work. Please email me if you learn of a site that he maintains.

    David Brin

    David Brin, best known as a science fiction writer but also a working scientist, wrote a scholarly article on this topic in 1983 (which I didn't know of until 10/05).

    David's science writings cite a few similar references. He writes:

    ...John, my 1983 paper is: Quarterly Journal of Royal Astronomical Society, fall 1983, v. 24, pp. 283-309 (27-page scanned article). Also see: Am. J. Physics, Jan 89 - Resource Letter on Extraterrestrial Civilization...

    The unique feature is that it remains the ONLY scientific paper about SETI that attempts to review all ideas, instead of using zero evidence to support a single adamant theory. In 22 years, there has still not been any paper as comprehensive. I was supposed to write a book, but who has time ...

    Scanning the article, I see it concludes that the most likely explanations for 'The Great Silence' are either 'hostile probes' (Brin returned to the 'hostile probes' theme 20 years later when he wrote a book in the post-Asimov Foundation series) or 'ecological catastrophe'. At that time Brin didn't reference the 'transcendental explanation' or the 'designed universe' explanation -- or at least I didn't read those. A similar article written today would probably have focused on how chaotic and risky the galaxy appears to be. In many ways the Scientific American article that inspired the original version of this page is a descendant of Brin's article.

    Other writers

    From Finland

    One day, searching the Google Usenet archives for the string "Faughnan", I came across a post in Finnish (I think) by Otto Makela. I have no idea what Otto was saying, but he referenced this site and three others.

    Transhumanism

    I'm not a transhumanist (nor, especially, its Extropian variant). For one thing, I'm way too pessimistic to qualify and I'm not a libertarian. They do, however, have an interest in these topics. Transhumanism is a relatively new version of older philosophical ideas, including some ascribed to the legendary first "philosopher" - Lucifer.

    Estimating Earthlike Planets

    Gordon's Notes

    Here's the Fermi Paradox tag-set of the blog, which includes ...

    Misc

    History

    Footnotes

    [1] If information processing (IP) is an adaptive advantage in systems where natural selection applies (e.g. all systems -- see below), then it will increase over time. As Vernor Vinge wrote, when an entity can use its own IP abilities to increase its own IP capability, then growth is supra-exponential.
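
    A minimal way to put this: plain exponential growth (dI/dt = kI) never reaches infinity, but if the growth rate scales faster than linearly with capability (dI/dt = kI^p with p > 1), the solution diverges at the finite time t* = I0^(1-p) / (k(p-1)). The little simulation below just checks that closed form numerically; the constants are arbitrary.

        # Numerical check that dI/dt = k * I**p with p > 1 blows up in finite time,
        # unlike plain exponential growth (p = 1). Constants are arbitrary.

        k, p, I0 = 1.0, 2.0, 1.0
        t_star = I0**(1 - p) / (k * (p - 1))   # predicted blow-up time (= 1.0 here)

        I, t, dt = I0, 0.0, 1e-4
        while I < 1e9 and t < 2.0:
            I += k * I**p * dt   # simple Euler step
            t += dt

        print(f"predicted blow-up at t* = {t_star:.3f}")
        print(f"capability exceeded 1e9 at t = {t:.3f}")
        # With p = 1 the same loop would merely double every ~0.69 time units and
        # never diverge; the feedback exponent p > 1 is what makes the growth
        # "supra-exponential".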

    Natural selection applies to systems where there is competition for scarce resources, inherited variability, and where some variations enhance competitive advantage. These systems may be biological or a pure information system -- such as an economy. Natural selection then applies to the human brain, human information processing tools (writing, calculating) and computing systems -- and perhaps to the cosmos itself.

    One may imagine "intelligence", or information processing, as a parasitic process which begins on simple chemical systems and migrates across various hosts. On our planet the primary host is humans, but computational devices are secondary hosts as are, to some extent, books. The host may change, but the process is self-perpetuating.

    (This is not an entirely untestable hypothesis; I suspect it has been tested in simulated evolutionary models. There is one odd historical example, though it seems so odd as to be more likely coincidental. In Matt Ridley's book Genome he writes "the brains of the brainiest animals were bigger and bigger in each successive [geologic] age: the biggest brains in the Paleozoic were smaller than the biggest in the Mesozoic, which were smaller than the biggest in the Cenozoic, which were smaller than the biggest present now" (p. 27). Unfortunately Mr. Ridley did not cite a source for this statement, but a quick Google search found this possibly relevant reference. Stephen Jay Gould prefers to think of the "species" (rather than the individual or the gene) as the natural focus of evolution; within the context of his perspective abiologic intelligence is just another variant in the history of "life".)

    [2] Language seems to have arisen about 120,000 years ago, possibly after a long series of small mutations and alterations to brain structures. Symbolic reasoning as represented in cave paintings is probably about as old, but writing and reading are more recent. The neural pathways used in reading suggest that there may have been quite recent (10,000 years) mutations that have provided most humans with a hard-wired, optimized reading mechanism, a variant perhaps of synesthesia. All humans can process symbols through the frontal cortex, but that is a slow and imprecise way to read -- an optimized system is much more efficient.
    [3] Vinge's writings contain most of these ideas. He didn't directly refer to the Fermi paradox in his 1986 book, but he was explicit that the "silence" was a result of post-Singular civilizations having little interest in communication. He thought that human intelligence would grow in union with machine enhancement.

    In 1997 Toth-Fejel (The Great Silence, Fermi’s Paradox, and The Singularity) connected the Fermi Paradox to Vinge's Marooned in Real Time (1986).

    The earliest reference I've found to a 'universal singularity' explaining the Fermi Paradox is a 1982 short story by a very young Greg Bear - "Blood Music". Bear's universal doom is "squishy", in keeping with his love of biological computation, but it's clearly a singularity. At the moment Bear wins the prize for the earliest written statement of this solution for the paradox.

    Many years after Bear and Vinge, Kurzweil came to similar conclusions. See his article and within it search for "SETI". He's definitely a latecomer, however.

    The implications of the Fermi Paradox are an increasingly common theme in early 21st century science fiction. Many writers are struggling to provide interesting narratives that are compatible with the Fermi Paradox, and most seem to tend towards a "Transcendental" explanation (like this one). Unfortunately transcendental cultures don't make interesting stories, since they are by definition uninterpretable. Nature (March 16, 2000), for example, published a very brief story by David Brin, "Reality Check", that provided a darkly ironic explanation of the Fermi Paradox. (Follow the link to read the story; Brin's 1987 "Lungfish" also explored the Fermi Paradox.)

    [4] Moravec, Hans. Robot: Mere Machine to Transcendent Mind. Oxford: Oxford University Press, 1999.
    [5] The obvious theological comparisons are left to the reader's imagination.
    [6] The Whole Earth Review was (is?) a less commercial and somewhat smarter precursor to Wired. It has a quite interesting history with a peculiar continuity between psychedelia, new age, and post-technologic worlds. I subscribed in the late 80s and early 90s. Bizarrely I actually have this issue! It was the last one I received, and I am reasonably certain I didn't read Vinge's article -- by then I was too busy with other projects to read the magazine and I let my subscription lapse after this one. Now, of course, I'll have to keep it in better condition!
    [7] Andrew Lias (personal correspondence) credits Vinge with the argument that if it is very hard to build a de novo AI,  we may instead concentrate on augmenting human intelligence. That would make future human/AI "competition" less one-sided.

    Andrew/Vinge has a good point. I think we might begin with non-human bio-based AIs, at least as a transitional state. Sadly these might first be based on a dog brain, but consider the consequences of a cat-brain based AI! (Indeed in a universe as large as ours, such consequences have probably occurred. That might explain quite a bit about the way the universe works.)

    [8] Stephen Baxter's Manifold Time explores in some detail why discretion might be wise. If a space faring civilization were only a bit more nasty, ruthless, xenophobic and paranoid than we are, they would sterilize us without blinking an optical sensor. Why allow a potential rival to develop?
    [9] Are You Living in a Computer Simulation? Nick Bostrom, Department of Philosophy, Oxford University. Philosophical Quarterly (2003), Vol. 53, No. 211, pp. 243-255.

    I can imagine that post-human entities might have good reasons not to run such simulations, and I suspect post-human entities are more likely to be descendants of our machines than of ourselves. If this reasoning is correct, then we are probably not simulations. (Bostrom considers these possibilities.)

    Vernor Vinge wrote a very short story for one of the millennial year issues of either Nature or Science where he explores exactly this premise in a fascinating and readable one-page exposition. I can't find the reference yet, however.

    It's interesting to think about how one would "detect" a simulation. As usual (sigh) Vinge has explored this in a 2004? short story about a woman who realizes she's been trapped in a simulation. In general it seems reasonable to assume that the simulation would be imperfect, that it would generate "artifacts" in the same sense that JPEG lossy compression of a striped image will produce artifacts. One might, for example, look for physical phenomena consistent with techniques that would reduce the computational demands of the simulation. Or one might attempt to expose flaws (bugs) in the simulation -- or even "crash" it (risky!). Or one might identify physical phenomena that are inexplicable except in the context of a "simulation" (such as "connectedness" in theoretically disconnected parts of the universe).

    [10] The Drake Equation is a later but related framework for estimating the frequency of technological civilizations.
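
    For reference, the Drake Equation is the product N = R* x fp x ne x fl x fi x fc x L, the expected number of detectable civilizations in the galaxy. The form of the equation is standard; the parameter values in the sketch below are purely illustrative guesses of mine, since most of them are unknown.

        # Drake Equation: N = R* * fp * ne * fl * fi * fc * L
        # The equation's form is standard; every value below is an illustrative guess.

        R_star = 7      # new stars formed per year in the galaxy
        f_p    = 0.5    # fraction of stars with planets
        n_e    = 2      # habitable planets per planetary system
        f_l    = 0.5    # fraction of those that develop life
        f_i    = 0.1    # fraction of those that develop intelligence
        f_c    = 0.1    # fraction of those that become detectable (radio, etc.)
        L      = 1000   # years a civilization remains detectable

        N = R_star * f_p * n_e * f_l * f_i * f_c * L
        print(f"Expected detectable civilizations: {N:.0f}")
        # ~35 with these guesses; shrinking L (the theme of this page) drives N toward zero.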

    By way of a simplistic comparison, the earth is about 10^13 bacteria across; that's how many bacteria, laid end to end, would stretch from pole to pole. In other words, 10,000,000,000,000 bacteria. That's a lot.

    The galaxy is about 10^16 humans across. That's about 10,000,000,000,000,000 humans. That's a lot, but the galaxy is, proportionately, only about 1,000 times bigger on our scale than the earth is to a bacterium. The earth is a sphere, not a disc, so accounting for that the space to be spanned is roughly similar (galaxy for humans, planet for bacteria).

    We believe bacteria covered the habitable zone of the earth very early in its history, from the lower crust to the upper seas. Similarly, assuming exponential growth and sublight travel speeds, a human-like species ought to have spread quickly across the galaxy, filling every niche. That species should be inescapable.

    So where are they? That's the Fermi Paradox. Assuming our fundamental physics is basically right, some combination of the scarcity of sentient life and the interruption of exponential growth must account for our not-overflowing galaxy.

    [11] There's a basic principle in statistics that if you have a sample of one, you might as well assume it's average. This works because many distributions are roughly "bell" shaped -- and most items sit towards the center (mean) of the curve. Odds are, a random item is near average. We have a sample of one 'technologic civilization'; using this rule, we could consider ourselves pretty average. Some civilizations would thus be far more violent and irrational than us. We can only hope they eliminate themselves. Others, however, ought to be inversely angelic -- balanced and rational. They ought not to eliminate themselves. The "filter" of self-destruction does not seem sufficiently strict.
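
    The "sample of one" rule is easy to check numerically: for a roughly bell-shaped distribution, a single random draw lands within one standard deviation of the mean about two-thirds of the time. The quick simulation below assumes a normal distribution purely for illustration.

        # Quick check of the "a sample of one is probably near average" rule.
        # Assumes a normal (bell-curve) distribution purely for illustration.
        import random

        random.seed(0)
        trials = 100_000
        near_average = sum(abs(random.gauss(0, 1)) <= 1 for _ in range(trials))

        print(f"Single draws within 1 standard deviation of the mean: "
              f"{near_average / trials:.0%}")   # ~68%
        # So, knowing nothing else, our one known technological civilization is most
        # likely somewhere near the middle of whatever distribution civilizations follow.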

    Author: John G. Faughnan.  The views and opinions expressed in this page are strictly those of the page author. Pages are updated on an irregular schedule; suggestions/fixes are welcome but they may take weeks to years to be incorporated. Anyone may freely link to anything on this site and print any page; no permission is needed for citing, linking,  printing, or distributing printed copies.