Let’s get back to some classic Beta Nerd… an overabundance of white space, minimal colour and graphics, and a whole lotta science. Oh hells yeah!

I found this article on faughnan.com, the personal site of John Faughnan. Be sure to check out his many links and the page history on the original article here.

Without further ado: SETI, the Fermi Paradox and The Singularity

Introduction

The Fermi Paradox was first stated by Enrico Fermi in 1950 during a lunchtime conversation. Fermi, a certified genius, used some straightforward math to show that if technological civilizations were common and moderately long-lived, then the galaxy ought to be fully inhabited [10]. The vast distances of interstellar space should not be a significant barrier to any such civilization – assuming exponential population growth and plausible technology.
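
A rough sense of Fermi’s arithmetic can be sketched in a few lines. The numbers below are illustrative assumptions (hop distance, ship speed, settling pause), not Fermi’s own figures; the point is only that the galaxy-crossing time comes out tiny compared with the galaxy’s age.

```python
# A back-of-envelope sketch of the colonization-wave argument.
# All parameter values are assumptions chosen for illustration.

GALAXY_DIAMETER_LY    = 100_000   # rough diameter of the Milky Way
GALAXY_AGE_YEARS      = 1e10      # order-of-magnitude age
SHIP_SPEED_FRACTION_C = 0.01      # 1% of light speed: modest technology
HOP_DISTANCE_LY       = 10        # assumed distance between colonies
PAUSE_PER_HOP_YEARS   = 400       # assumed settling time per colony

# Effective expansion: each hop costs travel time plus a settling pause.
years_per_hop = HOP_DISTANCE_LY / SHIP_SPEED_FRACTION_C + PAUSE_PER_HOP_YEARS
hops          = GALAXY_DIAMETER_LY / HOP_DISTANCE_LY
crossing_time = hops * years_per_hop

print(f"Crossing time: {crossing_time:.1e} years")   # ~1.4e+07
print(f"Fraction of galactic age: {crossing_time / GALAXY_AGE_YEARS:.2%}")
```

Even with these conservative guesses, a single expansionist civilization sweeps the galaxy in about 14 million years – well under a percent of the galaxy’s age. Hence “fully inhabited”.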

“Contact” should thus be completely inevitable; we ought to find unavoidable evidence of “little green men” all about us. Our Search for Extraterrestrial Intelligence (SETI) should have been quickly successful.

We don’t. It hasn’t been. That’s the paradox.

This paradoxical failure is sometimes called “The Great Silence”. The Great Silence suggests that space-traveling technological civilizations are extremely rare (or very discreet [8]). There have been a number of explanations for why such civilizations might be rare. I list four explanations below. You can choose the one you like; they are as close to destiny as we are likely to get.

  1. Technological civilizations may rarely form. We live in a very dangerous universe. One big gamma-ray burster can sterilize a galaxy. Supernovae are common, and they sterilize a pretty good chunk of space every time they blow. Intelligence might be hard for natural selection to produce, or perhaps multicellular organisms are hard to make. This thesis was well presented in Ian Crawford’s July 2000 Scientific American article, “Where Are They?”. Vernor Vinge, in his science fiction murder mystery Marooned in Realtime, includes “rare intelligence” among the several hypotheses he suggests.
  2. Technological civilizations may be very short-lived; they may universally fail. We’ve lived with nuclear weapons for a while, but our past challenges are dwarfed by our growing “Affordable Anonymous Instruments of Mass Murder” problem. That problem will afflict every technologic civilization. This is the most common of the “universal failure” explanations. It is easy to see how this might be so for humanity, but need all sentient entities be as self-destructive as we are [11]?
  3. The universe we live in was designed so that we would be alone. There are a few variants on this idea, but they’re fundamentally very similar. I list three here. In some ways the Fermi Paradox may be an even stronger “existence of God” argument than the usual “balance of physical parameters” argument.
    1. Some non-omnipotent entity created our universe (there are allegedly serious physicists who speculate about how one might create a universe) and deliberately tweaked certain parameters so that sentience would occur on average about once per galaxy. Maybe they lived in a crowded galaxy and thought an alternative would be interesting.
    2. God created the world in 7 days, and He made it for man’s Dominion. He didn’t want anyone else in our galaxy, maybe in the entire universe.
    3. Nick Bostrom makes a credible argument [9] that there’s a reasonable likelihood that we exist in a simulation. If so, then perhaps the existence of non-human civilizations does not suit the purposes of the simulation. (This could be considered a special case of “God created the world…”)
  4. All technological civilizations may lose interest in exploration quickly and comprehensively, in spite of whatever pre-singular predilections they might have had. That’s the theme I’ll explore below. It’s a kind of variant of the “self-destruction” solution, but I think it’s more likely to be universal and inevitable.

What would cause all technological civilizations everywhere to lose interest in colonizing the universe — despite whatever biologic programming they started with? The process would have to be inescapable, perhaps an inevitable consequence of any system in which natural selection operates [1]. Vinge and others suggest that this something is the “Singularity” (see below), a consequence of hyperexponential growth. In the set of “universal failure” solutions to Fermi’s Paradox this is sometimes called the Transcendental solution or the Deification solution. The following Proposition outlines a candidate process for inevitable and universal disinterest.

Although this web page focuses on the Transcendental solution, it’s likely that the “Great Silence” is multi-factorial. It is a dangerous universe, and many civilizations may have been fried by gamma-ray bursters and supernovae. Perhaps intelligence is relatively rare, perhaps we are indeed “early arrivals”, perhaps some societies self-destruct, and perhaps (as proposed below) many “transcend” and lose interest in mere physicality. In a universe as large as ours, anything that can happen will happen.

[BTW: I first put this page up in June 2000 and I thought I was being fairly clever then. Alas, I later discovered that most (ok, all) of the ideas presented here were earlier described by Vernor Vinge in “Marooned in Realtime” — first published in 1986 and reissued in 2001.[3] John Smart tells me he started writing about this in 1972. Many science fiction writers explored these ideas in the 1990s, including Sawyer, Baxter, Brin, etc. So, there’s nothing truly new here, but I’ve kept the page as a possibly useful introduction. My ongoing comments on the topic are published in Gordon’s Notes and periodically collected here.]

Proposition: The transcendental solution to the Fermi Paradox

(Originally submitted as a letter to Scientific American, June 17, 2000, in response to “Where Are They?”, July 2000.) The simplistic response to Fermi’s paradox is that industrial civilizations inevitably self-destruct. Ian Crawford points out that it is improbable that all civilizations would do this, and that even one persistent industrial civilization should provide us with evidence of its existence. There is, however, an answer to this paradox other than inevitable industrial self-destruction. It is a more plausible solution, but not necessarily a more pleasant one.

It may be that once the drive to intelligence begins, it develops an irresistible dynamic [1]. Consider the time intervals required to produce multicellular organisms (quite long), then basic processing (insects), social processing (reptiles), social communication (mammals), spoken language (human primates), writing/reading [2], and then computing. At each step in this processing curve the time intervals to the next inflection shrink.
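
One way to see why shrinking intervals imply a finite horizon: if each milestone arrives in a fixed fraction of the time the previous one took, the total remaining time is a convergent geometric series. A minimal sketch, with made-up numbers:

```python
# A minimal sketch of the "shrinking intervals" argument.
# The starting interval and the shrink ratio are illustrative guesses.

def time_to_limit(first_interval: float, r: float) -> float:
    """Total time for intervals first_interval, first_interval*r, ... (0 < r < 1)."""
    assert 0 < r < 1, "intervals must keep shrinking for the sum to converge"
    return first_interval / (1 - r)

# Hypothetical numbers: the next inflection is 200 years out, and each
# subsequent interval is a third of the last. Infinitely many milestones
# then fit inside a finite window -- the curve's "singularity".
print(time_to_limit(200, 1/3))  # 300.0 years, no matter how many steps remain
```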

There is no reason to assume that the curve stops with us. There may be only a few hundred years between industrial civilization and silicon/nanonic processing. Beyond that, speculation is impossible; non-organic minds would operate on qualitatively different timescales from ours. Vinge, Joy, Kurzweil and others describe this as the Singularity (see Links).

It may be that the kinds of civilizations we might communicate with typically exist only for a few hundred years. During their short existence they produce only the kinds of radio output that we produce. In other words, they are very short-lived and the radio output is in the “dark” area of the SETI exploration space. With intense study we may detect one or two of our fellow organic civilizations in the short time that we have left — perhaps within the next 50 to 100 years.
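
The standard Drake equation makes this “short window” point quantitative: the expected number of detectable civilizations, N, scales linearly with L, the number of years a civilization stays radio-loud. The equation itself is standard; every parameter value below is an illustrative guess, nothing more.

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L.
# Each argument below is an illustrative guess, not a measurement.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of currently detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# If technological civilizations are radio-loud for only ~300 years
# (as the essay proposes), even moderately optimistic factors leave
# fewer than one detectable civilization at any given moment:
N = drake(R_star=1, f_p=0.5, n_e=1, f_l=0.3, f_i=0.1, f_c=0.1, L=300)
print(N)  # ~0.45
```

A short L shrinks N toward zero regardless of how generous the other factors are, which is why a brief radio window leaves the SETI exploration space “dark”.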

We may even be able to see their radio emissions go silent, replaced by the uninterpretable communications of silicon minds – shortly before our emissions do the same thing.

[Figure: GDP/person of Western Europe, from the Economist Millennium Issue. Copyright Economist.com; a 5 MB PDF version is available via the original article.]

Can we avoid the Singularity?

What happens to those post-Singular civilizations? Assuming a Singularity occurs, they would likely not be vulnerable to natural threats. Why, then, would they lack the ability or motivation to settle the galaxy? A bleak explanation is that no biological culture, programmed by evolution to expand and explore, survives the Singularity. Many commentators have suggested that humans, or human culture, might survive in some form if humans augment their biological capabilities by direct brain-computer interface [7].

I’m skeptical; I think it would be a bit like strapping a 747 engine to a Model T. After ignition there wouldn’t be much left of the contraption. The time-signatures of a biological and an abiological system are too different; one is limited by chemistry and evolution, the other by the speed of light.

If we’re lucky it will turn out that there really is some fundamental obstacle to building sentient machines, and it will take centuries or millennia to build one. In that case there’s presumably some other explanation for the Fermi paradox.

Luck would be good, but is there anything else we can do? Bill Joy advocates banning work on sentient machines, but even before 9/11 basic market forces would have made a ban untenable. Now we have security concerns, and projects like TIA will move us along just a little bit faster.

I think of humanity as a swimmer swept out to sea. The tide is too strong to oppose, but the swimmer suspects there’s a spit of land they can grab on the way to the open ocean. They swim with the current, at an angle to its full force, hoping to hit the last land before Tahiti.

We can’t really oppose the forces that are pushing us this way. Even if we could stop work in this area, we are in such peril from social, ecological, political and economic crises that we probably can’t maintain our civilization without substantial scientific and technologic breakthroughs. We’re addicted to progress, and it’s too late for us to stop.

Maybe on the way out to sea we can hit some land. We might assume we’ll create our replacements, but seek to create them in a certain way — merciful, mildly sentimental, and with a gentle sense of humor. [5]

For Faughnan’s awesome additional links, head to the original: http://www.faughnan.com/setifail.html

For everyone else feeling the lack of colour and graphics, here’s John and Yoko during their Bed-In… I realize it’s black and white, but it can’t be helped: