Aug 21, 2007

Risks Not Worth Worrying About

Posted by Tom McCabe in categories: defense, futurism, lifeboat

There are dozens of published existential risks; there are undoubtedly many more that Nick Bostrom did not think of in his paper on the subject. Ideally, the Lifeboat Foundation and other organizations would identify each of these risks and take action to combat them all, but this simply isn’t realistic. We have a finite budget and a finite number of man-hours to spend on the problem, and our resources aren’t even particularly large compared with those of other non-profit organizations. If Lifeboat or other organizations are going to take serious action against existential risk, we need to identify the areas where we can do the most good, even at the expense of ignoring other risks. Humans like to eliminate risks completely, but this preference is a cognitive bias; it does not correspond to the most effective strategy. In general, when assessing existential risks, there are a number of useful heuristics:

- Any risk which has become widely known, or an issue in contemporary politics, will probably be very hard to deal with. Thus, even if it is a legitimate risk, it may be worth putting on the back burner; there’s no point in spending millions of dollars for little gain.

- Any risk which is totally natural (could happen without human intervention) must be highly improbable, as we know we have been on this planet for a hundred thousand years without getting killed off. To estimate the probability of these risks, use Laplace’s Law of Succession (a rough worked example follows this list).

- Risks whose probability we cannot affect can be safely ignored. It does us little good to know that there is a 1% chance of doom next Thursday if we can’t do anything about it.
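
As a rough illustration of the second heuristic, Laplace’s Law of Succession says that after n trials in which an event never occurred, a reasonable estimate of its chance on the next trial is 1/(n+2). Here is a minimal sketch, treating a century as one trial and taking roughly 1,000 centuries of human survival as the track record (illustrative figures, not a careful calculation):

    def laplace_rule(successes, trials):
        """Laplace's rule of succession: estimated probability that the
        next trial is a 'success', given `successes` observed in `trials`."""
        return (successes + 1) / (trials + 2)

    # Roughly 100,000 years of human history, counted in centuries, with zero
    # natural extinction events observed over that span.
    centuries_survived = 100_000 // 100
    risk_per_century = laplace_rule(successes=0, trials=centuries_survived)
    print(f"Estimated natural-extinction risk: {risk_per_century:.4f} per century")
    # prints roughly 0.0010, i.e. on the order of 0.1% per century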

Some specific risks which can be safely ignored:

- Particle accelerator accidents. We don’t yet know enough high-energy physics to say conclusively that a particle accelerator could never create a true vacuum, stable strangelet, or another universe-destroying particle. Luckily, we don’t have to; cosmic rays have been bombarding us for the past four billion years, with energies a million times higher than anything we can create in an accelerator (a back-of-the-envelope comparison follows this list). If it were possible to annihilate the planet with a high-energy particle collision, it would have happened already.

- The simulation gets shut down. The idea that “the universe is a simulation” is equally good at explaining every outcome: no matter what happens in the universe, you can concoct some reason why the simulators would engineer it. Which specific actions would make the universe safer from being shut down? We have no clue, and barring a revelation from On High, we have no way to find out. If we do try to take action to stop the universe from being shut down, it could just as easily make the risk worse.

- A long list of natural scenarios. To quote Nick Bostrom: “solar flares, supernovae, black hole explosions or mergers, gamma-ray bursts, galactic center outbursts, supervolcanos, loss of biodiversity, buildup of air pollution, gradual loss of human fertility, and various religious doomsday scenarios.” We can’t prevent most of these anyway, even if they were serious risks.
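
To put rough numbers on the cosmic-ray comparison in the first item above (the figures here are my own back-of-the-envelope values, not from the original argument): the most energetic cosmic ray ever recorded carried on the order of 3×10^20 eV, versus roughly 10^12 eV per beam for the Tevatron, the most powerful accelerator running in 2007.

    import math

    # Illustrative figures (assumed, not from the post).
    cosmic_ray_eV  = 3e20    # highest-energy cosmic ray on record (the 1991 "Oh-My-God" particle)
    accelerator_eV = 1e12    # Tevatron beam energy, about 1 TeV

    print(f"Lab-frame energy ratio: {cosmic_ray_eV / accelerator_eV:.0e}")  # ~3e+08

    # A fairer comparison uses centre-of-mass energy, since a cosmic ray
    # strikes a nucleon that is essentially at rest.
    proton_rest_eV = 0.938e9
    cm_cosmic   = math.sqrt(2 * cosmic_ray_eV * proton_rest_eV)  # ~7.5e14 eV
    cm_collider = 2 * accelerator_eV                             # two 1 TeV beams colliding head-on
    print(f"Centre-of-mass energy ratio: {cm_cosmic / cm_collider:.0f}")    # a few hundred

Either way, nature has been running collisions far more energetic than anything we can build, and has been doing so for billions of years.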

Some specific risks which should be given lower priority:

- Asteroid impact. This is a serious risk, but it still has a fairly low probability, on the order of one in 10^5 to one in 10^7 for something that would threaten the human species within the next century or so. Mitigation is also likely to be quite expensive compared to other risks.

- Global climate change. While this is fairly probable, the impact of it isn’t likely to be severe enough to qualify as an existential risk. The IPCC Fourth Assessment Report has concluded that it is “very likely” that there will be more heat waves and heavy rainfall events, while it is “likely” that there will be more droughts, hurricanes, and extreme high tides; these do not qualify as existential risks, or even anything particularly serious. We know from past temperature data that the Earth can warm by 6–9 C on a fairly short timescale, without causing a permanent collapse or even a mass extinction. Additionally, climate change has become a political problem, making it next to impossible to implement serious measures without a massive effort.

- Nuclear war is a special case, because although we can’t do much to prevent it, we can take action to prepare for it in case it does happen. We don’t even have to think about the best ways to prepare; there are already published, reviewed books detailing what can be done to seek safety in the event of a nuclear catastrophe. I firmly believe that every transhumanist organization should have a contingency plan in the event of nuclear war, economic depression, a conventional WWIII or another political disaster. This planet is too important to let it get blown up because the people saving it were “collateral damage”.

- Terrorism. It may be the bogeyman-of-the-decade, but terrorists are not going to deliberately destroy the Earth; terrorism is a political tool with political goals that require someone to be alive. While terrorists might do something stupid which results in an existential risk, “terrorism” isn’t a special case that we need to separately plan for; a virus, nanoreplicator or UFAI is just as deadly regardless of where it comes from.

7 Comments — comments are now closed.


  1. Mike Johnson says:

    Tom,

    Great idea for a post. The point about using the foundation’s resources effectively is well-taken. It’d be interesting to hear about which risks you rate as the most important for the foundation to tackle.

    A long-winded nitpick: I feel you dismiss global climate change and terrorism as existential risks too quickly.

    On global climate change, you say “We know from past temperature data that the Earth can warm by 6–9 C on a fairly short timescale, without causing a permanent collapse or even a mass extinction.” True, but it appears past temperature increases may have been caused by different processes. Perhaps when warming is primarily caused by X, it involves a self-regulating cycle, but cause Y will result in a downward spiral. It’s difficult to make the analogy of “it’s happened before, so it’s nothing to worry about” based on temperature alone. And even relatively minor climate changes could destabilize parts of the world, which could be very significant.

    On the terrorism angle, you state that “terrorists are not going to deliberately destroy the Earth; terrorism is a political tool with political goals that require someone to be alive.” I only partially agree. I think perhaps that terrorism as an institution can be productively viewed as a rational actor (as you note), but I don’t think it’s correct to view terrorists themselves as rational actors as a rule. They’re more like fire-and-forget missiles, filled with an ideology and then launched by different institutions/factions of terrorism. But there’s one key difference from fire-and-forget missiles: their payload can increase over time as technology moves forward.

    Today’s terrorism is calibrated on a certain level of technological capacity for destruction that an individual or small group may acquire: I fear that capacity may increase faster than the institution of terrorism can evolve to compensate for it (e.g., tone down their ideology of “cause as much destruction to the unbelievers as you can”). And once you release a terrorist into the wild, it’s hard to recall them. Some of the missiles being fired today, with an ideology calibrated for suicide bombings, may have the means to blow us all up in 20 years.

    Again, as you say, these things are very political and hard to change. But I wouldn’t rule them out as existential risks.

  2. Rolf Nelson says:

    When considering what problem to work on, one question is “how many other people are working on this problem?” If the answer is “a lot”, you may stay away because of the Law of Diminishing Returns. (This is only partly mitigated by the fact that if a lot of other people agree that P is an important problem and are working on it by doing S, that somewhat increases the chances that your assessment that “P is an important problem and S is a good solution” is correct.)

    In the course of figuring out where to spend resources, people and organizations like the Lifeboat Foundation are presumably tracking who else is working on what problems and how many resources are being spent by other organizations. Ideally, the Lifeboat Foundation should publish their order-of-magnitude estimates so that other people deciding what projects to work on can use that data as well.

    Self-interested people have the same problem, of course. If there are already five companies selling apple-flavored toothpaste, you might not want your startup to sell the same thing. If no one is selling apple-flavored toothpaste, you might consider selling it, with the caveat you’d first want to make sure there isn’t a *really good reason* why no company has attempted to sell apple-flavored toothpaste. The difference between self-interested people and an (ideal) nonprofit is that self-interested people have less incentive to share their research with each other.

  3. Brian Wang says:

    It would take little effort for the Lifeboat Foundation to have a considered position on several of those risks and to make those positions public. If there are ways for the general robustness of the species and civilization to be improved, then those would reduce the overall risks to existence. Another factor is that we may be incorrect in our risk assessments, just as NASA was incorrect about the risk of losing the Space Shuttle (until it actually lost one, and then two). Risks that we believe are low may turn out to be far higher than we believed.

    Take global warming. Climate change bills should be adopted that would reduce pollution and global warming gases. I like the McCain/Lieberman Climate Stewardship bill. It could reduce coal usage from a projected 58% of electricity to as little as 11% if no international substitution is allowed.

    >Build up of air pollution
    This is something that can be addressed as well. Again, the climate change bills before Congress would help. Also, encouraging other nations to adopt policies to reduce pollution would be a good thing. It would reduce the costs to the economy from health and business losses, which would mean more funds potentially available for other problems. Lifeboat just needs to support plans that reduce fossil fuel use, and measures that clean up fossil fuels where they are still used.

    So two items (global warming and air pollution) can be significantly impacted just by helping to enable the adoption of policies that are being considered now, which would increase the cost of carbon energy sources and encourage nuclear and renewable power.

    Asteroid impact risks are also not well characterized. Carrying out a more complete survey of specific asteroid threats could identify an actual large threat. This could be a very affordable project. We do not need to trust indirect statistical evidence when we can determine the actual risks. Another benefit would be useful information for science and for later space colonization and development projects.

  4. Tom McCabe says:

    “It would take little effort for the Lifeboat Foundation to have a considered position on several of those risks and to make those positions public”

    It would have very little impact, other than to divert needed resources. People in government don’t listen to us; we’re not a lobbying group.

    “Climate change bills should be adopted that would reduce pollution and global warming gases.”

    How the heck are any of the transhumanist organizations going to get these bills passed? Al Gore can’t get these bills passed, and he’s an Oscar-winning filmmaker with prestige in government and millions of dollars in the bank.

    “It could reduce coal usage from a projected 58% of electricity to as little as 11% if no international substitution is allowed.”

    And, er, what’s going to substitute for the coal? People will not vote for a bill that leaves them shivering in the dark.

    “can be significantly impacted just by helping to enable the adoption of policies”

    People in government *do not listen to us*. We cannot just wave a magic wand and get these policies adopted. We can’t even work hard and spend thousands of dollars and get these policies adopted; other groups have worked much harder and spent much more money and achieved nothing.

    “Carrying out a more complete survey of specific asteroid threats could identify an actual large threat.”

    We know what the threat level is from historical impacts. It could turn out that there’s a 10 km asteroid on a collision course in ten years, but that has a probability of around one in fifty million.

  5. Allan Crossman says:

    “Any risk which is totally natural (could happen without human intervention) must be highly improbable”

    You’re going to run into difficult Anthropic Principle arguments (and related stuff) here. Especially given the tendency of populations to increase, it could be that the majority (or a substantial minority) of conscious observers in the universe live in lucky civilisations that just happen not to have been wiped out.

    (Think about 100 civilisations, each doubling in size each generation, and each with a 10% chance of being wiped out each generation. The total number of people continues to increase until the last civilisation is wiped out. Thus, most people who ever live think the danger is less than it really is.)
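
    A quick simulation of that toy model (using the illustrative numbers above; each observer’s “naive estimate” is the rule-of-succession guess they would make from their own civilisation’s clean track record so far):

        import random

        random.seed(0)
        TRUE_RISK = 0.10            # per-generation chance that a civilisation is wiped out
        population = [1.0] * 100    # 100 civilisations, each starting with 1 unit of people

        people_total   = 0.0        # everyone who ever lives, summed over all generations
        naive_weighted = 0.0        # sum of each observer's naive estimate of the risk

        for g in range(1, 61):
            # each civilisation faces a 10% extinction chance; the survivors double in size
            population = [p * 2 for p in population if random.random() > TRUE_RISK]
            if not population:
                break
            alive = sum(population)
            people_total   += alive
            naive_weighted += alive / (g + 2)   # Laplace-style guess after g safe generations

        print(f"Average observer's guess at the risk: {naive_weighted / people_total:.3f}")
        print(f"Actual per-generation risk:           {TRUE_RISK:.3f}")

    Because the late, large generations dominate the head count, the average observer’s guess comes out well below the true 10%, which is exactly the bias described above.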

    Difficult stuff, I know.