Archive for the ‘existential risks’ category: Page 141

Nov 9, 2010

The Singularity Hypothesis: A Scientific and Philosophical Assessment

Posted by in categories: cybercrime/malcode, ethics, existential risks, futurism, robotics/AI

Call for Essays:

The Singularity Hypothesis
A Scientific and Philosophical Assessment

Edited volume, to appear in The Frontiers Collection, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions ‘straight from Cloud Cuckooland’? Should the notions of superintelligent machines, brain emulations, and transhumans be ridiculed, or is it the skeptics who suffer from short-sightedness and ‘carbon chauvinism’? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

To promote this debate, this edited, peer-reviewed volume will be concerned with a scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that analyze this hypothesis, assess its empirical content, examine the relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.

Continue reading “The Singularity Hypothesis: A Scientific and Philosophical Assessment” »

Sep 2, 2010

Self Transcendence

Posted by in categories: ethics, existential risks, futurism

Will our lumbering, industrial-age-driven information age segue smoothly into a futuristic marvel of yet-to-be-developed technology? It might. Or will it take quantum leaps? It could. Will information technology take off exponentially? It’s accelerating in that direction. The way knowledge is unraveling its potential for enhancing human ingenuity, the future looks bright indeed. But there is a problem: that egoistic tendency we have of defending ourselves against knowing, of creating false images to delude ourselves and the world, and of resolving conflict violently. It’s as old as history and may be an inevitable part of life. If so, there will be consequences.

Who has ever seen drama or comedy without obstacles to overcome, conflicts to confront, dilemmas to address, confrontations to endure, and the occasional least-expected outcome? Shakespeare illustrated this elegantly. Good drama depicts aspects of life as lived, and we do live with egoistic mental processes that are both limited and limiting. Wherefore it might come to pass that we who are of this civilization encounter an existential crisis. Or crunch into a bottleneck, out of which … will emerge what? Or extinguish civilization with our egoistic conduct, acting from regressed postures with splintered perception.

What’s least likely is that we’ll continue cruising along as usual.

Not with massive demographic changes, millions on the move, radical climate change, major environmental shifts, cyber vulnerabilities, changing energy resources, inadequate clean water, and values colliding in a world where future generations of the techno-savvy will be capable of wielding the next generation of weapons of mass destruction.

Continue reading “Self Transcendence” »

Jul 22, 2010

My book in Lulu

Posted by in category: existential risks

My book “STRUCTURE OF THE GLOBAL CATASTROPHE: Risks of human extinction in the XXI century” is now available through Lulu: http://www.lulu.com/product/paperback/structure-of-the-globa…y/11727068 It is also available for free on Scribd: http://www.scribd.com/doc/6250354/STRUCTURE-OF-THE-GLOBAL-CA…I-century– This book is intended to be a complete, up-to-date sourcebook of information about existential risks.

Jun 26, 2010

Existential Risk Reduction Career Network

Posted by in categories: existential risks, finance, lifeboat

The Existential Risk Reduction Career Network is a career network for those interested in getting a relatively well-paid job and donating substantial amounts (relative to income) to non-profit organizations focused on the reduction of existential risks, such as SIAI, FHI, and the Lifeboat Foundation.

The aim is to foster a community of donors, and to allow donors and potential donors to give each other advice, particularly regarding the pros and cons of various careers, and for networking with like-minded others within industries. For example, someone already working in a large corporation could give a prospective donor advice about how to apply for a job.

Over time, it is hoped that the network will grow to a relatively large size, and that donations to existential risk-reduction from the network will make up a substantial fraction of funding for the beneficiary organizations.

In isolation, individuals may feel like existential risk is too large a problem to make a dent in, but collectively, we can make a huge difference. If you are interested in helping us make a difference, then please check out the network and request an invitation.

Please feel free to contact the organizers at [email protected] with any comments or questions.

Jun 25, 2010

Lifeboat Foundation in Games

Posted by in categories: existential risks, fun

The RPG Eclipse Phase includes the “Singularity Foundation” and “Lifeboat Institute” as player factions. Learn more about this game!

P.S. In case you don’t know, there is a Singularity Institute for Artificial Intelligence.


Eclipse Phase is a roleplaying game of post-apocalyptic transhuman conspiracy and horror.

An “eclipse phase” is the period between a cell’s infection by a virus and the appearance of new virus particles within the cell. During this period, the cell does not appear to be infected, but it is.

Players take part in a cross-faction secret network dubbed Firewall that is dedicated to counteracting “existential risks” — threats to the existence of transhumanity, whether they be biowar plagues, self-replicating nanoswarms, nuclear proliferation, terrorists with WMDs, net-breaking computer attacks, rogue AIs, alien encounters, or anything else that could drive an already decimated transhumanity to extinction.

Jun 9, 2010

Have Corporations Become a Global Existential Threat?

Posted by in categories: business, ethics, existential risks

Perhaps you think I’m crazy or naive to pose this question. But over the past few months I have increasingly begun to wonder whether this idea may not be too far off the mark.

Not because of some half-baked theory about a global conspiracy or anything of the sort, but simply because of the behavior of many multinational corporations in recent months and the effects this behavior is having upon people everywhere.

Again, you may disagree, but my perspective on these financial giants is that they are essentially predatory in nature, and that their prey is any dollar in commerce they can possibly absorb. The problem is that, for anyone in the modern or even quasi-modern world, money is nearly as essential as plasma when it comes to our well-being.

It has been clearly demonstrated again and again, all over the world, that when a population becomes sufficiently destitute that the survival of the individual is actually threatened, violence inevitably occurs. On a large enough scale, this sort of violence can erupt into civil war, and wars, as we all know too well, can spread like a virus across borders, even oceans.

Continue reading “Have Corporations Become a Global Existential Threat?” »

Jun 5, 2010

Friendly AI: What is it, and how can we foster it?

Posted by in categories: complex systems, ethics, existential risks, futurism, information science, policy, robotics/AI

Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

Continue reading “Friendly AI: What is it, and how can we foster it?” »

May 2, 2010

Nuclear Winter and Fire and Reducing Fire Risks to Cities

Posted by in categories: defense, existential risks, lifeboat, military, nuclear weapons

This is a crosspost from Nextbigfuture

I looked at nuclear winter and city firestorms a few months ago; I will summarize the case I made then in the next section. There are significant additions based on my further research and on email exchanges with Prof. Alan Robock and Prof. Brian Toon, who wrote the nuclear winter research.

The steps needed to prove nuclear winter:
1. Prove that enough cities will have firestorms or big enough fires (the claim here is that this does not happen).
2. Prove that when enough cities over a sufficiently large area have big fires, enough smoke and soot gets into the stratosphere (this claim is in trouble because of the Kuwait oil fires; see the sketch below).
3. Prove that the condition persists and affects climate as the models predict (others have questioned this, but that issue is not addressed here).
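
Since step 2 is ultimately quantitative bookkeeping, here is a minimal back-of-envelope sketch in Python showing how a stratospheric soot total decomposes into per-city contributions. Every parameter value below is an illustrative assumption of mine, not a figure from Robock and Toon’s papers:

# Back-of-envelope soot bookkeeping. Every parameter value is an
# illustrative assumption, not a figure from the nuclear winter papers.

TARGETED_CITIES = 1000   # assumed number of cities that burn
BURNED_AREA_KM2 = 100.0  # assumed burned area per city, in km^2
FUEL_LOAD_KG_M2 = 10.0   # assumed combustible fuel loading, in kg/m^2
SOOT_FRACTION = 0.015    # assumed fraction of burned fuel emitted as soot
LOFTED_FRACTION = 0.8    # assumed fraction of soot reaching the stratosphere

def stratospheric_soot_tg():
    """Total soot reaching the stratosphere, in teragrams (1 Tg = 1e9 kg)."""
    burned_area_m2 = TARGETED_CITIES * BURNED_AREA_KM2 * 1e6  # km^2 to m^2
    fuel_burned_kg = burned_area_m2 * FUEL_LOAD_KG_M2
    soot_kg = fuel_burned_kg * SOOT_FRACTION * LOFTED_FRACTION
    return soot_kg / 1e9

print("Estimated stratospheric soot: %.0f Tg" % stratospheric_soot_tg())

With these made-up inputs the total comes to about 12 Tg, far short of the 150-teragram case discussed below; whether realistic inputs can close that gap is exactly what the three steps above dispute.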

The nuclear winter case is predicated on getting 150 million tons (the 150-teragram case) of soot and smoke into the stratosphere and having it stay there. The assumption seemed to be that cities would be targeted and would burn in massive firestorms. Alan Robock indicated that they only included fire based on the radius of ignition from the atmospheric blasts. However, in the Scientific American article and in their 2007 paper, the stated assumptions are:

Continue reading “Nuclear Winter and Fire and Reducing Fire Risks to Cities” »

Apr 18, 2010

Ray Kurzweil to keynote “H+ Summit @ Harvard — The Rise Of The Citizen Scientist”

Posted by in categories: biological, biotech/medical, business, complex systems, education, events, existential risks, futurism, geopolitics, human trajectories, information science, media & arts, neuroscience, robotics/AI

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

Following the inaugural conference in Los Angeles in December 2009, the summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference is being held on the East Coast, at Harvard University’s prestigious Science Hall, on June 12–13. Futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, Ray Kurzweil will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of the SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions with regard to the research and development of cures, and will propose solutions to alleviate the negative effects of both.

Continue reading “Ray Kurzweil to keynote "H+ Summit @ Harvard — The Rise Of The Citizen Scientist"” »

Apr 3, 2010

Natural selection of universes and risks for the parent civilization

Posted by in category: existential risks

Lee Smolin is said to believe (according to a personal communication from Danila Medvedev, who was told about it by John Smart; I tried to reach Smolin for comment, but failed) that global catastrophe is impossible, based on the following reasoning: the multiverse is dominated by those universes that are able to replicate. This self-replication occurs in black holes, and especially in those black holes that are created by civilizations. Thus, the parameters of the universe are selected so that civilizations cannot self-destruct before they create black holes. As a result, all physical processes in which a civilization might self-destruct are closed off or highly unlikely. An early version of Smolin’s argument is here: http://en.wikipedia.org/wiki/Lee_Smolin but this early version was refuted in 2004, so he (probably) added the existence of civilizations as another condition for cosmic natural selection. In any case, even if this is not Smolin’s actual line of thought, it is a quite possible line of thought.

I think this argument is not persuasive, since selection can operate both in the direction of universes with more viable civilizations and in the direction of universes with a larger number of civilizations, just as biological evolution works toward more robust offspring in some species (mammals) and toward a larger number of offspring with lower viability in others (plants, for example, the dandelion). Since some parameters for the development of civilizations are extremely difficult to adjust through the basic laws of nature (for example, the chances of nuclear war or of a hostile AI), while it is easy to adjust the number of emerging civilizations, it seems to me that universes, if they replicate with the help of civilizations, will use the strategy of dandelions rather than the strategy of mammals. So the process will create many unstable civilizations, and we are most likely one of them (the self-indication assumption also supports this conclusion; see a recent post by Katja Grace: http://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/).
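
To make the dandelion-versus-mammal intuition concrete, here is a toy Python simulation (purely illustrative; the two strategies and all the numbers are my assumptions, not anything Smolin proposed). Each universe spawns either a few robust civilizations or many fragile ones, and a universe "replicates" once for every civilization that survives long enough to build black holes:

import random

# Toy model of cosmic natural selection via civilizations.
# All numbers are illustrative assumptions, not physics.

MAMMAL = {"civs": 4, "survival_prob": 0.50}       # few, robust civilizations
DANDELION = {"civs": 100, "survival_prob": 0.02}  # many, fragile civilizations

def replications_per_universe(strategy, trials=20000):
    """Average number of surviving (black-hole-building) civilizations."""
    total = 0
    for _ in range(trials):
        total += sum(random.random() < strategy["survival_prob"]
                     for _ in range(strategy["civs"]))
    return total / trials

for name, strategy in (("mammal", MAMMAL), ("dandelion", DANDELION)):
    print("%-10s %.2f replications per universe, %d civilizations each"
          % (name, replications_per_universe(strategy), strategy["civs"]))

With these made-up numbers, both strategies replicate equally well (about 2.0 surviving civilizations per universe), yet a randomly sampled civilization is 25 times more likely to find itself among the 100 fragile ones, where its survival chance is only 2%. That is the point above: selection on universes need not imply safety for any individual civilization.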

But still, some pressure toward the preservation of civilization could exist. Namely, if an atomic bomb were as easy to create as dynamite (that is, much easier than it is on Earth, where the difficulty depends on the quantity of uranium and on its chemical and nuclear properties, i.e., is determined by the basic laws of the universe), then the average civilization’s chances of survival would be lower. If Smolin’s hypothesis is correct, then we should encounter insurmountable difficulties in creating nano-robots, in the microelectronics needed for strong AI, in harmful accelerator experiments with strangelets (except those that lead to the creation of black holes and new universes), and in several other potentially dangerous technology trends whose success depends on basic properties of the universe, which may manifest themselves in the peculiarities of its chemistry.

In addition, Smolin’s evolution of universes implies that a civilization should create a black hole as early as possible in the course of its history, leading to the replication of universes, because the later this happens, the greater the chance that the civilization will self-destruct before it can create black holes. Moreover, the civilization is not required to survive after the moment of “replication” (though survival may be useful for replication, if the civilization creates many black holes over a long existence). From these two points it follows that we may be underestimating the risks of black hole creation at the Large Hadron Collider.

Continue reading “Natural selection of universes and risks for the parent civilization” »