Archive for the ‘futurism’ category: Page 1173

Sep 20, 2014

FUTURISM UPDATE (September 21, 2014)

Posted in category: futurism

GE INTELLIGENT PLATFORMS: High-performance Automation for a Connected World http://www.ge-ip.com/future-of-industrial-automation

CNN: View from Russia: Could Putin be in Obama’s ‘broad coalition’ against ISIS? http://edition.cnn.com/2014/09/18/opinion/isis-obama-putin-n…?hpt=hp_c1

Continue reading “FUTURISM UPDATE (September 21, 2014)” »

Sep 19, 2014

FUTURISM UPDATE (September 20, 2014)

Posted in category: futurism

DARPA: President Obama Highlights New DARPA Program Aimed at Developing Novel Therapies Customized to Individual Patients http://www.darpa.mil/NewsEvents/Releases/2014/08/26.aspx

Rising Tides – Mitigation versus Consternation http://www.21stcentech.com/rising-tides-mitigation-consternation/

Continue reading “FUTURISM UPDATE (September 20, 2014)” »

Sep 18, 2014

FUTURISM UPDATE (September 19, 2014)

Posted in category: futurism

WALL STREET JOURNAL: It’s Time to Take Artificial Intelligence Seriously. No Longer an Academic Curiosity, It Now Has Measurable Impact on Our Lives http://online.wsj.com/articles/its-time-to-take-artificial-i…1408922644

Blue Origin to Replace Russia’s RD-180 for U.S. Rocket Flights http://www.21stcentech.com/blue-origin-replace-russias-rd-18…t-flights/

CBSNEWS: Popular Science picks best inventions for 2014 http://www.cbsnews.com/videos/popular-science-picks-best-inventions-for-2014/

Continue reading “FUTURISM UPDATE (September 19, 2014)” »

Sep 18, 2014

Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism

Posted in categories: alien life, biological, cyborgs, defense, disruptive technology, ethics, existential risks, futurism, homo sapiens, human trajectories, internet, military, philosophy, policy, posthumanism, science, singularity, transhumanism

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.
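
The force of this argument is essentially an expected-value calculation: a very small probability multiplied by a near-unbounded loss can still outweigh more familiar, bounded risks. A minimal sketch in Python, using entirely hypothetical figures chosen only to illustrate the arithmetic:

# Toy expected-loss comparison (all numbers hypothetical, for illustration only).
# The Bostrom-style point: a low-probability, civilisation-ending risk can still
# dominate a higher-probability but bounded risk once magnitude is factored in.

risks = {
    # name: (annual probability, loss in arbitrary "value units")
    "regional catastrophe": (0.05, 1e6),
    "existential risk": (0.001, 1e12),  # annihilates the human condition
}

for name, (p, loss) in risks.items():
    print(f"{name}: expected loss = {p * loss:,.0f}")

# On these made-up figures the existential risk's expected loss is roughly
# 20,000 times larger even though it is 50 times less probable, which is why,
# on this reasoning, it merits significant resources despite its improbability.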

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in a positive feedback loop, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators they now regard as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn — the model for Stanley Kubrick’s Dr Strangelove – routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Continue reading “Why Superintelligence May Not Help Us Think about Existential Risks -- or Transhumanism” »

Sep 13, 2014

Century 21st Adaptability Into Outright Success, Regardless of Hugely Indebted Nations and a Global Economy Disrupted to Its Knees! (Image)

Posted in category: futurism

Sep 13, 2014

The Three Simultaneous Foresight Imperatives Into Victory! [Graphic]

Posted in category: futurism

Sep 9, 2014

Jobs Versus Bots!

Posted in category: futurism

Sep 9, 2014

Second Guessing the State of the Morrow, Incorrectly!

Posted in category: futurism

Sep 9, 2014

Fun Info-graphic of 2014 Society! 2014 Western Caste System! (Some Serious Researched Books Argued)

Posted in category: futurism

Sep 8, 2014

Getting Fortune 500 Prospective Client’s Cash, Continually and Successfully!

Posted in category: futurism

HOW TO SUCCEED IN BUSINESS ACCORDING TO THESE COMPANIES:

Mitsubishi Motors, Honda, Daimler-Chrysler’s Mercedes-Benz, Toyota, Royal Dutch Shell, Google, Xerox, Exxon-Mobil, Boeing, Amazon, Procter & Gamble, NASA and DARPA, Lockheed Martin, RAND Corporation and HUDSON Institute, Northrop Grumman Corporation, etc.

Continue reading “Getting Fortune 500 Prospective Client's Cash, Continually and Successfully!” »