
Jul 13, 2016

The Threats That Artificial Intelligence Researchers Actually Worry About

Posted by in category: robotics/AI

Slate summary of the debate on AI Safety:

Fears about A.I. take a very different form depending on whom you ask.

Some researchers think that the benefits of developing an A.G.I. far outweigh the risks, and that the question of control is a moot point: intelligent systems will be developed by humans and controlled by humans, so there is nothing to worry about.

A.I. safety researchers strongly disagree: There is no reason to believe that we will be able to control generally intelligent, let alone superintelligent, systems. In fact, we have no idea what a world with (super)intelligent machines would be like. But we do have at least some idea of potentially harmful scenarios—like, say, an A.I. gaining unilateral control of all resources—so the earlier we start thinking about them, the better.

