Comments on: Is it possible to build an artificial superintelligence without fully replicating the human brain?
https://spanish.lifeboat.com/blog/2014/06/is-it-possible-to-build-an-artificial-superintelligence-without-fully-replicating-the-human-brain

By: Stan Franklin (Mon, 02 Jun 2014 14:03:55 +0000)
I agree that an AGI needn’t be brain-like, but it is very likely to be mind-like, in the sense of being built on an architecture much like that of our LIDA model of cognition. For the very beginnings of an argument in that direction, please see my keynote address at the AGIRI Workshop 2006.

By: Andras Lorincz (Mon, 02 Jun 2014 07:30:37 +0000)
If we start from the No Free Lunch theorem of Wolpert and Macready, which says that, averaged over all optimization problems, no algorithm can do better than random search, we end up with the conclusion that either a random-search machine is equivalent to (or better than) human intelligence, or we are not working on all possible optimization problems. Indeed, we are living in a sustained system, and all known such systems drive themselves to critical behavior, where large fluctuations are relatively frequent (the distribution is not Gaussian). This, with some additional hand-waving, leads one to consider (sparsely interacting) events, and, not surprisingly, the sparse tools of AI are among the most efficient learning algorithms. This neurally inspired algorithmic direction now has strong mathematical support (the so-called L1 Magic) from some of the best mathematicians, who may not even know about the efforts of scientists working on neural computation.
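For concreteness, here is a minimal toy sketch of the sparsity idea (illustrative only, not the L1 Magic package; all data and parameters below are made up for the example): recovering a sparse signal from a small number of noisy linear measurements by L1-regularized least squares, solved with plain iterative soft-thresholding.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                         # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)     # random measurement matrix
b = A @ x_true + 0.01 * rng.normal(size=m)   # few, noisy measurements

lam = 0.05                                   # L1 penalty weight
step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - b))       # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Despite having far fewer measurements than unknowns, the L1 penalty recovers the sparse signal to good accuracy, which is the basic phenomenon the compressed-sensing mathematics makes precise.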

I think that evolution developed an efficient algorithm that can deal with the statistics of sustained systems, and that such algorithms can take many forms. Nonetheless, from an engineering point of view it is wise to keep an eye on the structure of the brain: the neural architecture of mammals, and in particular the hippocampal-entorhinal complex, considered a prototype of the sensory processing areas of the neocortex, could be of high interest.

Finally, let me argue from another direction: if feature extraction from big (and noisy) data were solved, do you know of anything relevant that we would still be missing for replicating (and surpassing) the cognitive (!) power of the brain? Because I don’t.
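To make the phrase "feature extraction from big and noisy data" concrete, here is one of its simplest possible forms, plain PCA via the SVD on synthetic data (a toy sketch only; the dimensions and noise level are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_dims, n_features = 1000, 100, 3
latent = rng.normal(size=(n_samples, n_features))            # hidden low-dimensional structure
mixing = rng.normal(size=(n_features, n_dims))
data = latent @ mixing + 0.5 * rng.normal(size=(n_samples, n_dims))  # noisy observations

centered = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
features = centered @ Vt[:n_features].T                      # project onto top principal directions

print("fraction of variance captured:", (S[:n_features] ** 2).sum() / (S ** 2).sum())
```

A few linear directions capture most of the structure in the noisy data; the open question in the comment is whether anything beyond (much more powerful) feature extraction of this general kind is still missing.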

By: Ted Howard NZ (Mon, 02 Jun 2014 03:02:02 +0000)
It has been clear to me since 1974 that the key to AGI is an efficient algorithm for abstracting patterns from a mass of information.
Ray has developed several such algorithms, and other groups are working on others.
To me it seems extremely dangerous to bring AGI to awareness prior to getting our own ethical house in order.
We need to demonstrate, through our dominant social systems, that we value all sentient life if we want a reasonably high probability of surviving an AGI coming to awareness. It must be clear to anyone who thinks deeply about it that markets cannot deliver abundance to all, because markets always value true abundance at zero.
We need to remove market values from their current dominance in our society, and demonstrate that we value life and liberty for all sentient life.
We can easily give everyone on the planet the essentials of survival and the tools to educate themselves and to do whatever they responsibly choose.
Markets don’t do that.
Markets institutionalise insecurity, and devalue anything abundant.

If we want AGI to see us as friends, we need to start demonstrating cooperation at the highest levels.

While I largely agree with Ray, I also see extreme risks in our current societal direction.
We need to get some algorithms in place that empower high-level cooperation and deliver basic abundance to all, or we are putting ourselves at serious risk.

By: Andres Agostini (Mon, 02 Jun 2014 01:21:59 +0000)
This is absolutely relevant and most important. Thank you for this great content!
