
Lately, media around the web have been bracing for robots — not time-traveling robots per se, but robot workers. Specifically, the increasing sophistication of artificial intelligence and improved engineering of robotics have spurred growing concern about what people will do when all the regular jobs are done by robots.

A variety of solutions have been proposed for this potential technological unemployment (we even had an entire Future of Work series dealing with this topic in March). Many of them suggest that there will still be things humans can do that robots can't. But what are they?

Article: Harnessing “Black Holes”: The Large Hadron Collider – Ultimate Weapon of Mass Destruction



Why the LHC must be shut down

Quoted: “Once you really solve a problem like direct brain-computer interface … when brains and computers can interact directly, to take just one example, that’s it, that’s the end of history, that’s the end of biology as we know it. Nobody has a clue what will happen once you solve this. If life can basically break out of the organic realm into the vastness of the inorganic realm, you cannot even begin to imagine what the consequences will be, because your imagination at present is organic. So if there is a point of Singularity, as it’s often referred to, by definition, we have no way of even starting to imagine what’s happening beyond that.”

Read the article here > http://www.theamericanconservative.com/dreher/silicon-valley-mordor/

New Book: An Irreverent Singularity Funcyclopedia, by Mondo 2000’s R.U. Sirius.


Quoted: “Legendary cyberculture icon (and iconoclast) R.U. Sirius and Jay Cornell have written a delicious funcyclopedia of the Singularity, transhumanism, and radical futurism, just published on January 1.” And: “The book, “Transcendence – The Disinformation Encyclopedia of Transhumanism and the Singularity,” is a collection of alphabetically-ordered short chapters about artificial intelligence, cognitive science, genomics, information technology, nanotechnology, neuroscience, space exploration, synthetic biology, robotics, and virtual worlds. Entries range from Cloning and Cyborg Feminism to Designer Babies and Memory-Editing Drugs.” And: “If you are young and don’t remember the 1980s you should know that, before Wired magazine, the cyberculture magazine Mondo 2000 edited by R.U. Sirius covered dangerous hacking, new media and cyberpunk topics such as virtual reality and smart drugs, with an anarchic and subversive slant. As it often happens the more sedate Wired, a watered-down later version of Mondo 2000, was much more successful and went mainstream.”

Read the article here > https://hacked.com/irreverent-singularity-funcyclopedia-mondo-2000s-r-u-sirius/

This archive file was compiled from an interview conducted at the Googleplex in Mountain View, California, in 2013. In the discussion, Amit Singhal, a key figure in the evolution of Google's search engine, broadly outlined the significant hurdles that stood in the way of achieving one of his long-held dreams: creating a true 'conversational' search engine. He also sketched out a vision of how initial versions of such a system would, and just as importantly would not, attempt to assist the individuals it interacted with.

Though the vision was by design more limited and focused than a system capable of passing the famous Turing test, it nonetheless raised stimulating questions about the future relationships of humans and their ‘artificial’ assistants.

More about Amit Singhal:

Wikipedia:
en.wikipedia.org/wiki/Amit_Singhal

Google Search:
en.wikipedia.org/wiki/Google_Search

Would you have your brain preserved? Do you believe your brain is the essence of you?

To the noted American neuroscientist and futurist Ken Hayworth, the answer is an emphatic "Yes." He is currently developing machines and techniques to map brain tissue at the nanometer scale — the key, he argues, to encoding our individual identities.

A self-described transhumanist and president of the Brain Preservation Foundation, Hayworth aims to perfect existing preservation techniques, such as cryonics, and to explore emerging alternatives that could change the status quo. Currently, no brain preservation option offers systematic, scientific evidence of how much human brain tissue is actually preserved under today's experimental methods. Such methods include vitrification, the procedure used in cryonics to try to prevent human organs from being destroyed by ice formation when tissue is cooled for cryopreservation.

Hayworth believes we can achieve his vision of preserving an entire human brain at an accepted and proven standard within the next decade. If Hayworth is right, is there a countdown to immortality?

To find out more, please take a look at the Galactic Public Archives’ newest video. We’d love to hear your thoughts.

Cheers!

Question: A Counterpoint to the Technological Singularity?


Douglas Hofstadter, a professor of cognitive science at Indiana University, said of the book The Singularity Is Near (ISBN: 978-0143037880):

“ … A very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad …”

And, for another instance:

“… Technology is the savior for everything. That’s the point of this course. Technology is accelerating, everything is going to be good, technology is your friend … I think that’s a load of crap …” By Dr. Jonathan White

Back to the core of the White Swan argument:

That debate can be entertained at some future renaissance, not now. Opposing this idea would be outrageously counterproductive to ensuring the survival of Earth's inhabitants.

People who preempt outrageous Black Swans well in advance, engaging in supernatural and preternatural preparations for known and unknown outliers, and thus practicing, in all practicality, the successful and prevailing White Swan Transformative and Integrative Risk Management interdisciplinary problem-solving methodology, include:

(1.-) Sir Martin Rees PhD (cosmologist and astrophysicist), Astronomer Royal, Cambridge University Professor and former Royal Society President.

(2.-) Dr. Stephen William Hawking CH CBE FRS FRSA is an English theoretical physicist, cosmologist, author and Director of Research at the Centre for Theoretical Cosmology within the University of Cambridge. Formerly: Lucasian Professor of Mathematics at the University of Cambridge.

(3.-) Prof. Nick Bostrom Ph.D. is a Swedish philosopher at St. Cross College, University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, the reversal test, and consequentialism. He holds a PhD from the London School of Economics (2000). He is the founding director of both The Future of Humanity Institute and the Oxford Martin Programme on the Impacts of Future Technology as part of the Oxford Martin School at Oxford University.

(4.-) The US National Intelligence Council (NIC) […] The National Intelligence Council supports the Director of National Intelligence in his role as head of the Intelligence Community (IC) and is the IC's center for long-term strategic analysis […] Since its establishment in 1979, the NIC has served as a bridge between the intelligence and policy communities, a source of deep substantive expertise on intelligence issues, and a facilitator of Intelligence Community collaboration and outreach […] The NIC's National Intelligence Officers — drawn from government, academia, and the private sector — are the Intelligence Community's senior experts on a range of regional and functional issues.

(5.-) U.S. Homeland Security’s FEMA (Federal Emergency Management Agency).

(6.-) The CIA or any other U.S. Government agencies.

(7.-) Stanford Research Institute (now SRI International).

(8.-) GBN (Global Business Network).

(9.-) Royal Dutch Shell.

(10.-) British Doomsday Preppers.

(11.-) Canadian Doomsday Preppers.

(12.-) Australian Doomsday Preppers.

(13.-) American Doomsday Preppers.

(14.-) Disruptional Singularity Book (ASIN: B00KQOEYLG).

(15.-) Scientific Prophets of Doom at https://www.youtube.com/watch?v=9bUe2-7jjtY

White Swans are always prepared for known and unknown outliers, and they most fluidly change the theater of operation by permanently updating and upgrading their designated preparations.

Authored by Mr. Andres Agostini
White Swan Book Author
www.linkedin.com/in/andresagostini
www.amazon.com/author/Agostini

Among transhumanists, Nick Bostrom is well-known for promoting the idea of 'existential risks': potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity, but not everyone, is eliminated. (Certainly this captures the worst-case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation's chief analyst, Herman Kahn (the model for Stanley Kubrick's Dr Strangelove), routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a 'business as usual' policy orientation.

Here it is worth recalling that the Cold War succeeded on its own terms: none of the worst-case scenarios were ever realized, even though many people were mentally prepared to make the most of the projected adversities. This is one way to think about how the internet itself arose, courtesy of the US Defense Department's interest in maintaining scientific communications in the face of attack. In other words, rather than trying to prevent every possible catastrophe, the way to deal with 'unknown unknowns' is to imagine that some of them have already come to pass and to redesign the world accordingly so that you can carry on regardless. Thus, Herman Kahn's projection of a thermonuclear future provided grounds in the 1960s for the promotion of, say, racially mixed marriages, disability-friendly environments, and the 'do more with less' mentality that came to characterize the ecology movement.

Kahn was a true proactionary thinker. For him, the threat of global nuclear war raised Joseph Schumpeter’s idea of ‘creative destruction’ to a higher plane, inspiring social innovations that would be otherwise difficult to achieve by conventional politics. Historians have long noted that modern warfare has promoted spikes in innovation that in times of peace are then subject to diffusion, as the relevant industries redeploy for civilian purposes. We might think of this tendency, in mechanical terms, as system ‘overdesign’ (i.e. preparing for the worst but benefitting even if the worst doesn’t happen) or, more organically, as a vaccine that converts a potential liability into an actual benefit.

In either case, existential risk is regarded in broadly positive terms, specifically as an unprecedented opportunity to extend the range of human capability, even under radically changed circumstances. This sense of 'antifragility', as the great 'black swan' detector Nassim Nicholas Taleb would put it, is the hallmark of our 'risk intelligence', the phrase that the British philosopher Dylan Evans has coined for a demonstrated capacity people have to make step-change improvements in their lives in the face of radical uncertainty. From this standpoint, Bostrom's superintelligence concept severely underestimates the adaptive capacity of human intelligence.

Perhaps the best way to see just how much Bostrom shortchanges humanity is to note that his crucial thought experiment requires a strong ontological distinction between humans and superintelligent artefacts. Where are the cyborgs in this doomsday scenario? Reading Bostrom reminds me that science fiction did indeed make progress in the twentieth century, from the world of Karel Čapek's Rossum's Universal Robots in 1920 to the much subtler blending of human and computer futures in the works of William Gibson and others in more recent times.

Bostrom's superintelligence scenario began to be handled in more sophisticated fashion after the end of the First World War, popularly under the guise of 'runaway technology', a topic that received its canonical formulation in Langdon Winner's 1977 Autonomous Technology: Technics out of Control, a classic in the field of science and technology studies. Back then, the main problem with superintelligent machines was that they would 'dehumanize' us, less because they might dominate us than because we might become like them, perhaps because we feel that we have invested our best qualities in them, very much like Ludwig Feuerbach's aetiology of the Judaeo-Christian God. Marxists gave the term 'alienation' a popular spin to capture this sentiment in the 1960s.

Nowadays, of course, matters have been complicated by the prospect of human and machine identities merging together. This goes beyond simply implanting silicon chips in one’s brain. Rather, it involves the complex migration and enhancement of human selves in cyberspace. (Sherry Turkle has been the premier ethnographer of this process in children.) That such developments are even possible points to a prospect that Bostrom refuses to consider, namely, that to be ‘human’ is to be only contingently located in the body of Homo sapiens. The name of our species – Homo sapiens – already gives away the game, because our distinguishing feature (so claimed Linnaeus) had nothing to do with our physical morphology but with the character of our minds. And might not such a ‘sapient’ mind better exist somewhere other than in the upright ape from which we have descended?

The prospects for transhumanism hang on the answer to this question. Aubrey de Grey's indefinite life extension project is about Homo sapiens in its normal biological form. In contrast, Ray Kurzweil's 'singularity' talk of uploading our consciousness into indefinitely powerful computers suggests a complete abandonment of the ordinary human body. The lesson taught by Langdon Winner's historical account is that our primary existential risk does not come from alien annihilation but from what social psychologists call 'adaptive preference formation'. In other words, we come to want the sort of world that we think is most likely, simply because that offers us the greatest sense of security. Thus, the history of technology is full of cases in which humans have radically changed their lives to adjust to an innovation whose benefits they reckon outweigh the costs, even when both remain fundamentally incalculable. Success in the face of such 'existential risk' is then largely a matter of whether people, perhaps of the following generation, have made the value shifts necessary to see the changes as positive overall. But of course, it does not follow that those who fail to survive the transition, or who acquired their values before it, would draw a similar conclusion.

If the controversy over genetically modified organisms (GMOs) tells us something indisputable, it is this: GMO food products from corporations like Monsanto are suspected to endanger health. On the other hand, an individual’s right to genetically modify and even synthesize entire organisms as part of his dietary or medical regimen could someday be a human right.
The suspicion that agri-giant companies do harm by designing crops is legitimate, even if evidence of harmful GMOs is scant to absent. Based on their own priorities and actions, we should have no doubt that self-interested corporations disregard the rights and wellbeing of local producers and consumers. This makes agri-giants producing GMOs harmful and untrustworthy, regardless of whether individual GMO products are actually harmful.
Corporate interference in government of the sort opposed by the Occupy Movement is also connected with the GMO controversy, as the US government is accused of going to great lengths to protect “stakeholders” like Monsanto via the law. This makes the GMO controversy more of a business and political issue rather than a scientific one, as I argued in an essay published at the Institute for Ethics and Emerging Technologies (IEET). Attacks on science and scientists themselves over the GMO controversy are not justified, as the problem lies solely with a tiny handful of businessmen and corrupt politicians.
An emerging area that threatens to become as controversial as GMOs, if the American corporate stranglehold on innovation is allowed to shape its future, is synthetic biology. In his 2013 book, Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life, top synthetic biologist J. Craig Venter offers powerful words supporting a future shaped by ubiquitous synthetic biology in our lives:

"I can imagine designing simple animal forms that provide novel sources of nutrients and pharmaceuticals, customizing human stem cells to regenerate a damaged, old, or sick body. There will also be new ways to enhance the human body as well, such as boosting intelligence, adapting it to new environments such as radiation levels encountered in space, rejuvenating worn-out muscles, and so on."

In his own words, Venter’s vision is no less than “a new phase of evolution” for humanity. It offers what Venter calls the “real prize”: a family of designer bacteria “tailored to deal with pollution or to absorb excess carbon dioxide or even meet future fuel needs”. Greater than this, the existing tools of synthetic biology are transhumanist in nature because they create limitless means for humans to enhance themselves to deal with harsher environments and extend their lifespans.
While there should be little public harm in the eventual ubiquity of the technologies and information required to construct synthetic life, the problems of corporate oligopoly and political lobbying are threatening synthetic biology’s future as much as they threaten other facets of human progress. The best chance for an outcome that will be maximally beneficial for the world relies on synthetic biology taking a radically different direction to GM. That alternative direction, of course, is an open source future for synthetic biology, as called for by Canadian futurist Andrew Hessel and others.
Calling himself a “catalyst for open-source synthetic biology”, Hessel is one of the growing number of experts who reject biotechnology’s excessive use of patents. Nature notes that his Pink Army Cooperative venture relies instead on “freely available software and biological parts that could be combined in innovative ways to create individualized cancer treatments — without the need for massive upfront investments or a thicket of protective patents”.
While offering some support to the necessity of patents, J. Craig Venter more importantly praises the annual International Genetically Engineered Machine (iGEM) competition in his book as a means of encouraging innovation. He specifically names the Registry of Standard Biological Parts, an open source library from which to obtain BioBricks, and describes this as instrumental for synthetic biology innovation. Likened to bricks of Lego that can be snapped together with ease by the builder, BioBricks are prepared standard pieces of genetic code, with which living cells can be newly equipped and operated as microscopic chemical factories. This has enabled students and small companies to reprogram life itself, taking part in new discoveries and innovations that would have otherwise been impossible without the direct supervision of the world’s best-trained teams of biologists.
There is a similar movement towards popular synthetic biology by the name of biohacking, promoted by such experts as Ellen Jorgensen. This compellingly matches the calls for greater autonomy for individuals and small companies in medicine and human enhancement. Unfortunately, despite their potential to greatly empower consumers and farmers, such developments have not yet found resonance with anti-GMO campaigners, whose outright rejection of biotechnology has been described as anti-science and “bio-luddite” by techno-progressives. It is for this reason that emphasizing the excellent potential of biotechnology for feeding and fuelling a world plagued by dwindling resources is important, and a focus on the ills of big business rather than imagined spectres emerging from science itself is vital.
The concerns of anti-GMO activists would be addressed better by offering support to an alternative in the form of “do-it-yourself” biotechnology, rather than rejecting sciences and industries that are already destined to be a fundamental part of humanity’s future. What needs to be made is a case for popular technology, in hope that we can reject the portrayal of all advanced technology as an ally of powerful states and corporations and instead unlock its future as a means of liberation from global exploitation and scarcity.
While there are strong arguments that current leading biotechnology companies feel more secure and perform better when they retain rigidly enforced intellectual property rights, Andrew Hessel rightly points out that the open source future is less about economic facts and figures than about culture. The truth is that there is a massive cultural transition taking place. We can see a growing hostility to patents, and an increasing popular enthusiasm for open source innovation, most promisingly among today’s internet-borne youth.
In describing a cultural transition, Hessel is acknowledging the importance of the emerging body of transnational youth whose only ideology is the claim that information wants to be free, and we find the same culture reflected in the values of organizations like WikiLeaks. Affecting every facet of science and technology, the elite of today’s youth are crying out for a more open, democratic, transparent and consumer-led future at every level.

By Harry J. Bentham

Originally published at h+ Magazine on 21 August 2014

Written by Singularity Hub
In his latest video, host of National Geographic’s Brain Games and techno-poet, Jason Silva, explores the universe’s tendency to self-organize. Biology, he says, seems to have agency and directionality toward greater complexity, and humans are the peak.

“It’s like human beings seem to be the cutting edge,” Silva says. “The evolutionary pinnacle of self-awareness becoming aware of its becoming.”
