Blog

Archive for the ‘cybercrime/malcode’ category: Page 217

Nov 14, 2013

The Disruptional Singularity

Posted by in categories: business, climatology, complex systems, cosmology, counterterrorism, cybercrime/malcode, defense, economics, education, engineering, ethics, existential risks, finance, futurism, nanotechnology, physics, policy, robotics/AI, science, singularity, supercomputing, sustainability, transparency

(Excerpt)

Beyond the managerial challenges (downside risks) posed by the exponential technologies of the Technological Singularity, whose inherent futuristic forces are already shaping the present and the future, there are also some grave global risks that many forms of management must tackle immediately.

These grave global risks have nothing to do with advanced science or technology. Many of these hazards stem from nature, while others are man-made.

For instance, these grave global risks, which embody the Disruptional Singularity, are geological, climatological, political, geopolitical, demographic, social, economic, financial, legal and environmental, among others. The Disruptional Singularity’s major risks are gravely threatening us right now, not later.

Read the full document at http://lnkd.in/bYP2nDC

Jun 5, 2013

The Impending Crisis of Data: Do We Need a Constitution of Information?

Posted by in categories: cybercrime/malcode, information science, media & arts

The recent scandal involving the surveillance of the Associated Press and Fox News by the United States Justice Department has focused attention on the erosion of privacy and freedom of speech in recent years. But before we simply attribute these events to the ethical failings of Attorney General Eric Holder and his staff, we also should consider the technological revolution powering this incident, and thousands like it. It would appear that bureaucrats simply are seduced by the ease with which information can be gathered and manipulated. At the rate that technologies for the collection and fabrication of information are evolving, what is now available to law enforcement and intelligence agencies in the United States, and around the world, will soon be available to individuals and small groups.

We must come to terms with the current information revolution and take the first steps to form global institutions that will assure that our society, and our governments, can continue to function through this chaotic and disconcerting period. The exponential increase in the power of computers will bring changes that go far beyond the limits of slow-moving human government. We will need to build substantial, long-term institutions in response to the crisis. It will not be a matter that can be solved by adding a new division to Homeland Security or Google.

We do not have any choice. To make light of the crisis means allowing shadowy organizations to usurp for themselves immense power through the collection and distortion of information. Failure to keep up with technological change in an institutional sense will mean that in the future government will be at best a symbolic façade with little real authority or capacity to respond to the threats of information manipulation. In the worst-case scenario, corporations and government agencies could degenerate into warring factions, a new form of feudalism in which invisible forces use their control of information to wage murky wars for global domination.

No degree of moral propriety among public servants, or corporate leaders, can stop the explosion of spying and the propagation of false information that we will witness over the next decade. The most significant factor behind this development will be Moore’s Law, which stipulates that the number of transistors that can be placed economically on a chip doubles roughly every 18 months (while the cost of storage has halved roughly every 14 months), and not the moral decline of citizens. This exponential increase in our capability to gather, store, share, alter and fabricate information of every form will offer tremendous opportunities for the development of new technologies. But the rate of change of computational power is so much faster than the rate at which human institutions can adapt — let alone the rate at which the human species evolves — that we will face devastating existential challenges to human civilization.
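As a rough back-of-the-envelope illustration of what that doubling rate implies (my own sketch, not a calculation from the essay), compounding an 18-month doubling period works out to roughly a hundredfold increase per decade:

```c
#include <math.h>
#include <stdio.h>

/* Sketch: growth factor implied by a fixed doubling period.
 * An 18-month doubling compounds to ~100x per decade and ~10,000x over twenty years. */
static double growth_factor(double years, double doubling_months)
{
    return pow(2.0, years * 12.0 / doubling_months);
}

int main(void)
{
    printf("10 years at 18-month doubling: ~%.0fx\n", growth_factor(10.0, 18.0));
    printf("20 years at 18-month doubling: ~%.0fx\n", growth_factor(20.0, 18.0));
    return 0;
}
```

That two-orders-of-magnitude-per-decade pace is the gap the essay argues slow-moving human institutions cannot close.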

Continue reading “The Impending Crisis of Data: Do We Need a Constitution of Information?” »

May 16, 2013

“Proposal for a Constitution of Information” from the Asia Institute

Posted by in categories: complex systems, cybercrime/malcode, transparency


Asia Institute Report

Proposal for a Constitution of Information
March 3, 2013
Emanuel Pastreich

Introduction

Continue reading “‘Proposal for a Constitution of Information’ from the Asia Institute” »

Apr 19, 2013

Bitcoin’s Dystopian Future

Posted by in categories: bitcoin, cybercrime/malcode, economics, ethics, finance, futurism, information science, lifeboat, open source, policy

I have seen the future of Bitcoin, and it is bleak.


The Promise of Bitcoin

If you were to peek into my bedroom at night (please don’t), there’s a good chance you would see my wife sleeping soundly while I stare at the ceiling, running thought experiments about where Bitcoin is going. Like many other people, I have come to the conclusion that distributed currencies like Bitcoin are going to eventually be recognized as the most important technological innovation of the decade, if not the century. It seems clear to me that the rise of distributed currencies presents the biggest (and riskiest) investment opportunity I am likely to see in my lifetime; perhaps in a thousand lifetimes. It is critically important to understand where Bitcoin is going, and I am determined to do so.

(more…)

Sep 9, 2012

The Recurring Parable of the AWOL Android

Posted by in categories: cybercrime/malcode, defense, ethics, media & arts, military, robotics/AI

Greetings to the Lifeboat Foundation community and blog readers! I’m Reno J. Tibke, creator of Anthrobotic.com and new advisory board member. This is my inaugural post, and I’m honored to be here and grateful for the opportunity to contribute a somewhat… different voice to technology coverage and commentary. Thanks for reading.

This Here Battle Droid’s Gone Haywire
There’s a new semi-indy sci-fi web series up: DR0NE. After one episode, it’s looking pretty clear that the series is most likely going to explore shenanigans that invariably crop up when we start using semi-autonomous drones/robots to do some serious destruction & murdering. Episode 1 is pretty and well made, and stars 237, the android pictured above looking a lot like Abe Sapien’s battle exoskeleton. Active duty drones here in realityland are not yet humanoid, but now that militaries, law enforcement, the USDA, private companies, and even citizens are seriously ramping up drone usage by land, air, and sea, the subject is timely and watching this fiction is totally recommended.

(Update: DR0NE, Episode 2 now available)

Continue reading “The Recurring Parable of the AWOL Android” »

Aug 19, 2012

Artilects Soon to Come

Posted by in categories: complex systems, counterterrorism, cybercrime/malcode, defense, engineering, ethics, events, evolution, existential risks, futurism, information science, military, neuroscience, supercomputing

Whether via spintronics or some quantum breakthrough, artificial intelligence and the bizarre idea of intellects far greater than ours will soon have to be faced.

http://www.sciencedaily.com/releases/2012/08/120819153743.htm

Jan 20, 2012

Old UNIX/IBM control systems: Potential time bombs in Industry

Posted by in categories: cybercrime/malcode, defense, events, existential risks, military, nuclear energy

It may be a point that attracts little attention, as the millennium bug came with a lot of hoo-ha and went out with a whimper, but the impact it had on business was small because of all the hoo-ha, not in spite of it. So it is with some concern that I consider operating-system rollover dates as a potential hazard at major industrial operations such as nuclear power stations and warhead controls, where a software malfunction in out-dated control systems could, in the worst-case scenario, have disastrous implications.

The main dates of interest are 19 January 2038, by when systems that still count Unix time in a signed 32-bit integer need to have been replaced by (at least) their 64-bit equivalents, and 17 September 2042, when IBM mainframes whose time-of-day clock uses a 64-bit count need to be phased out.

Scare-mongering? Perhaps not. While all modern facilities will have the superior time representation, I question whether facilities built in the 70s and 80s, in particular those behind the old Iron Curtain, were or ever will be upgraded. This raises the concern that, for example, the old Soviet nuclear arsenal could become a major global threat within a few decades through malfunction if it is not decommissioned or its control systems upgraded. It is one thing for a bank statement to print the wrong date on your latest bill due to millennium-bug-type issues, but if automated fault-tolerance procedures contain code such as ‘if(time1 > time2+N) then initiate counter-measures’, that is quite a different matter entirely.
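To make the failure mode concrete, here is a minimal, hypothetical C sketch (my illustration, not code from any real control system) of how a timestamp comparison of the kind quoted above can silently invert once a signed 32-bit seconds counter wraps after 03:14:07 UTC on 19 January 2038:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch of a legacy watchdog check along the lines of
 * "if (time1 > time2 + N) then initiate counter-measures". */
int main(void)
{
    uint32_t raw_clock = 2147483640u;        /* seconds counter, 7 s before the 2^31 wrap */
    int32_t  time2     = (int32_t)raw_clock; /* timestamp of the last good heartbeat */

    raw_clock += 10u;                        /* ten seconds pass, crossing the 2^31 boundary */
    int32_t  time1     = (int32_t)raw_clock; /* on two's-complement hardware: a large negative value */

    int64_t  N = 5;                          /* permitted gap in seconds */

    printf("time2 = %" PRId32 ", time1 = %" PRId32 "\n", time2, time1);

    /* Widened to 64 bits here only to print the comparison the legacy logic makes. */
    if ((int64_t)time1 > (int64_t)time2 + N)
        puts("counter-measure branch taken");
    else
        puts("counter-measure branch skipped: time appears to have jumped backwards");

    return 0;
}
```

Depending on how the legacy code is written, the same wrap can just as easily cause a spurious trigger instead of a silent skip, and that ambiguity is exactly what makes un-audited 1970s-era control logic worrying.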

I believe this is a topic that warrants a higher profile lest it be forgotten. Fortunately the global community has a few decades on its hands to handle this particular issue, though all it takes is one uncooperative facility taking such a risk rather than performing the upgrades necessary to ensure no such ‘meltdowns’ occur. Tick-tock, tick-tock, tick-tock…

Nov 9, 2010

The Singularity Hypothesis: A Scientific and Philosophical Assessment

Posted by in categories: cybercrime/malcode, ethics, existential risks, futurism, robotics/AI

Call for Essays:

The Singularity Hypothesis
A Scientific and Philosophical Assessment

Edited volume, to appear in The Frontiers Collection, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions ‘straight from Cloud Cuckooland’? Should the notions of superintelligent machines, brain emulations and transhumans be ridiculed, or is it that skeptics are the ones who suffer from short-sightedness and ‘carbon chauvinism’? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

Seeking to promote this debate, this edited, peer-reviewed volume shall be concerned with scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that offer a scientific and philosophical analysis of this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.

Continue reading “The Singularity Hypothesis: A Scientific and Philosophical Assessment” »

Jul 30, 2010

Robots And Privacy

Posted by in categories: cybercrime/malcode, ethics, robotics/AI

Within the next few years, robots will move from the battlefield and the factory into our streets, offices, and homes. What impact will this transformative technology have on personal privacy? I begin to answer this question in a chapter on robots and privacy in the forthcoming book, Robot Ethics: The Ethical and Social Implications of Robotics (Cambridge: MIT Press).

I argue that robots will implicate privacy in at least three ways. First, they will vastly increase our capacity for surveillance. Robots can go places humans cannot go, see things humans cannot see. Recent developments include everything from remote-controlled insects to robots that can soften their bodies to squeeze through small enclosures.

Second, robots may introduce new points of access to historically private spaces such as the home. At least one study has shown that several of today’s commercially available robots can be remotely hacked, granting the attacker access to video and audio of the home. With sufficient process, governments will also be able to access robots connected to the Internet.

There are clearly ways to mitigate these implications. Strict policies could rein in police use of robots for surveillance, for instance; consumer protection laws could require adequate security. But there is a third way robots implicate privacy, related to their social meaning, that is not as readily addressed.

Continue reading “Robots And Privacy” »

Jun 2, 2010

New Terrorism: Five days in Manhattan

Posted by in categories: counterterrorism, cybercrime/malcode, defense, finance

Originally posted @ Perspective Intelligence

Two events centered on New York City, separated by five days, demonstrated the end of one phase of terrorism and the pending arrival of the next. The failed car-bombing in Times Square and the dizzying stock market crash less than a week later mark the bookends of terrorist eras.

The attempt by Faisal Shahzad to detonate a car bomb in Times Square was notable not just for its failure but also for the severely limited systemic impact a car bomb could have, even when exploding in a crowded urban center. Car bombs, or Vehicle-Borne IEDs, have a long history (incidentally, one of the first was the 1920 ‘cart and horse bomb’ on Wall Street, which killed 38 people). VBIEDs remain deadly as a tactic within an insurgency or warfare setting, but with regard to modern urban terrorism the world has moved on. We are now living within a highly virtualized system, and the dizzying stock-market crash on 6 May 2010 shows how vulnerable this system is to digital failure. While the NYSE building probably remains a symbolic target for some terrorists, a deadly and capable adversary would ignore this physical manifestation of the financial system and disrupt the data centers, software and routers that make the global financial system tick. Shahzad’s attempted car bomb was from another age and posed no overarching risk to western societies. The same cannot be said of the vulnerable and highly unstable financial system.

Computer aided crash (proof of concept for future cyber-attack)

Continue reading “New Terrorism: Five days in Manhattan” »