
If nothing else, Japan’s recent tragedy has brought the risk of current nuclear power plants back into focus. While it’s far too early to tell just how grave the Fukushima situation truly is, it is obvious that our best-laid plans are inadequate when it comes to engineering facilities to withstand cataclysmic-scale events.

Few places on the globe are as well prepared as Japan for earthquakes and the possibility of subsequent tsunamis. However, in spite of their preparedness — which was evidenced by the remarkably small number of casualties given the nature of the events that took place (can you imagine how many people would have perished had this same disaster struck somewhere else in the world?) — Japan’s ability to manage a damaged nuclear power plant was severely compromised.

As frightening as Japan’s situation is, what ought to frighten us even more is that there are many more nuclear power plants in equally vulnerable locations all over the globe. In California, for example, both the San Onofre and Diablo Canyon facilities are right on the coast (they both use ocean water for cooling) and the Diablo Canyon facility in particular is perilously close to a major fault.

Given what we’ve seen in Japan, the widely varying degrees of preparedness around the world, the age of many existing power plants and the consequences of even a single catastrophic containment failure, shouldn’t we be taking a long, hard look at nuclear power as a viable means of providing energy for the planet? Have we learned so little from Three Mile Island, Chernobyl, and now Fukushima? Just how capable are we, really, of dealing with a second, a third or a fourth disaster of this type? (And what if they were to happen simultaneously?) With so many existential risks completely beyond our control, does it make sense to add another one when there are other, lower-risk alternatives to nuclear energy within our reach?

Below is a Pearltree documenting the situation and management of the damaged Fukushima reactors. Obviously, the news is grave, but imagine if this same situation had transpired in Chile.
Fukushima Nuclear Plants

NOTE: to see the contents of any of the links in this pearltree, just mouse-over the pearl. To see the whole page, simply click it.

Forgive me my courage because I like you. The catastrophe at Fukushima is a testimony to human fallibility. For 3 years, an analogous trap has been opened up for the planet as a whole, but no one believes my proof: An 8-percent probability of the planet being shrunk to 2 cm in perhaps 5 years’ time if the LHC experiment at Geneva is continued.

500 planetary newspapers reported on my warnings in 2008, before the experiment fizzled. Since it was resumed in 2010, there has been a worldwide press curfew. Even the appeal by a court to at long last hold the scientific safety conference I called for (my only request) is quietly skirted by CERN. Just finding out about the truth is asking too much.

The reason is painful and has to do with Einstein and Japan (“I made one mistake in my life,” he said). No one believes any more that he was even greater in his youth. This is what I found out: The famous equivalence principle (between gravity and horizontal acceleration) of 1907 is even more powerful than known. What is known is that clocks tick more slowly further down in gravity, as he proved. But this time-change result does not stand alone: length, mass and charge are equally affected (the TeLeMaCh theorem, for T, L, M, Ch), as is easy to prove. Hence gravity is much more powerful than anticipated. Black holes have radically different properties, for example. And attempts are being made to generate black holes at the LHC.

Why the planet-wide press curfew since 2008? Apparently a Nobelist cooperating with the LHC put out the word that the new result is “absolute nonsense,” which would be gratifying to believe if true. But the press for some reason forgot to ask back: “Did you discuss your counterclaim with the author?” (No.) “Has anyone proved it?” (No.) “Is anyone ready to defend it publicly in dialog?” (No.)

So this is the largest possible accident to befall the trustworthiness of the media. The price everyone pays is a lack of protection for the planet. Einstein’s life was crushed after the atomic bomb. Can the planet expect to be saved by his “happiest thought,” as he always called the equivalence principle? I tremble asking you this question.

For J.O.R. (March 27, 2011)

The field of life extension is broad, ranging from regenerative medicine to disease prevention through nutritional supplements and phytomedicine. Although the relevance of longevity and disease prevention to existential risks is less apparent than the prevention of large-scale catastrophic scenarios, it is highly relevant to the future of our society. The development of healthy longevity, the efficiency of modern medicine in treating age-related diseases, and how well we handle upcoming public-health issues will have a major impact on our future over the next few decades. Therefore, the prospect of healthy life extension plays an important role at both a personal and a societal level.
From a personal perspective, a longevity-compatible lifestyle, nutrition and supplement regimen may not only help us stay active and live longer; optimizing our health and fitness also increases our energy, mental performance and capacity for social interaction. This aids our ability to work on the increasingly complex tasks of a 21st-century world that can make a positive impact on society, such as work on existential-risk awareness and problem-solving. Recently, I wrote a basic personal orientation on the dietary-supplement aspect of basic life extension, with an audience of transhumanists, technology advocates with a high future-shock level and open-minded scientists in mind, which is available here.
On a societal level, however, the aging of the population and the associated public-health issues are serious. A rapid increase in certain diseases of civilization, whose prevalence also climbs steeply with advanced age, is on the march. For example, type 2 diabetes is rapidly on its way to becoming an insurmountable problem for China, and the WHO projects COPD, the chronic lung disease caused by smoking and pollution, to be the third leading cause of death by 2030.
While the currently accelerating increase in diseases of civilization may not collapse society itself, the costs associated with an over-aging population could significantly damage societal order, collapse health systems and harm economies, given the presently insufficient state of medicine and prevention. The magnitude, urgency and broad spectrum of consequences of age-related diseases of civilization are captured very well in this 5-minute, fact-filled presentation by the LifeStar Foundation on serious upcoming issues of aging in our society; viewing is highly recommended. In short, a full-blown health crisis appears to be looming over many Western countries, including the US, due to the high prevalence of diseases of aging in a growing population. This may require more resources than are available if disease-prevention efforts are not stepped up as early as possible. In that case, the urgent action required to deal with such a crisis may deprive other technological sectors of time and resources, affecting organizations and governments, including their capacity to manage vital infrastructure, existential risks and the safe, sufficient progress of technology. Hence, neglecting this major upcoming health issue by failing to step up disease-prevention efforts in line with the latest biomedical knowledge may indirectly compromise our capability to handle existential risks.
It should be pointed out that not all measures aimed at improving public health and medicine need to be complex or expensive, as even existing biomedical knowledge is not being sufficiently applied. A major example is the epidemic of Vitamin D deficiency in the Western population, which was uncovered several years ago. In the last few years, the range of diseases that Vitamin D deficiency and Vitamin D therapy can influence has grown to include most cancers, diabetes, cardiovascular diseases, brain aging including Alzheimer’s disease, and many infectious diseases. Ironically, Vitamin D is one of the cheapest supplements available. Moreover, correcting an existing Vitamin D deficiency, which may affect as much as 80% of the Western population, may cut mortality risk in half. The related decrease in mortality would likely coincide with reduced morbidity and illness among elderly people, resulting in large savings of public healthcare and hospital funds, since Vitamin D effectively prevents and treats some of the most costly age-related diseases. The Life Extension Foundation, for example, has already offered a free initial supply to the U.S. population and shown that massive healthcare costs (and many lives) could be saved if every hospitalized patient were tested for Vitamin D and/or given the supplement; however, this offer was ignored by the US government. This is detailed in an article on the effects of widespread Vitamin D deficiency from the Life Extension Foundation, along with many references for the above health effects of Vitamin D at the end of that article.
To recapitulate, there are plenty of important reasons why a focus on disease prevention and regenerative medicine, by applying existing state-of-the-art biomedical knowledge as well as advancing key areas such as stem-cell research, rejuvenation technologies and nanomedicine, should be an urgent priority for advocates of existential-risk management today and during the next few decades.

I deeply sympathize with the Japanese victims of a lack of human caution regarding nuclear reactors. Is it compatible with this atonement if I desperately ask the victims to speak up with me against the next consciously incurred catastrophe, made in Switzerland? If the proof of danger remains undisproved, CERN is currently about to melt the earth’s mantle along with its core down to a 2-cm black hole in perhaps 5 years’ time, at a probability of 8 percent. A million nuclear power plants pale before the “European Centre for Nuclear Research.” CERN must not be allowed to go on shunning the scientific safety conference sternly advised by a Cologne court only six weeks ago.

I thank Lifeboat for distributing this message worldwide.

1) Mini black holes are both non-evaporating and uncharged.

2) The new unchargedness makes them much more likely to arise at the LHC (since electrons are no longer point-shaped, in confirmation of string theory).

3) When stuck inside matter, mini black holes grow exponentially as “miniquasars,” shrinking earth to 2 cm in perhaps 5 years’ time.

4) They go undetected by CERN’s detectors.

5) They cannot eat neutron star cores (CERN’s life insurance argument is misleading).

For almost a year, CERN has been trying to produce them. Last week, CERN resumed operation while openly shunning the scientific safety conference publicly demanded for three years and most recently advised by a Cologne court.
The world’s media, who do not believe that CERN is betraying them, are encouraged to ask one constructive question: Is a single physics Nobelist ready to disprove one of the above 5 points?

Ref.
i) O.E. Rossler, “Abraham-solution to Schwarzschild metric implies that CERN miniblack holes pose a planetary risk,” in: Vernetzte Wissenschaften — Crosslinks in Natural and Social Sciences (P.J. Plath and E.C. Hass, eds.), pp. 263–270. Logos-Verlag Berlin, July 2008; online:
http://www.wissensnavigator.com/documents/ottoroesslerminiblackhole.pdf
ii) O.E. Rossler, “A rational and moral and spiritual dilemma,” in: Personal and Spiritual Development in the World of Cultural Diversity, Vol. 5 (G.E. Lasker and K. Hiwaki, eds.), pp. 61–66. Int. Inst. Adv. Stud. Systems Research and Cybernetics, Tecumseh, July 2008; online:
http://www.wissensnavigator.com/documents/spiritualottoeroessler.pdf
iii) O.E. Rossler, TeLeMaCh theorem, http://www.wissensnavigator.com/documents/einsteins-equivale…t-l-m-.pdf

Ray Kurzweil is unique in having seen the unstoppable exponential growth of the computer revolution and extrapolated it correctly towards a point which he calls the “singularity” and projects about 50 years into the future. At that point, the brain power of all human beings combined will be surpassed by the digital revolution.

The theory of the singularity has two flaws: a repairable one and a hopefully not irreparable one. The repairable one has to do with the different use humans make of their brains compared with that of all animals on earth and presumably in the universe. This special use can, however, be clearly defined and, because of its preciousness, be exported. This idea of “galactic export” makes Kurzweil’s program even more attractive.

The second drawback is nothing Ray Kurzweil has anything to do with, being entirely the fault of the rest of humankind: the half century still needed to reach the singularity may no longer be available.

The reason for that is CERN. Even though it was presented in time with published proofs that its proton-colliding experiment will, with a probability of 8 percent, produce a resident, exponentially growing mini black hole eating earth inside out in perhaps 5 years’ time, CERN prefers not to quote those results or try to dismantle them before acting. Even the call by an administrative court (Cologne) to convene the overdue scientific safety conference before continuing was ignored when CERN re-ignited the machine a week ago.

This is most interesting news for singularity theorists. The majority of the currently living population of planet earth is unable to “think exponentially.” Can Ray Kurzweil or Lifeboat or the Singularity University somehow entice CERN into dialog before it is too late?

For J.O.R. (March 10, 2011)

Strong AI, or Artificial General Intelligence (AGI), stands for self-improving intelligent systems possessing the capacity to interact with theoretical and real-world problems with a flexibility similar to that of an intelligent living being, but with the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastics and cognitive science as well as traditional artificial intelligence. My aim in this post is to give a general readership a very basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI, such as machine learning algorithms and expert systems, is already heavily utilized in today’s real-world problems: mature machine learning algorithms may profitably exploit patterns in customer behaviour, find correlations in scientific data or even predict negotiation strategies, for example [1] [2], and genetic algorithms are put to similar use. With the semantic web, the next upcoming technology for organizing knowledge on the net, which deals with machine-interpretable understanding of words in the context of natural language, we may start inventing early pieces of the technology that will play a role in the future development of AGI. Semantic approaches come from computer science, sociology and current AI research, and promise to describe and ‘understand’ real-world concepts and to enable our computers to build interfaces to real-world concepts and their coherences more autonomously. Actually getting from expert systems to AGI will require approaches for bootstrapping self-improving systems and more research on cognition, but it must also involve crucial security aspects. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].
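As a concrete, if simplified, illustration of the kind of “classical AI” mentioned above, here is a minimal Python sketch (not from the article; the customer data and segment interpretation are invented, and scikit-learn is assumed to be installed) in which a plain clustering algorithm finds behavioural segments in purchase data:

```python
# A minimal, illustrative sketch of "classical AI": clustering hypothetical
# customer data into behavioural segments. Requires scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer features: [monthly purchases, average basket value]
customers = np.array([
    [2, 15.0], [3, 18.5], [1, 12.0],      # infrequent buyers, small baskets
    [20, 95.0], [25, 110.0], [22, 80.0],  # frequent buyers, large baskets
])

# Group the customers into two behavioural segments.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(model.labels_)           # e.g. [0 0 0 1 1 1]: two distinct segments
print(model.cluster_centers_)  # the "typical" customer of each segment
```

Nothing about this toy example is intelligent in the AGI sense; it simply shows how narrow, pattern-exploiting algorithms already operate on real-world data today.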

In the recent past, we have faced new kinds of security challenges: DoS attacks, email and PDF worms and a plethora of other malware, which sometimes even made it into military and other sensitive networks and stole credit cards and private data en masse. These were and are among the first serious security incidents of the Internet era. Still, all of them followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications, and of course human flaws (e.g. the emotional response exploited by the “ILOVEYOU” virus). To understand the implications of strong AI, one first has to realize that, if AGI takes off hard enough, there will probably no longer be any human-predictable hardware, software or interfaces around for long periods of time.

To grasp the new security implications, it’s important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is riddled with biases rooted in its biological evolution. For example, even the simplest mathematical rules can produce complex results that are hard to understand and predict by common sense. Cellular automata, for example, are simple rules for generating a new row of cells based on which cells, generated by the same rule, were observed in the previous step. Many of these rules can be encoded in as little as a few bytes, yet generate astounding complexity.

Cellular automaton, produced by a simple recursive formula
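To make this concrete, below is a minimal Python sketch (purely illustrative, not from the original article) of an elementary cellular automaton. Rule 30 is chosen arbitrarily; the whole rule fits in a single byte, yet the rows it prints develop a famously irregular pattern.

```python
# A minimal sketch of an elementary cellular automaton (Rule 30 chosen
# arbitrarily for illustration): the rule is just one byte, but the output
# is notoriously complex.
def step(cells, rule=30):
    # Each new cell depends only on its left/centre/right neighbours in the
    # previous row; the 8 possible neighbourhoods index into the 8 bits of
    # the rule number.
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31 + [1] + [0] * 31           # a single "on" cell in the middle
for _ in range(16):
    print("".join(".#"[c] for c in cells))  # print one generation per line
    cells = step(cells)
```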

The Fibonacci sequence is another popular example of unexpected complexity. Based on a very short recursive equation, the sequence generates a pattern of incremental growth that can be visualized as a complex spiral, resembling the design of a snail shell and many other patterns in nature. A combination of Fibonacci spirals, for example, can resemble the motif of the head of a sunflower. This ‘simple’ Fibonacci sequence has even been invoked to model some fundamental but important dynamics of systems as complex as the stock market and the global economy.

Sunflower head showing a Fibonacci sequence pattern
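The “very short recursive equation” itself is only a couple of lines of code. Below is a minimal, purely illustrative sketch that generates the sequence and shows the ratio of successive terms converging to the golden ratio, the number behind the spiral packing of sunflower heads.

```python
# A minimal sketch of the Fibonacci recursion and the convergence of the
# ratio of consecutive terms to the golden ratio (~1.618).
def fibonacci(n):
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])  # each term is the sum of the previous two
    return seq

fib = fibonacci(15)
print(fib)                # 1, 1, 2, 3, 5, 8, 13, ...
print(fib[-1] / fib[-2])  # ~1.618, the golden ratio
```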

Traditional software is many orders of magnitude higher in complexity than basic mathematical formulae, and thus many orders of magnitude less predictable. Artificial general intelligence may be expected to work with rules even more complex than those of low-level computer programs, of a complexity comparable to natural human language, which would place it yet several orders of magnitude higher in complexity than traditional software. The security implications have not yet been researched systematically, but they are likely to be at least as hard as one would expect.

Practical security is not about achieving perfection, but about mitigating risks to a minimum. A current consensus among strong-AI researchers is that we can only improve the chances for an AI to be friendly, i.e. an AI acting in a secure manner and having a positive rather than negative long-term effect on humanity [5], and that this must be a crucial design aspect from the beginning. Research into Friendly AI started out with a serious consideration of Asimov’s Laws of Robotics [6] and is based on the application of probabilistic models, cognitive science and social philosophy to AI research.

Many researchers who believe in the viability of AGI take it a step further and predict a technological singularity. Just like the assumed physical singularity that started our universe (the Big Bang), a technological singularity is expected to increase the rate of technological progress far beyond what we are used to from the history of humanity, i.e. beyond the current ‘laws’ of progress. Another important notion associated with the singularity is that we cannot predict even the most fundamental changes occurring after it, because things would, by definition, progress faster than we are currently able to predict. Therefore, in a similar way in which we believe the creation of the universe depended on its initial conditions (in the Big Bang case, the few physical constants from which the others can be derived), many researchers in this field believe that AI security strongly depends on initial conditions as well, i.e. on the design of the bootstrapping software. If we succeed in manufacturing a general-purpose decision-making mind, then its whole point would be self-modification and self-improvement. Hence, our direct control over it would be limited to its first iteration and its initial conditions, which we can influence mostly by getting the first iteration of its hardware and software design right.

Our approach to optimizing those initial conditions must consist of working as carefully as possible. Space technology is a useful example here, pointing in the general direction such development should take. In rocket science and space technology, all measurements and mathematical equations must be as precise as our current technological standards allow. Also, multiple redundancies must be present for every system, since every single aspect of a system can be expected to fail. Despite this, many rocket launches still fail today, although error rates are steadily improving.

Additionally, humans interacting with an AGI may be a major security risk themselves, as they may be convinced by an AGI to remove its limitations. Since an AGI can be expected to be very convincing if we expect it to exceed human intellect, we should not only focus on physical limitations but also on making the AGI ‘friendly’. But even in designing this ‘friendliness’, our minds are largely unprepared to deal with the consequences of the complexity of an AGI, because the way we perceive and deal with potential issues and risks stems from evolution. As a product of natural evolution, our behaviour helps us deal with animal predators, interact in human societies and care for our children, but not anticipate the complexity of man-made machines. These natural traits of human perception and cognition, being a result of evolution, are called cognitive biases.

Sadly, as helpful as they may be in natural (i.e., non-technological) environments, these are the very same behaviours that are often counterproductive when dealing with the unforeseeable complexity of our own technology and modern civilization. If you don’t yet see the primary importance of cognitive biases to the security of future AI, you’re probably in good company. But there are good reasons why this is a crucial issue that researchers, developers and users of future generations of general-purpose AI need to take into account. One of the major reasons for founding the earlier-mentioned Singularity Institute for AI [3] was to get the basics right, including grasping the cognitive biases that necessarily influence the technological design of AGI.

What do these considerations practically imply for the design of strong AI? Some of the traditional IT security issues that need to be addressed in computer programs are: input validation, access limitations, avoiding buffer overflows, safe conversion of data types, setting resource limits and secure error handling (a minimal code sketch of such measures is given further below). All of these are valid and important issues that must be addressed in any piece of software, including weak and strong AI. However, we must avoid underestimating the design goals for a strong AI, mitigating the risk on all levels from the beginning. To do this, we must care about more than the traditional IT security issues. An AGI will interface with the human mind, through text and direct communication and interaction. Thus, we must also estimate the errors that we may not see, and do our best to be aware of flaws in human logic and cognitive biases, which may include:

  • Loss aversion: “the disutility of giving up an object is greater than the utility associated with acquiring it.”
  • Positive outcome bias: the tendency, in prediction, to overestimate the probability of good things happening to oneself.
  • Bandwagon effect: the tendency to do (or believe) things because many other people do (or believe) the same.
  • Irrational escalation: the tendency to make irrational decisions based upon rational decisions in the past, or to justify actions already taken.
  • Omission bias: the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).

The cognitive biases above are a modest selection from Wikipedia’s list [7], which contains over a hundred more. Struggling with known cognitive biases, and with the social components involved, in complex technological situations may be quite familiar to many of us, from managing modern business processes to investing in the stock market. In fact, we should apply any general lessons learned from dealing with current technological complexity to AGI. For example, some of the most successful long-term investment strategies in the stock market are boring and strict, based mostly on safety, such as Buffett’s margin-of-safety concept. Yet even with all the factors gained from social and technological experience taken into account in an AGI design that strives to optimize both cognitive and IT security, its designers cannot afford to forget that perfect and complete security remains an illusion.
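As promised earlier, here is a minimal, purely illustrative sketch of some of the traditional IT-security measures listed above (input validation, resource limits, safe error handling). The limits, function name and messages are hypothetical, and the resource module used for the CPU cap is Unix-only.

```python
# A minimal sketch of traditional IT-security hygiene: input validation,
# resource limits and safe error handling. All names and limits here are
# hypothetical illustrations, not a prescribed design.
import resource  # Unix-only

MAX_CPU_SECONDS = 5
MAX_INPUT_BYTES = 10_000

def handle_request(raw_input: bytes) -> str:
    # Resource limit: cap CPU time for this process as a crude safeguard.
    resource.setrlimit(resource.RLIMIT_CPU, (MAX_CPU_SECONDS, MAX_CPU_SECONDS))

    # Input validation: reject oversized or non-UTF-8 payloads outright.
    if len(raw_input) > MAX_INPUT_BYTES:
        raise ValueError("input too large")
    try:
        text = raw_input.decode("utf-8")
    except UnicodeDecodeError:
        # Safe error handling: fail closed with a generic message and never
        # echo untrusted bytes back to the caller.
        raise ValueError("input is not valid UTF-8")

    return f"processed {len(text)} characters"

if __name__ == "__main__":
    print(handle_request(b"hello world"))
```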

References

[1] Chen, M., Chiu, A. & Chang, H., 2005. Mining changes in customer behavior in retail marketing. Expert Systems with Applications, 28(4), 773–781.
[2] Oliver, J., 1997. A Machine Learning Approach to Automated Negotiation and Prospects for Electronic Commerce. Available at: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.9115 [Accessed Feb 25, 2011].
[3] The Singularity Institute for Artificial intelligence: http://singinst.org/
[4] For the Lifeboat Foundation’s dedicated program, see: https://lifeboat.com/ex/ai.shield
[5] Yudkowsky, E., 2006. Artificial Intelligence as a Positive and Negative Factor in Global Risk. In: Global Catastrophic Risks, Oxford University Press, 2007.
[6] See http://en.wikipedia.org/wiki/Three_Laws_of_Robotics and http://en.wikipedia.org/wiki/Friendly_AI, Accessed Feb 25, 2011
[7] For a list of cognitive biases, see http://en.wikipedia.org/wiki/Cognitive_biases, Accessed Feb 25, 2011

Einstein saw that clocks located “more downstairs” in an accelerating rocket predictably tick slower. This was his “happiest thought” as he often said.
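In its standard textbook form, a clock sitting a height h lower in a rocket of constant proper acceleration g, or equivalently lower in a gravitational field of strength g, runs slow by the fraction gh/c² to first order: Δt_lower ≈ (1 − gh/c²) · Δt_upper, which at the Earth’s surface amounts to roughly one part in 10^16 per metre of height difference.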

However, as everything looks normal on the lower floor, the normal-appearing photons generated there actually have less mass-energy. So do all local masses there, by general covariance, and hence also all associated charges down there.

The last two implications were overlooked for a century. “This cannot be,” more than 30 renowned scientists declared, to let a prestigious experiment with which they have ties appear innocuous.

This would make an ideal script for movie makers and a bonanza for metrologists. But why the political undertones above? Because, like the bomb, this new crumb from Einstein’s table has a potentially unbounded impact. Only if it is appreciated within a few days’ time can all human beings — including the Egyptians — breathe freely again.

This appreciation is vital for the planet — before the LHC machine at CERN is re-ignited within a matter of days. No one at CERN disputes that the finding radically alters the safety equation. They only claim that the result is “absolute nonsense” and not even worth discussing publicly.

CERN says there is “zero risk” of the planet being shrunk to 2 cm in perhaps five years’ time — I say the risk is 8 percent if the machine continues. This clearly deserves a mediating conference, as a judge strongly advised CERN on January 27, 2011 at a court hearing in Cologne, Germany (13 K 5693/08).

To insist on clarification about the “ultimate slow bomb at CERN” is a logical necessity. Is any couple in love or any parent NOT joining me in demanding the public safety conference before it is too late?

Otto E. Rossler, chaos researcher, University of Tubingen, Germany (For J.O.R.)

- submitted to the District Attorney of Tubingen, to the Administrative Court of Cologne, to the Federal Constitutional Court (BVerfG) of Germany, to the International Court for Crimes Against Humanity, and to the Security Council of the United Nations -

by Otto E. Rössler, Institute for Physical and Theoretical Chemistry, University of Tubingen, Auf der Morgenstelle A, 72076 Tubingen, Germany

The results of my group represent fundamental research in the fields of general relativity, quantum mechanics and chaos theory. Several independent findings obtained in these disciplines jointly point to a danger — almost as if Nature had set a trap for humankind should it fail to watch out.

MAIN RESULT. It concerns BLACK HOLES and consists of 10 sub-results

Black holes are different from what was previously thought and is still presupposed by experimentalists. It is much as it was with the Eniwetak hydrogen bomb test, where incorrect physical calculations caused a catastrophe — fortunately a localized one at the time. Four Tubingen theorems (gothic-R theorem, TeLeMaCh theorem, miniquasar theorem, superfluidity theorem) entail 10 new consequences:

1) Black holes DO NOT EVAPORATE — hence they can only grow.

2) Artificial black holes generated at the LHC thus are undetectable at first.

3) Black holes are uncharged, so the faster majority pass right through the earth’s and the sun’s matter.

4) Only the slowest artificial ones — below 11 km/sec — will stay inside earth.

5) Inside matter, a resident black hole will not grow linearly but rather — via self-organization — form a so-called “miniquasar”: an electro-gravitational engine that grows exponentially, hence shrinking the earth to 2 cm in a few years’ time.

6) Since black holes are uncharged, charged elementary particles conversely can no longer be maximally small (“point-shaped”). Hence space is “bored open” in the small as predicted by the string and loop theories.

7) Therefore, the probability of black holes being generated by the LHC experiment is heavily boosted up to about 10 percent at the energy of 7 and (planned soon) 8 TeV.

8) This high probability was apparently not yet reached in 2010, since the originally planned cumulative luminosity was not achieved. But the higher-energetic second phase of proton collisions, scheduled to start in February 2011, is bound to reach that level.

9) Black holes produced in natural particle collisions (cosmic ray protons colliding with surface protons of celestial bodies including earth) are much too fast to get stuck inside matter and hence are innocuous.

10) The only exception is ultra-dense neutron stars. However, their superfluid “core” is frictionless by virtue of quantum mechanics. Ultra-fast mini black holes that get stuck in the “crust” can grow there only to a limited weight before sinking into the core — where they stop growing. Hence the empirical persistence of neutron stars is NOT the safety guarantee CERN claims it to be.

MAIN QUESTION: Why do the CERN representatives disregard the above results? (Ten possible reasons)

1, The novelty of those results.

2, The limited dissemination of the above results. So far, only three pertinent papers have appeared in print: two in conference proceedings in July 2008 and one in an online science journal in 2010. CERN has never quoted these results, which were sent to it first as preprints, in its “safety reports” (never updated for two and a half years). The more recent relevant results are still confined to the Internet.

3, The a priori improbability that several results stemming from independent areas of science would “conspire” to form a threat rather than cancel out in this respect. There seems to be no historical precedent for this.

4, The decades-long intervals between new results in general relativity ensure that new findings meet with maximum skepticism at first.

5, One finding — the unchargedness result (Ch in TeLeMaCh) — dethrones a two-centuries-old physical law, that of charge conservation.

6, The fact that the large planetary community of string theorists suddenly holds an “almost too good” result in its hands paradoxically causes it to keep a low profile rather than triumph.

7, The waned spirit of progress in fundamental physics after its results too often proved to be “Greek gifts.”

8, The LHC experiment is the largest and most tightly knit collective scientific effort in history.

9, A fear of losing sponsors and political support for subsequent mega-projects if a potential safety gap were admitted.

10, The world-wide adoption of high-school type undergraduate curricula in place of the previous self-responsible style of studying, which has the side effect that collective authority acquires an undue weight.

SOCIETY’S FAILURE

Why has the “scientific safety conference,” publicly demanded on April 18, 2008, not been taken up by any group on the planet? Nothing but FALSIFICATION of the presented scientific results was and is being asked for. Falsification of a single one will wipe out the danger. A week of discussion might suffice to reach a consensus.

Neither politics nor the media have realized up until now that not a single visible scientist on the planet assumes responsibility for the alleged falsity of the results presented. In particular, no individual stands up to defend his disproved counterclaims (the number of specialists who entered the ring in the first place can be counted on one hand). This simple fact — not a single open adversary — has escaped the attention of media people and politicians up until now.

Neither group dares confront a worldwide interest lobby, even though for once it is not money that is at stake but only borrowed authority. It is almost as if the grand old men of 20th-century science had left no successors, nor had the gifted philosophers and writers (I exempt Paul Virilio). Bringing oneself up to date on a given topic paradoxically seems to have become harder in the age of the Internet.

Thus there are no culprits? None except for myself, who wrongly thought that painful words (like “risk of planetocaust”) could have a wake-up effect at the last moment. The real reason for the delayed global awakening to the danger may lie with this communication error, made by someone who knows what it is to lose a child. In the second place, my personal friends Lorenz, von Weizsäcker, Wheeler and DeWitt are no longer among us.

CONCLUSIONS

I therefore appeal to the above-named high legal and political bodies to rule rapidly that the long overdue scientific safety conference take place before the LHC experiment is allowed to resume in mid-February 2011. Or, in the case of a delay of the conference beyond that date, to prohibit resumption of the experiment before the conference has taken place.

I accept that I will make a terrible fool of myself if at long last a scientist succeeds in falsifying a single one of the above 10 scientific findings (or 4 theorems). This is my risk and my hope at the same time. I ask the world’s forgiveness for insisting that my possibly deficient state of knowledge be set straight before the largest experiment in history is allowed to continue.

However, the youngest ship’s boy in the crow’s nest who believes he recognizes something on the horizon has the acknowledged duty to insist on getting a hearing. I humbly ask the high bodies mentioned not to hold this fact against me and to rule in accordance with my proposition: first clarification, then continuation. Otherwise it would be madness, even if in retrospect it proved innocuous. Would it not?

Sincerely yours,

Otto E. Rössler, Chaos Researcher
2011/01/14
(For J.O.R.)


What do Singularitarianism and popular Western religion have in common? More than you might imagine. A thumbnail evaluation of both ends of the American technocentric intelligence spectrum reveals remarkable similarities in their respective narrative constructions and, naturally, amusing disparities. It would appear that all humans, regardless of our respective beliefs, express goal-oriented hardwiring that demands a neatly constructed story to frame our experiences.

Be you a technophile, you are eagerly awaiting, with perhaps equal parts hope and fear, the moment when artificial general intelligence surpasses human intelligence. You don’t know exactly how this new, more cunning intelligence will react to humans, but you’re fairly certain that humanity might well be in a bit of trouble, or at the very least, have some unique competition.

Be you a technophobe, you shun the trappings of in-depth science and technology involvement, save for a superficial interaction with the rudimentary elements of technology, which likely do not extend much further than your home computer, cell phone, automobile, and/or microwave oven. As a technophobe, you might even consider yourself religious, and if you’re a Christian, you might well be waiting for the second coming, the rapture.

Both scenarios lead humanity to ironically similar destinations, in which humankind becomes either marginalized or largely vestigial.

It’s difficult to parse either eventuality with observant members of the other’s belief system. If you ask a group of technophiles what they think of the idea of the rapture, you will likely be laughed at or drowned in a tidal wave of atheist drool. The very thought of some magical force eviscerating an entire religious population in one eschatological fell swoop might be too much for some science and tech geeks, and medical attention, or at the very least a warehouse-quantity dose of smelling salts, might be in order.

Conversely, to the religiously observant, the notion of the singularity might exist in terms too technical to even theoretically digest, or might represent something entirely dark and sinister that seems to fulfill their own belief system’s end game: a kind of techno-holocaust that reifies their purported faith.

The objective reality of both scenarios will be very different from either envisioned teleology. Reality’s shades of gray have a way of making foolish even the wisest individual’s predictions.

In my personal life, I too believed that the publication of my latest and most ambitious work, explaining the decidedly broad-scope Parent Star Theory, would constitute an end result of significant consequence, much like the popular narrative surrounding the moment of the singularity; that some great finish line would be reached. The truth, however, is that just like the singularity, my own narrativized moment was not a precisely secured end, but a distinct moment of beginning, of conception and commitment. Not an arrival but a departure; a bold embarkation without a clear end in sight.

Rather than answers, the coming singularity should provoke additional questions. How do we proceed? Where do we go from here? If the fundamental rules in the calculus of the human equation are changing, then how must we adapt? If the next stage of humanity exists on a post-scarcity planet, what then will be our larger goals, our new quest as a global human force?

Humanity must recognize that the idea of a narrative is indeed useful, so long as that narrative maintains some aspect of open-endedness. We might well need that consequential beginning-middle-end, if only to be reminded that each end most often leads to a new beginning.

Written by Zachary Urbina, Founder, Cozy Dark