Archive for the ‘existential risks’ category: Page 139

Mar 27, 2011

Short Speech to the World

Posted by in categories: existential risks, particle physics

Forgive my boldness; I speak because I care about you. The catastrophe at Fukushima is a testimony to human fallibility. For three years, an analogous trap has been open for the planet as a whole, but no one believes my proof: an 8-percent probability of the planet being shrunk to 2 cm in perhaps five years’ time if the LHC experiment at Geneva is continued.

In 2008, 500 newspapers around the world reported on my warnings, before the experiment fizzled. Since it was resumed in 2010, there has been a worldwide press blackout. Even the appeal by a court that CERN at long last admit the scientific safety conference called for (my only request) is quietly skirted. Just finding out the truth is apparently asking too much.

The reason is painful and has to do with Einstein and Japan (“I made one mistake in my life,” he said). No one believes any more that he was even greater in his youth. This is what I found out: the famous equivalence principle of 1907 (between gravity and constant acceleration) is even more powerful than known. What is known is that clocks tick more slowly farther down in a gravitational field, as he proved. But this time-change result does not stand alone: length, mass and charge are equally affected (the TeLeMaCh theorem, for T, L, M, Ch), as is easy to prove. Hence gravity is much more powerful than anticipated. Black holes, for example, have radically different properties. And black holes are what the LHC is trying to generate.
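In formulas, the known part reads as follows (a minimal sketch, with g the rocket’s acceleration, d the height difference between the two floors and c the speed of light); the extension from T to L, M and Ch is my claim, not yet textbook physics:

```latex
% Standard 1907 result: a clock on the lower floor, a height d below the
% top of a rocket with constant acceleration g (equivalently, lower in a
% uniform gravitational field), ticks more slowly:
\[
  \frac{d\tau_{\text{low}}}{d\tau_{\text{high}}}
  \;\approx\; 1 - \frac{g\,d}{c^{2}}
\]
% Claimed TeLeMaCh extension (disputed, not textbook physics): the same
% factor is asserted to apply to local length, mass and charge as well.
```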

Why the planet-wide press blackout since 2008? Apparently a Nobelist cooperating with the LHC issued the watchword that the new result is “absolute nonsense,” which would be gratifying to believe if it were true. But the press somehow forgot to ask back: “Did you discuss your counterclaim with the author?” (No.) “Has anyone proved it?” (No.) “Is anyone ready to defend it publicly in dialog?” (No.)

Continue reading “Short Speech to the World” »

Mar 24, 2011

The Existential Importance of Life Extension

Posted by in categories: biological, biotech/medical, ethics, existential risks, life extension

The field of life extension is broad, ranging from regenerative medicine to disease prevention through nutritional supplements and phytomedicine. Although the relevance of longevity and disease prevention to existential risks is less apparent than that of preventing large-scale catastrophic scenarios, it matters greatly for the future of our society. The development of healthy longevity, the efficiency of modern medicine in treating age-related diseases, and how well we handle upcoming public-health issues will shape our short-term future over the next few decades. The prospect of healthy life extension therefore plays an important role at both the personal and the societal level.

From a personal perspective, a longevity-compatible lifestyle, nutrition and supplement regimen may not only help us stay active and live longer; optimizing our health and fitness also increases our energy, mental performance and capacity for social interaction. This supports our ability to work on the increasingly complex tasks of a 21st-century world that can make a positive impact on society, such as raising existential-risk awareness and solving the associated problems. I recently wrote a basic personal orientation on the dietary-supplement side of life extension, with an audience of transhumanists, technology advocates with a high future-shock level and open-minded scientists in mind; it is available here.

On a societal level, however, the aging of the population and the associated public-health issues are serious. Some diseases of civilization, whose prevalence also climbs steeply with advanced age, are increasing rapidly. Type 2 diabetes, for example, is on its way to becoming an insurmountable problem for China, and the WHO projects COPD, the chronic lung disease caused by smoking and pollution, to be the third leading cause of death by 2030.

While the accelerating increase of diseases of civilization may not collapse society by itself, the costs associated with an over-aging population could significantly damage societal order, collapse health systems and hurt economies, given the presently insufficient state of medicine and prevention. The magnitude, urgency and broad consequences of these age-related diseases are captured very well in a fact-filled five-minute presentation by the LifeStar Foundation on the serious upcoming issues of aging in our society; viewing it is highly recommended. In short, a full-blown health crisis appears to be looming over many Western countries, including the US, owing to the high prevalence of diseases of aging in a growing population. It may demand more resources than are available if disease-prevention efforts are not stepped up as early as possible. The urgent action then required to deal with such a crisis could deprive other technological sectors of time and resources, affecting organizations and governments, including their capacity to manage vital infrastructure, handle existential risks and plan for safe and sufficient technological progress. Hence, neglecting this major upcoming health issue, by failing to step up disease prevention in line with the latest biomedical knowledge, may indirectly weaken our capability to handle existential risks.

Continue reading “The Existential Importance of Life Extension” »

Mar 14, 2011

“CERN Ignores Scientific Proof That Its Current Experiment Puts Earth in Jeopardy”

Posted by in categories: existential risks, particle physics

I deeply sympathize with the Japanese victims of a lack of human caution regarding nuclear reactors. Is it compatible with this atonement if I desperately ask the victims to speak up with me against the next consciously incurred catastrophe, made in Switzerland? If the proof of danger remains undisproved, CERN is currently about to melt the earth’s mantle along with its core down to a 2-cm black hole in perhaps five years’ time, at a probability of 8 percent. A million nuclear power plants pale before the “European Centre for Nuclear Research.” CERN must not be allowed to go on shunning the scientific safety conference sternly advised by a Cologne court only six weeks ago.

I thank Lifeboat for distributing this message worldwide.

Mar 12, 2011

Five Results on Mini-Black Holes Left Undiscussed by CERN for 3 Years

Posted by in categories: existential risks, particle physics

1) Mini black holes are both non-evaporating and uncharged.

2) This newly found unchargedness makes them much more likely to arise in the LHC (since electrons are then no longer point-shaped, in confirmation of string theory).

3) When stuck inside matter, mini black holes grow exponentially as “miniquasars,” shrinking Earth to 2 cm in perhaps five years’ time (a rough arithmetic sketch follows this list).

4) They go undetected by CERN’s detectors.
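For scale, here is the arithmetic that such an exponential-growth claim implies; the initial mass and the resulting doubling time below are illustrative assumptions, not figures from the text above:

```latex
% Exponential growth with doubling time \tau:
\[
  m(t) \;=\; m_{0}\, 2^{\,t/\tau}
\]
% Illustrative assumption: m_0 \approx 2.5 \times 10^{-23}\,\mathrm{kg}
% (the mass equivalent of one 14 TeV collision). Reaching Earth's mass,
% about 6 \times 10^{24}\,\mathrm{kg}, takes
% \log_2(6 \times 10^{24} / 2.5 \times 10^{-23}) \approx 158 doublings,
% so "perhaps 5 years" corresponds to a doubling time of roughly 11 days.
```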

Continue reading “Five Results on Mini-Black Holes Left Undiscussed by CERN for 3 Years” »

Mar 10, 2011

“Too Late for the Singularity?”

Posted by in categories: existential risks, lifeboat, particle physics

Ray Kurzweil is unique in having seen the unstoppable exponential growth of the computer revolution and extrapolated it correctly toward the attainment of a point he calls the “singularity,” which he projects about 50 years into the future. At that point, the digital revolution will surpass the combined brain power of all human beings.

The theory of the singularity has two flaws: a repairable one and a hopefully not irreparable one. The repairable one has to do with the different use humans make of their brains compared with all other animals on earth, and presumably in the universe. This special use can, however, be clearly defined and, because of its preciousness, be exported. This idea of “galactic export” makes Kurzweil’s program even more attractive.

The second drawback is none of Ray Kurzweil’s doing; it is entirely the fault of the rest of humankind: the half century still needed to reach the singularity may no longer be available.

The reason for that is CERN. Even though it was presented in time with published proofs that its proton-colliding experiment will, with a probability of 8 percent, produce a resident, exponentially growing mini black hole that eats the earth inside out in perhaps five years’ time, CERN prefers not to quote those results, or to try to dismantle them, before acting. Even the call by an administrative court in Cologne to convene the overdue scientific safety conference before continuing was ignored when CERN re-ignited the machine a week ago.

Continue reading “"Too Late for the Singularity?"” »

Feb 25, 2011

Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction

Posted by in categories: complex systems, existential risks, information science, robotics/AI

Strong AI, or Artificial General Intelligence (AGI), stands for self-improving intelligent systems that can engage with theoretical and real-world problems with the flexibility of an intelligent living being but the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastics and cognitive science as well as in traditional artificial intelligence. My aim in this post is to give a general readership a very basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI, such as machine learning algorithms and expert systems, is already heavily utilized on today’s real-world problems: mature machine learning algorithms profitably exploit patterns in customer behaviour, find correlations in scientific data or even predict negotiation strategies, for example [1] [2], and genetic algorithms see similar use. With the next upcoming technology for organizing knowledge on the net, the semantic web, which deals with machine-interpretable understanding of words in the context of natural language, we may be starting to invent the early pieces of technology that will play a role in the future development of AGI. Semantic approaches come from computer science, sociology and current AI research; they promise to describe and ‘understand’ real-world concepts and to let our computers build interfaces to real-world concepts and coherences more autonomously. Actually getting from expert systems to AGI will require approaches that bootstrap self-improving systems, and more research on cognition, but it must also involve crucial security aspects. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].
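To make the “patterns in customer behaviour” point concrete, here is a deliberately tiny, self-contained sketch of a nearest-neighbour classifier in Python; every feature name and number in it is invented for illustration, and real systems use mature libraries and far richer data:

```python
# Toy illustration of "exploiting patterns in customer behaviour":
# a 1-nearest-neighbour classifier over invented customer features.
# Features: (visits per month, average basket value); labels are made up.

training = [
    ((12, 85.0), "stays"),
    ((10, 60.0), "stays"),
    ((8, 70.0), "stays"),
    ((3, 10.0), "churns"),
    ((2, 15.0), "churns"),
    ((1, 30.0), "churns"),
]

def predict(features):
    """Label a new customer by its nearest training example (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    _, label = min(training, key=lambda item: dist(item[0], features))
    return label

print(predict((11, 75.0)))  # -> stays
print(predict((2, 20.0)))   # -> churns
```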

The recent past has brought new kinds of security challenges: DoS attacks, email and PDF worms, and a plethora of other malware, which sometimes even made it into military and other sensitive networks and stole credit cards and private data en masse. These were, and are, among the first serious incidents related to the Internet. But still, all of them followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications and, of course, human flaws (e.g. the emotional response exploited by the “ILOVEYOU” virus). Understanding the security implications of strong AI first means realizing that there probably won’t be any human-predictable hardware, software or interfaces around for long, provided AGI takes off hard enough.

To grasp the new security implications, it is important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is riddled with biases rooted in its biological evolution. Even the application of the simplest mathematical rules can produce complex results that are hard to understand and predict by common sense. Cellular automata, for example, are simple rules for generating a new row of cells based on which cells, produced by the same rule, were present in the previous step. Many of these rules can be encoded in as little as 4 letters (32 bits), yet generate astounding complexity.
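As a concrete illustration, here is a minimal, self-contained Python sketch of an elementary cellular automaton; Wolfram’s Rule 30 is my choice of example, as the text above does not name a specific rule:

```python
# Elementary cellular automaton: a tiny update table generates complex,
# hard-to-predict patterns. Rule 30 is chosen here purely for illustration.

RULE = 30  # the 8 bits of this number are the entire update table

def step(cells):
    """Compute the next row from the current one (edges wrap around)."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width, generations = 63, 30
row = [0] * width
row[width // 2] = 1  # start from a single live cell

for _ in range(generations):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Despite the rule fitting in a single byte, the output is chaotic enough that Rule 30 has even served as a pseudo-random number generator.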

Continue reading “Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction” »

Feb 10, 2011

New Implication of Einstein’s Happiest Thought Is Last Hope for Planet

Posted by in categories: existential risks, particle physics

Einstein saw that clocks located “more downstairs” in an accelerating rocket predictably tick more slowly. This, as he often said, was his “happiest thought.”

However, as everything looks normal on the lower floor, the normal-appearing photons generated there actually have less mass-energy. By general covariance, so do all local masses there, and hence also all associated charges down there.
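The photon part of this is the standard gravitational redshift (a minimal sketch, with g the acceleration, d the height difference and c the speed of light); the step from photons to masses and charges is the new claim at issue:

```latex
% Standard gravitational redshift: a photon emitted on the lower floor is
% received on the upper floor with its frequency, and hence its energy,
% reduced:
\[
  E_{\text{received}} \;\approx\; E_{\text{emitted}}
  \left(1 - \frac{g\,d}{c^{2}}\right)
\]
% The further inference, that local rest masses and charges on the lower
% floor are reduced by the same factor, is this post's claim, not textbook
% physics.
```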

These last two implications were overlooked for a century. “This cannot be,” more than 30 renowned scientists have declared, so that a prestigious experiment with which they have ties can appear innocuous.

This would make an ideal script for movie makers and a bonanza for metrologists. But why the political undertones above? Because, like the bomb, this new crumb from Einstein’s table has a potentially unbounded impact. Only if it gets appreciated within a few days’ time can all human beings, including the Egyptians, breathe freely again.

Continue reading “New Implication of Einstein’s Happiest Thought Is Last Hope for Planet” »

Jan 30, 2011

Summary of My Scientific Results on the LHC-Induced Danger to the Planet

Posted by in categories: existential risks, particle physics

- submitted to the District Attorney of Tübingen, to the Administrative Court of Cologne, to the Federal Constitutional Court (BVerfG) of Germany, to the International Court for Crimes Against Humanity, and to the Security Council of the United Nations -

by Otto E. Rössler, Institute for Physical and Theoretical Chemistry, University of Tübingen, Auf der Morgenstelle A, 72076 Tübingen, Germany

The results of my group represent fundamental research in the fields of general relativity, quantum mechanics and chaos theory. Several independent findings obtained in these disciplines jointly point to a danger, almost as if Nature had set a trap for humankind should it fail to watch out.

MAIN RESULT. It concerns BLACK HOLES and consists of 10 sub-results

Continue reading “Summary of My Scientific Results on the LHC-Induced Danger to the Planet” »

Jan 17, 2011

Stories We Tell

Posted by in categories: complex systems, existential risks, futurism, lifeboat, policy


What do Singularitarianism and popular Western religion have in common? More than you might imagine. A thumbnail evaluation of both ends of the American technocentric intelligence spectrum reveals remarkable similarities in their respective narrative constructions and, naturally, amusing disparities. It would appear that all humans, regardless of our respective beliefs, express goal-oriented hardwiring that demands a neatly constructed story to frame our experiences.

Be you a technophile, you are eagerly awaiting, with perhaps equal parts hope and fear, the moment when artificial general intelligence surpasses human intelligence. You don’t know exactly how this new, more cunning intelligence will react to humans, but you’re fairly certain that humanity might well be in a bit of trouble, or at the very least, have some unique competition.

Be you a technophobe, you shun the trappings of in-depth science and technology involvement, save for superficial interaction with the rudimentary elements of technology, which likely do not extend much further than your home computer, cell phone, automobile, and/or microwave oven. As a technophobe, you might even consider yourself religious, and if you’re a Christian, you might well be waiting for the Second Coming, the Rapture.

Both scenarios lead humanity to ironically similar destinations, in which humankind becomes either marginalized or largely vestigial.

Continue reading “Stories We Tell” »

Nov 26, 2010

“Rogue states” as a source of global risk

Posted by in categories: existential risks, geopolitics

Some countries pose a threat as possible sources of global risk. First of all, we are talking about countries that have developed but poorly controlled military programs, as well as the specific motivation that drives them to create a Doomsday weapon. Usually it is a country under threat of attack and total conquest, in which the control system rests on a kind of irrational ideology.

The most striking example of such a global risk is North Korea’s reported effort to weaponize avian influenza (North Korea trying to weaponize bird flu: http://www.worldnetdaily.com/news/article.asp?ARTICLE_ID=50093), which could lead to the creation of a virus capable of destroying most of Earth’s population.

It does not really matter which comes first: the irrational ideology, the increased secrecy, the excess of military research or the real threat of external aggression. Usually, all these causes go hand in hand.

The result is the appearance of conditions for creating the most exotic defenses. In addition, an excess of military scientists and equipment makes it possible for individual scientists to become, for example, bioterrorists. The high level of secrecy means that the state as a whole does not know what is being done in some of its labs.

Continue reading “"Rogue states" as a source of global risk” »