March 2010 – Lifeboat News: The Blog

Critical Request to CERN Council and Member States on LHC Risks
Sat, 27 Mar 2010

Experts regard safety report on Big Bang Machine as insufficient and one-dimensional

International critics of the high-energy experiments soon to start at the LHC particle accelerator at CERN in Geneva have submitted a request to the Science Ministers of the CERN member states and to the delegates of the CERN Council, CERN’s supreme controlling body.

The paper states that several risk scenarios (scenarios that must be described as global or existential risks) cannot currently be excluded. Under present conditions, the critics feel compelled to speak out against operating the LHC.

The submission includes assessments from experts in fields markedly missing from the physicist-only LSAG safety report: risk assessment, law, ethics and statistics. Further weight is added by the fact that these experts all hold university positions – at Griffith University, the University of North Dakota and Oxford University respectively. In particular, the critics charge that CERN’s official safety report lacks independence – all of its authors have a prior interest in the LHC running – and that it was written by physicists alone, when modern risk-assessment guidelines recommend involving risk experts and ethicists as well.

As a precondition of safety, the request calls for a neutral and multi-disciplinary risk assessment, as well as additional astrophysical experiments – Earth-based and in the atmosphere – to provide better empirical verification of the alleged comparability between particle collisions under the extreme artificial conditions of the LHC experiment and relatively rare natural high-energy particle collisions: “Far from copying nature, the LHC focuses on rare and extreme events in a physical set up which has never occurred before in the history of the planet. Nature does not set up LHC experiments.”
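For background, the comparability claim is usually framed in terms of collision energy: a cosmic-ray proton hitting a stationary nucleus must be extremely energetic to match the LHC’s head-on center-of-mass energy. Below is a minimal kinematics sketch in Python; the formula is the standard fixed-target relation, not something taken from the request, and the 1e17 eV figure is just an illustrative cosmic-ray energy.

```python
import math

M_P = 0.938e9  # proton rest energy in eV

def sqrt_s_fixed_target(e_lab_ev: float) -> float:
    """Center-of-mass energy sqrt(s), in eV, for a proton with lab energy
    e_lab_ev striking a proton at rest: s = 2*m_p*E_lab + 2*m_p**2."""
    return math.sqrt(2 * M_P * e_lab_ev + 2 * M_P ** 2)

# A 1e17 eV cosmic-ray proton on a stationary proton:
print(sqrt_s_fixed_target(1e17) / 1e12)  # ~13.7 TeV
# ...comparable to the LHC design energy of 14 TeV (two 7 TeV beams head-on).
```

The critics’ argument, of course, is that matching energies on paper is not the same as demonstrating that the two situations are physically equivalent, which is why they ask for additional empirical checks.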

Even under the greatly improved safety conditions proposed above, big jumps in energy – a factor of three over present records is currently planned – should be avoided on principle; previous results should be carefully analyzed before each increase in energy.

The concise “Request to CERN Council and Member States on LHC Risks” (PDF with hyperlinks to the studies described) was prepared by several critical groups and is supported by well-known critics of the planned experiments:

http://lhc-concern.info/wp-content/uploads/2010/03/request-t…5;2010.pdf

The answer received so far does not engage with these arguments and studies; it merely repeats that, from the operators’ side, everything appears sufficient, as endorsed by a Nobel Prize winner in physics. The LHC restart, with record collisions at three times the previous energy, is presently scheduled for March 30, 2010.

An official, detailed and readily understandable paper, with correspondence and many scientific sources, by ‘ConCERNed International’ and ‘LHC Kritik’:

http://lhc-concern.info/wp-content/uploads/2010/03/critical-…ed-int.pdf

More info:
http://lhc-concern.info/

Risk intelligence
Tue, 23 Mar 2010

A few months ago, my friend Benjamin Jakobus and I created an online “risk intelligence” test at http://www.projectionpoint.com/. It consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. We calculate your risk intelligence quotient (RQ) on the basis of your estimates. So far, over 30,000 people have taken our test, and we’re currently writing up the results for some peer-reviewed journals.

Now we want to take things a step further and see whether our measure correlates with the ability to make accurate estimates of future events. To this end we’ve created a “prediction game” at http://www.projectionpoint.com/prediction_game.php. The basic idea is the same: we provide you with a bunch of statements, and your task is to say how likely you think it is that each one is true. The difference is that these statements refer not to known facts but to future events. Unlike in the first test, nobody yet knows whether these statements are true or false. For most of them, we won’t know until the end of the year 2010.

For example, how likely do you think it is that this year will be the hottest on record? If you think this is very unlikely you might select the 10% category. If you think it is quite likely, but not very likely, you might put the chances at 60% or 70%. Selecting the 50% category would mean that you had no idea how likely it is.
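The post does not spell out how RQ is computed, so the following is only a sketch of the general kind of calibration scoring such a test could use: the Brier score plus a per-category calibration table. The function names and toy data here are illustrative assumptions, not projectionpoint.com’s actual method.

```python
from collections import defaultdict

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and what actually
    happened (1 = statement true, 0 = false). Lower is better; answering
    50% on everything scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

def calibration_table(forecasts, outcomes):
    """For each probability category used, the observed share of true
    statements; perfect calibration makes each value match its key."""
    buckets = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        buckets[p].append(o)
    return {p: sum(os) / len(os) for p, os in sorted(buckets.items())}

# Toy data: eight statements, with probabilities given in 10% steps.
forecasts = [0.1, 0.1, 0.5, 0.7, 0.9, 0.9, 0.9, 0.3]
outcomes  = [0,   0,   1,   1,   1,   1,   0,   0]
print(brier_score(forecasts, outcomes))      # 0.16
print(calibration_table(forecasts, outcomes))
```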

This is ongoing research, so please feel free to comment, criticise or make suggestions.

Reduction of human intelligence as global risk
Fri, 12 Mar 2010

Another risk is the loss of human rationality while human life is preserved. In any society there are many people with limited cognitive abilities, and most achievements are made by a small number of talented people. Genetic and social degradation, a declining level of education, and the loss of logical-reasoning skills could lead to a temporary decrease in the intelligence of particular groups of people. But as long as humanity’s population remains very large, this is not so bad, because there will always be enough intelligent people. A significant drop in population after a non-global disaster could exacerbate the problem, and the low intelligence of the remaining people would reduce their chances of survival. Of course, one can even imagine the absurd scenario in which people degrade so far that, by the evolutionary path, a new species arises from us that lacks full-fledged intelligence – and that this species later evolves reason and develops a new intelligence anew.
More dangerous is a decline of intelligence caused by the spread of technological contaminants (or by the use of a certain kind of weapon). For example, consider the constantly growing global contamination by arsenic, which is used in various technological processes. Sergio Dani wrote about this in his article “Gold, coal and oil”: http://sosarsenic.blogspot.com/2009/11/gold-coal-and-oil-reg…is-of.html, http://www.medical-hypotheses.com/article/S0306-9877(09)00666-5/abstract
Arsenic released during the mining of gold remains in the biosphere for millennia. Dani links arsenic to Alzheimer’s disease. In another paper he shows that increasing concentrations of arsenic lead to an exponential increase in the incidence of Alzheimer’s disease. He believes that humans are particularly vulnerable to arsenic poisoning because they have large brains and long lifespans. If, as Dani suggests, people adapt to high levels of arsenic in the course of evolution, the adaptation will come at the cost of smaller brains and shorter life expectancy, with the result that human intellect will be lost.
Besides arsenic, contamination by many other neurotoxic substances is occurring: CO, CO2, methane, benzene, dioxin, mercury, lead, and so on. Although the level of pollution from each of them separately is below health standards, the sum of their impacts may be larger (see the sketch below). One proposed cause of the fall of the Roman Empire was the wholesale poisoning of its citizens (though not of the barbarians) by lead from water pipes. The Romans, of course, could have had no knowledge of these remote and unforeseen consequences; we, likewise, may not know about many consequences of our own activities.
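One concrete way to express the “sum of the impacts” point is toxicology’s hazard index: the sum of each exposure divided by that substance’s individual limit. A minimal sketch follows; the limits and exposures below are made-up illustrative numbers, not real data.

```python
def hazard_index(exposures: dict, limits: dict) -> float:
    """Sum of exposure/limit ratios. A value above 1 means the combined
    burden is excessive even though every single pollutant may be below
    its own standard."""
    return sum(exposures[k] / limits[k] for k in exposures)

# Illustrative only: each pollutant sits at 40% of its individual limit...
limits    = {"lead": 10.0, "mercury": 2.0, "benzene": 5.0}  # arbitrary units
exposures = {"lead": 4.0,  "mercury": 0.8, "benzene": 2.0}
print(hazard_index(exposures, limits))  # 1.2 > 1: combined burden too high
```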
Dementia is also promoted by alcohol and most recreational drugs, as well as by many medicines (some heartburn remedies, for example, list dementia as a side effect on their package inserts). Rigid ideological systems, or memes, can have a similar dulling effect.
A number of infections, particularly prion diseases, also lead to dementia.
Despite all this, the average IQ of people has been growing, as has life expectancy.

Why AI could fail?
Wed, 10 Mar 2010

AI is our best hope for long-term survival. If we fail to create it, that failure will happen for some reason. Here I suggest a complete list of possible causes of failure, though I do not believe in them. (I was inspired by Vernor Vinge’s article “What if the Singularity does not happen?”)

I think most of these points are wrong, and that AI will eventually be created.

Technical reasons:
1) Moore’s Law will stop for physical reasons before hardware powerful and cheap enough for artificial intelligence can be built.
2) Silicon processors are less efficient than neurons for creating artificial intelligence.
3) The AI problem cannot be algorithmically parallelized, and as a result any AI will be extremely slow (see the sketch after this list).
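Point 3 is essentially Amdahl’s law: if some fraction of a computation is inherently serial, adding processors yields rapidly diminishing returns. A minimal illustration follows; the 90% parallel fraction is an arbitrary example, not a claim about real AI workloads.

```python
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    """Maximum speedup when only `parallel_fraction` of the work can be
    spread across `processors`; the remainder stays serial."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

# If only 90% of an AI workload parallelizes, even a million cores give
# less than a 10x speedup:
print(amdahl_speedup(0.9, 1_000_000))  # ~9.9999
```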

Philosophy:
4) Human beings use some method of processing information that is essentially inaccessible to algorithmic computers, as Penrose believes. (But we could still harness this method using bioengineering techniques.) Generally, a final recognition of the impossibility of creating artificial intelligence would be tantamount to recognizing the existence of the soul.
5) A system cannot create a system more complex than itself, so people cannot create artificial intelligence, since all the proposed solutions are too simple. That is, AI is possible in principle, but people are too stupid to build it. In fact, one reason for past failures in the creation of artificial intelligence is that people underestimated the complexity of the problem.
6) AI is impossible, because any sufficiently complex system perceives the meaninglessness of existence and stops.
7) All possible ways to optimize are exhausted. AI has no fundamental advantage over the human-machine interface and only a limited scope of use.
8) A human being in a body possesses the maximum attainable level of common sense; any disembodied AI is either ineffective or merely a model of a person.
9) AI is created, but there are no problems it could or should address. All problems have either been solved by conventional methods or proven uncomputable.
10) AI is created, but it is not capable of recursive self-optimization, since that would require some radically new ideas which never appear. As a result, AI exists either as a curiosity or in limited specific applications, such as automatic drivers.
11) The very idea of artificial intelligence is flawed, because it has no precise definition, or is even an oxymoron, like “artificial natural.” As a result, specific goal-directed systems or models of man are developed, but no universal artificial intelligence.
12) There is an upper limit to the complexity of systems beyond which they become chaotic and unstable, and it only slightly exceeds the intellect of the most intelligent people. Progress toward AI slowly approaches this complexity threshold and stalls.
13) The bearer of intelligence is qualia. At our level of intelligence there must be many events that are indescribable and unknowable, yet a superintelligence should understand them by definition; otherwise it is not a superintelligence but simply a fast intellect.

Economic:
14) The growth of computer programs leads to an increase in the number of failures, failures so spectacular that software automation has to be abandoned. This causes a drop in demand for powerful computers and halts Moore’s Law before it reaches its physical limits. The same growth in complexity and failure rates makes the creation of AI difficult.
15) AI is possible, but it gives no significant advantage over a human in any respect: not in the quality of results, nor in speed, nor in the cost of computation. For example, a simulation of a human costs a billion dollars and has no idea how to self-optimize. Meanwhile, people find ways to amplify their own intellectual abilities by injecting stem-cell precursors of neurons, which further increases humans’ competitive advantage.
16) No one engages in the development of AI, because it is considered impossible; this becomes a self-fulfilling prophecy. AI is pursued only by cranks, who have neither sufficient intellect nor money. A project on the scale of the Manhattan Project could solve the problem of AI, but no one undertakes one.
17) The technology of uploading consciousness into a computer develops so far that it suffices for all the practical purposes once associated with AI, so there is no need to create an algorithmic AI. The upload is done mechanically, through scanning, while still no one understands what happens inside the brain.

Political:
18) AI systems are prohibited or severely restricted for ethical reasons, so that people can still feel themselves above all else. Perhaps specialized AI systems are permitted in military and aerospace applications.
19) AI is prohibited for safety reasons, as it represents too great a global risk.
20) AI emerges and establishes its authority over the Earth, but does not show itself, except that it prevents others from developing their own AI projects.
21) AI appears, but not in the form that was imagined, and therefore no one calls it AI (for example, the distributed intelligence of social networks).

Reflections on Avatar
Sat, 06 Mar 2010

I recently watched James Cameron’s Avatar in 3D. It was an enjoyable experience in some ways, but overall I left dismayed on a number of levels.

It was enjoyable to watch the lush three-dimensional animation and motion-capture-controlled graphics. I’m not sure that 3D will take over – as many now expect – until we get rid of the glasses (and there are emerging technologies to do that, although the 3D effect is not yet quite as good), but it was visually pleasing.

While I’m being positive, I was pleased to see Cameron’s positive view of science, in that the scientists are the “good” guys (or at least one good gal) with noble intentions of learning the wisdom of the Na’vi natives and of negotiating a diplomatic solution.

The Na’vi were not completely technology-free. They basically used the type of technology that Native Americans used hundreds of years ago – same clothing, domesticated animals, natural medicine, and bows and arrows.

They were in fact exactly like Native Americans. How likely is that? Life on this distant moon in another star system has evolved creatures that look essentially the same as earthly creatures, with very minor differences (dogs, horses, birds, rhinoceros-like animals, and so on), not to mention humanoids that are virtually the same as humans here on Earth. That’s quite a coincidence.

Cameron’s conception of technology a hundred years from now was incredibly unimaginative, even by Hollywood standards. For example, the munitions that were supposed to blow up the tree of life looked like they were used in World War II (maybe even World War I). Most of the technology looked primitive, even by today’s standards. The wearable exoskeleton robotic devices were supposed to be futuristic, but these already exist and are beginning to be deployed. The one advanced technology was the avatar technology itself. But in that sense, Avatar is like the world of the movie AI, where they had human-level cyborgs but nothing else had changed: AI featured 1980s cars and coffee makers. As for Avatar, are people still going to use computer screens in a hundred years? Are they going to drive vehicles?

I thought the story and script were unimaginative, one-dimensional, and derivative. The basic theme was “evil corporation rapes noble natives.” And while that is a valid theme, it was handled without the least bit of subtlety, complexity, or human ambiguity. The basic story was taken right from Dances with Wolves. And how many (thousands of) times have we seen a final battle scene that comes down to a fight between the hero and the anti-hero, passing through various incredible stages – on a flying airplane, in the trees, on the ground, and so on? And (spoiler alert) how predictable was it that the heroine would pull herself free at the last second and save the day?

None of the creatures was especially original. The flying battles were like Harry Potter’s Quidditch, and the flying creatures were derivative of Potter’s, down to the hero mastering flight on the back of a big bird creature. There was some concept of networked intelligence, but it was not especially coherent. The philosophy was the basic Hollywood religion about the noble cycle of life.

The movie was fundamentally anti-technology. Yes, it is true, as I pointed out above, that the natives use tools, but these are not the tools we associate with modern technology. And it is true that the Sigourney Weaver character and her band of scientists intend to help the Na’vi with their human technology (much like international aid workers might do today in developing nations), but we never actually see that happen. I got the sense that Cameron was loath to show modern technology doing anything useful. So even when Weaver’s scientist becomes ill, the Na’vi attempt to heal her only with the magical life force of the tree of life.

In Cameron’s world, Nature is always wise and noble, which indeed it can be, but he fails to show its brutal side. The only thing that was brutal, crude, and immoral in the movie was the “advanced” technology. Of course, one could say that it was the user of the technology that was immoral (the evil corporation), but that is the only role for technology in the world of Avatar.

In addition to being evil, the technology of the Avatar world of over 100 years from now is also weaker than nature, so the rhinoceros-like creatures are able to defeat the tanks circa 2100. It was perhaps a satisfying spectacle to watch, but how realistic is that? The movie shows the natural creatures communicating with each other with some kind of inter-species messaging and also showed the tree of life able to remember voices. But it is actually real-world technology that can do those things right now. In the Luddite world of this movie, the natural world should and does conquer the brutish world of technology.

In my view, there is indeed a crudeness to first-industrial-revolution technology. The technology that will emerge in the decades ahead will be altogether different. It will enhance the natural world while it transcends its limitations. Indeed, it is only through the powers of exponentially growing info, bio, and nano technologies that we will be able to overcome the problems created by first-industrial-revolution technologies such as fossil fuels. This idea of technology transcending natural limitations was entirely lost in Cameron’s vision. Technology was just something crude and immoral, something to be overcome, something that Nature does succeed in overcoming.

It was visually pleasing, although even here I thought it could have been better. Some of the movement of the blue natives was not quite right; it looked like the unrealistic movement one sees of characters in video games, with jumps that show poor modeling of gravity.

The ending (spoiler alert) was a complete throwaway. The Na’vi defeat the immoral machines and their masters in a big battle, but if this mineral the evil corporation was mining is indeed worth a fortune per ounce, they would presumably come back with a more capable commander. Yet we hear Jake’s voice at the end saying that the mineral is no longer needed. If that’s true, then what was the point of the entire battle?

The Na’vi are presented as the ideal society, but consider how they treat their women. The men get to “pick” their women, and Jake is invited to take his pick once he earns his place in the society. Jake makes the heroine his wife, knowing full well that his life as a Na’vi could be cut off at any moment. And what kind of child would they have? Well, perhaps these complications are too subtle for the simplistic Avatar plot.
