Archive for the ‘futurism’ category: Page 1176

Jul 12, 2010

The True Cost of Ignoring Nonhumans

Posted in categories: biological, ethics, futurism, policy

Posted by Dr. Denise L Herzing and Dr. Lori Marino, Human-Nonhuman Relationship Board

Over the millennia, humans and the rest of nature have coexisted in various relationships. However, the intimate and interdependent nature of our relationship with other beings on the planet has recently been brought to light by the oil spill in the Gulf of Mexico. This ongoing environmental disaster is a prime example of “profit over principle” regarding non-human life. The spill threatens not only the reproductive viability of all flora and fauna in the affected ecosystems but also complex and sensitive non-human cultures, like those we now recognize in dolphins and whales.

Although science has, for decades, documented the links and interdependence of ecosystems and species, the ethical dilemma now facing humans has reached a critical level. For too long we have failed to recognize the true cost of our lifestyles and of prioritizing profit over the health of the planet and the nonhuman beings we share it with. If ever there was a time, this is a wake-up call for humanity and a call to action. If humanity is to survive, we need to make an urgent and long-term commitment to the health of the planet. The oceans, our food sources, and the very oxygen we breathe may depend on our choices over the next 10 years.

And humanity’s survival is inextricably linked to that of the other beings we share this planet with. We need a new ethic.

Continue reading “The True Cost of Ignoring Nonhumans” »

Jul 6, 2010

What’s your idea to BodyShock the Future?

Posted in categories: biotech/medical, futurism

I’m working on this project with the Institute for the Future — calling on voices everywhere for ideas to improve the future of global health. It would be great to get some visionary Lifeboat ideas entered!

INSTITUTE FOR THE FUTURE ANNOUNCES BODYSHOCK:
CALL FOR ENTRIES ON IDEAS TO TRANSFORM LIFESTYLES AND THE HUMAN BODY TO IMPROVE HEALTH IN THE NEXT DECADE

“What can YOU envision to improve and reinvent health and well-being for the future?” Anyone can enter, anyone can vote, anyone can change the future of global health.

Continue reading “What's your idea to BodyShock the Future?” »

Jun 12, 2010

My presentation at the Humanity+ Summit

Posted in categories: futurism, robotics/AI

During the lunch break I am present virtually in the summit hall as a face on a Skype account — I didn’t get a visa and am staying in Moscow. Ironically, my situation resembles what I am speaking about: the risk from a remote AI created by aliens millions of light years from Earth and sent to us via radio signals. The main difference is that they communicate one way, while I have duplex mode.

This is my video presentation on YouTube:
Risks of SETI, for Humanity+ 2010 summit

Jun 11, 2010

H+ Conference and the Singularity Faster

Posted in categories: futurism, robotics/AI

We can only see a short distance ahead, but we can see plenty there that needs to be done.
—Alan Turing

As a programmer, I look at events like the H+ Conference this weekend in a particular way. I see all of their problems as software problems: not just the code for AI and friendly AI, but also the code for DNA manipulation. It seems that the biggest challenge for the futurist movement is to focus less on writing English and more on getting programmers working together productively.

I start the AI chapter of my book with the following question: imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus that same 1,000 people working on one encyclopedia. Which will be better? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not sufficiently adopted free software and shared codebases. For example, I believe there are more than enough PhDs working on computer vision, but they are spread across 200+ different codebases, plus countless proprietary ones. Simply put, there is no computer vision codebase with critical mass.

Continue reading “H+ Conference and the Singularity Faster” »

Jun 8, 2010

Transitions

Posted in category: futurism

King Louis XVI’s entry in his personal diary for that fateful day of July 14, 1789 suggests that nothing important had happened. He did not know that the events of the day, the attack upon the Bastille, meant that the revolution was under way, and that the world as he knew it was essentially over. Fast forward to June 2010: a self-replicating biological organism (a transformed Mycoplasma mycoides bacterium) has been created in a laboratory by J. Craig Venter and his team. Yes, the revolution has begun. Indeed, the preliminaries have been going on for several years; it’s just that … um, well, have we been wide awake?

Ray Kurzweil’s singularity might be 25 years into the future, but sooner, just a few years from now, we’ll have an interactive global network that some refer to as the ‘global brain’: Web 3.0. I imagine no one knows exactly what will come out of all this, but I expect we’ll find that the whole is more than, and different from, the sum of its parts. Remember complexity theory. How about the ‘butterfly effect’ of chaos theory? And much more that is not explainable by presently known theories. I expect surprises, to say the least.

I am a retired psychiatrist, not a scientist. We each have a role to enact in this drama/comedy that we call life, and yes, our lives have meaning. Meaning! For me, life is not a series of random events or events brought about by ‘them,’ but rather an unfolding drama/comedy with an infinite number of possible outcomes. We don’t know its origins or its drivers. Do we even know where our visions come from?

So, what is my vision and what do I want? How clearly do I visualize what I want? Am I passionate about what I want or simply lukewarm? How much am I prepared to risk in pursuit of what I want? Do I reach out for what I want directly, or do I get what I want indirectly by trying to serve two masters, so to speak? If the former, I practice psychological responsibility; if the latter, I do not. An important distinction. The latter situation suggests an unresolved dilemma, common enough. Who among us can claim to be without one?

Continue reading “Transitions” »

Jun 5, 2010

Friendly AI: What is it, and how can we foster it?

Posted in categories: complex systems, ethics, existential risks, futurism, information science, policy, robotics/AI

Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

Continue reading “Friendly AI: What is it, and how can we foster it?” »

Apr 21, 2010

Software and the Singularity

Posted in categories: futurism, robotics/AI

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here is my section entitled “Software and the Singularity”. I hope you find this food for thought and I appreciate any feedback.


Futurists talk about the “Singularity”, the time when computational capacity will surpass the capacity of human intelligence. Ray Kurzweil predicts it will happen in 2045. Therefore, according to its proponents, the world will be amazing then. The flaw with such date estimates, other than the fact that they are always prone to extreme error, is that continuous learning is not yet part of the software foundation. Any AI code lives at the fringes of the software stack and is either proprietary or written by small teams of programmers.

I believe the benefits inherent in the singularity will arrive as soon as our software becomes “smart”, and we don’t need to wait for any further Moore’s-law progress for that to happen. Computers today can do billions of operations per second, like adding 123,456,789 and 987,654,321. If you could do that calculation in your head in one second, it would take you about 30 years to do the billion that your computer can do in that same second.
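
A quick back-of-the-envelope check of that figure, as a sketch (the only input is the one-billion-operations claim above):

```python
# Sanity check of the "about 30 years" figure: how long would it take to do
# one billion one-second mental additions, one after another?
operations = 1_000_000_000                   # a billion additions
seconds_per_year = 60 * 60 * 24 * 365.25     # seconds in an average year

years = operations / seconds_per_year
print(f"About {years:.1f} years")            # roughly 31.7 years, i.e. ~30 years
```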

Even if you don’t think computers have the necessary hardware horsepower today, understand that in many scenarios the size of the input is the primary factor driving the processing power required for the analysis. In image recognition, for example, the amount of work required to interpret an image is mostly a function of the size of the image. Each step in the image recognition pipeline, like the processes that take place in our brain, dramatically reduces the amount of data passed on from the previous step. At the beginning of the analysis might be a one-million-pixel image, requiring 3 million bytes of memory. At the end of the analysis is the conclusion that you are looking at your house, a concept that requires only tens of bytes to represent. The first step, working on the raw image, requires the most processing power, so it is the image resolution (and frame rate) that sets the requirements, and those are values that are trivial to change. No one has yet shown robust vision recognition software running at any speed, on any size of image!
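
As a rough illustration of that data reduction, here is a minimal sketch; the stage names and byte counts are invented for illustration and are not measurements from any real pipeline:

```python
# Illustrative only: how the volume of data might shrink at each stage of a
# recognition pipeline, from raw pixels down to a final concept.
pipeline = [
    ("raw 1-megapixel RGB image", 1_000_000 * 3),   # ~3 million bytes of pixels
    ("edge / feature map",        200_000),          # assumed intermediate size
    ("object descriptors",        2_000),            # assumed
    ("final concept: 'my house'", 20),               # tens of bytes
]

for stage, size_bytes in pipeline:
    print(f"{stage:<28} {size_bytes:>10,} bytes")
```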

Continue reading “Software and the Singularity” »

Apr 18, 2010

Ray Kurzweil to keynote “H+ Summit @ Harvard — The Rise Of The Citizen Scientist”

Posted in categories: biological, biotech/medical, business, complex systems, education, events, existential risks, futurism, geopolitics, human trajectories, information science, media & arts, neuroscience, robotics/AI

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

Following the inaugural conference in Los Angeles in December 2009, the summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference is being held on the East Coast, at Harvard University’s prestigious Science Hall, on June 12–13. Ray Kurzweil, futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of the SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions with regard to the research and development of cures, and propose solutions to alleviate the negative effects of both.

Continue reading “Ray Kurzweil to keynote "H+ Summit @ Harvard — The Rise Of The Citizen Scientist"” »

Apr 2, 2010

Technological Singularity and Acceleration Studies: Call for Papers

Posted in category: futurism

8th European conference on Computing And Philosophy — ECAP 2010
Technische Universität München
4–6 October 2010

Submission deadline for extended abstracts: 7 May 2010
Submission form

Theme

Historical analysis of a broad range of paradigm shifts in science, biology, history, technology, and in particular in computing technology, suggests an accelerating rate of evolution, however measured. John von Neumann projected that the consequence of this trend may be an “essential singularity in the history of the race beyond which human affairs as we know them could not continue”. This notion of singularity coincides in time and nature with Alan Turing’s (1950) and Stephen Hawking’s (1998) expectation that machines will exhibit intelligence on a par with the average human no later than 2050. Irving John Good (1965) and Vernor Vinge (1993) expect the singularity to take the form of an ‘intelligence explosion’, a process in which intelligent machines design ever more intelligent machines. Transhumanists suggest a parallel or alternative, explosive process of improvements in human intelligence. And Alvin Toffler’s Third Wave (1980) forecasts “a collision point in human destiny” the scale of which, in the course of history, is on a par only with the agricultural revolution and the industrial revolution.

We invite submissions describing systematic attempts at understanding the likelihood and nature of these projections. In particular, we welcome papers critically analyzing the following issues from philosophical, computational, mathematical, scientific, and ethical standpoints:

  • Claims and evidence for acceleration
  • Technological predictions (critical analysis of past and future)
  • The nature of an intelligence explosion and its possible outcomes
  • The nature of the Technological Singularity and its outcome
  • Safe and unsafe artificial general intelligence and preventative measures
  • Technological forecasts of computing phenomena and their projected impact
  • Beyond the ‘event horizon’ of the Technological Singularity
  • The prospects of transhuman breakthroughs and likely timeframes

Amnon H. Eden, School of Computer Science & Electronic Engineering, University of Essex, UK and Center For Inquiry, Amherst NY

Mar 23, 2010

Risk intelligence

Posted in categories: education, events, futurism, geopolitics, policy, polls

A few months ago, my friend Benjamin Jakobus and I created an online “risk intelligence” test at http://www.projectionpoint.com/. It consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. We calculate your risk intelligence quotient (RQ) on the basis of your estimates. So far, over 30,000 people have taken our test, and we’re currently writing up the results for some peer-reviewed journals.
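
The post doesn’t spell out how RQ is computed, so here is only a minimal, generic sketch of scoring stated probabilities against known outcomes (a Brier-style score); the statements and answers are invented, and this is not claimed to be projectionpoint.com’s actual formula:

```python
# A generic Brier-style score of probability estimates. This is NOT necessarily
# the RQ formula used by projectionpoint.com; the data below is invented.
estimates = [0.9, 0.1, 0.6, 0.5]   # stated probabilities that each statement is true
outcomes  = [1,   0,   0,   1]     # 1 = the statement turned out true, 0 = false

brier = sum((p - o) ** 2 for p, o in zip(estimates, outcomes)) / len(estimates)
print(f"Brier score: {brier:.3f}  (0 is perfect; always answering 50% scores 0.25)")
```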

Now we want to take things a step further and see whether our measure correlates with the ability to make accurate estimates of future events. To this end, we’ve created a “prediction game” at http://www.projectionpoint.com/prediction_game.php. The basic idea is the same: we provide you with a bunch of statements, and your task is to say how likely you think it is that each one is true. The difference is that these statements refer not to known facts but to future events. Unlike in the first test, nobody yet knows whether these statements are true or false. For most of them, we won’t know until the end of 2010.

For example, how likely do you think it is that this year will be the hottest on record? If you think this is very unlikely, you might select the 10% category. If you think it is quite likely, but not very likely, you might put the chances at 60% or 70%. Selecting the 50% category would mean that you had no idea how likely it is.

This is ongoing research, so please feel free to comment, criticise or make suggestions.