April 2015 – Lifeboat News: The Blog https://lifeboat.com/blog Safeguarding Humanity

They’re Alive! Watch These Mini 3D Printed Organs Beat Just Like Hearts https://lifeboat.com/blog/2015/04/theyre-alive-watch-these-mini-3d-printed-organs-beat-just-like-hearts Thu, 30 Apr 2015 22:00:44 +0000 http://lifeboat.com/blog/?p=14125 By — SingularityHub

There’s something almost alchemical going on at the Wake Forest Institute for Regenerative Medicine. Scientists there have genetically transformed skin cells into heart cells and used them to 3D print mini-organs that beat just like your heart. Another darker organoid fused to a mini-heart mimics your liver.

The work, developed by Anthony Atala and his Wake Forest team for the “Body on a Chip” project, aims to simulate bodily systems by microfluidically linking miniature organs—hearts, livers, blood vessels, and lungs—so that researchers can test new drug treatments and chemicals on them, or study the effects of viruses.

Read more

]]>
Should We Arm the International Space Station With Lasers to Destroy Space Junk? https://lifeboat.com/blog/2015/04/should-we-arm-the-international-space-station-with-lasers-to-destroy-space-junk Thu, 30 Apr 2015 10:00:58 +0000 http://lifeboat.com/blog/?p=14123 By — SingularityHub

If you look closely enough, Earth has rings. NASA estimates there are some 500,000 pieces of space debris in orbit. Space junk, traveling at up to ten times the speed of a bullet, endangers satellites and spacecraft—and it is very, very hard to remove. A team of scientists, however, thinks it has a way: lasers.

A recent paper from Japan’s RIKEN institute proposes using a telescope on the International Space Station (ISS) to track small bits of space junk. A laser mounted on the telescope would target and zap the junk, sending it crashing into the atmosphere, where it would vaporize—no longer a threat to humans or satellites.  Read more

]]>
Robotic EQ https://lifeboat.com/blog/2015/04/robotic-eq https://lifeboat.com/blog/2015/04/robotic-eq#comments Wed, 29 Apr 2015 22:14:20 +0000 http://lifeboat.com/blog/?p=14143 Can an emotional component of artificial intelligence be a benefit?

Robots with passion! Emotional artificial intelligence! These concepts have appeared in books and movies lately; a recent example is the movie Ex Machina. Now, I’m not an AI expert, and cannot speak to the technological challenges of developing an intelligent machine, let alone an emotional one. I do, however, know a bit about problem solving, and that relates to both intelligence and emotions. It is this emotional component of problem solving that leads me to speculate on the potential implications for humanity if powerful AIs were to have human emotions.

Why the question about emotions? In a roundabout way, it has to do with how we observe and judge intelligence. The popular way to measure intelligence in a computer is the Turing test: if a computer can fool a person, through conversation, into thinking it is a person, then it has human-level intelligence. But we know the Turing test by itself is insufficient as a true test of intelligence. Sounding human in dialog is not the primary method we use to gauge intelligence in other people or in other species. Problem solving seems to be a more reliable test, whether through IQ tests built around problems or through direct real-world problem solving.

As an example of problem solving, we judge how intelligent a rat is by how fast it can navigate a maze to get to food. Let’s look at this with regard to the first few steps in problem solving.

Fundamental to any problem solving is recognizing that a problem exists. In this example, the rat is hungry. It desires to be full. It can observe its current state (hungry), compare it with its desired state (full), and determine that a problem exists. It is now motivated to take action.

Desire is intimately tied to emotion. Since it is desire that lets us determine whether a problem exists, one can infer that emotions enable the determination that a problem exists. Emotion is a motivator for action.

Once a problem is determined to exist, it is important to define it. In this simple example, that step isn’t very complex: the rat desires food, food is not present, and its options for finding food are constrained by the confines of the maze. But the rat may have other things going on. It might be colder than it would prefer, which presents another problem. When confronted with multiple problems, the rat must prioritize which to address first. Problem prioritization is again in the realm of desires and emotions. The rat might be mildly unhappy with the temperature but very unhappy with its hunger, in which case one would expect it to maximize its happiness by solving the food problem before curling up to solve its temperature problem. Emotions are again in play, driving the behavior we observe as action.
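The rat’s decision process above can be sketched as toy code. This is my own illustration, not anything from the research I’ve described: the state names and the numeric “unhappiness” values are invented, standing in for whatever emotional signals a real animal uses.

```python
# A toy sketch of emotion-driven problem detection and prioritization:
# a problem exists wherever the current state falls short of the desired
# state, and the biggest "unhappiness gap" gets addressed first.

def detect_problems(current, desired):
    """Return (need, gap) pairs wherever current falls short of desired."""
    return [(need, desired[need] - level)
            for need, level in current.items()
            if level < desired[need]]

def prioritize(problems):
    """Address the largest gap (the strongest negative emotion) first."""
    return sorted(problems, key=lambda p: p[1], reverse=True)

# Hypothetical rat: very hungry, only mildly cold.
current = {"fullness": 0.1, "warmth": 0.7}
desired = {"fullness": 1.0, "warmth": 0.9}

queue = prioritize(detect_problems(current, desired))
print(queue)  # fullness's larger gap puts it ahead of warmth
```

The point of the sketch is that the `desired` dictionary—the machine’s equivalent of emotion—has to come from somewhere; without it, no problem is ever detected at all.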

The next steps in problem solving are to generate and implement a solution. In our rat example, the rat will most likely determine whether this maze is similar to ones it has seen in the past and run it as fast as it can to get to the food. There is not a lot of emotion involved in these steps, with the possible exception of happiness if it recognizes the maze. If we look at the problems people face, however, emotion pervades the process of developing and implementing solutions. In the real world, problem solving almost always involves working with other people, because they are either the cause of the problem, key to its solution, or both. These people bring a great deal of emotion with them. Most problems require negotiation to solve, and negotiation is by its nature charged with emotion. To be effective at problem solving, a person has to be able to interpret and understand the wants and desires (emotions) of others. This sounds a lot like empathy.

Now, let’s apply the emotional part of problem solving to artificial intelligence. The step of determining whether a problem exists doesn’t require emotion if the machine in question is a thermostat or a Roomba. A thermostat doesn’t have its own desired temperature to maintain; its desired temperature is determined by a human and given to it. That human’s desires are based on a combination of preferences learned from personal experience and hardwired preferences shaped by millions of years of evolution. The thermostat is simply a tool.

Now, the whole point of an AI, especially an artificial general intelligence, is that it is not a thermostat. It is supposed to be intelligent. It must be able to solve problems in a real-world environment that involves people. It has to be able to determine that problems exist and then prioritize those problems without asking a human for help. It has to interact socially with people, identifying and understanding their motivations and emotions in order to develop and implement solutions. And it has to make these choices, which are based on desires, without the benefit of the millions of years of evolution that shaped ours. If we want it to truly pass for human-level intelligence, it seems we’ll have to give it our best preferences and desires to start with.

A machine that cannot choose its goals cannot change its goals. A machine without that choice, if given the goal of, say, maximizing pin production, will creatively and industriously attempt to convert the entire planet into pins. Such a machine cannot question instructions that are illegal or unethical. Here lies the dilemma: which is more dangerous, the risk that someone will program an AI that has no choice to do bad things, or the risk that an AI will decide to do bad things on its own?

No doubt about it, this is a tough call. I’m sure some AIs will be built with minimal or no preferences, with the intent that they will simply be very smart tools. But without giving an AI a starting set of desires and preferences comparable to those of humans, we will be interacting with a truly alien intelligence. I, for one, would be happier with an AI that at least felt regret about killing someone than with one that didn’t.

]]>
https://lifeboat.com/blog/2015/04/robotic-eq/feed 1
The Cities Science Fiction Built https://lifeboat.com/blog/2015/04/the-cities-science-fiction-built Wed, 29 Apr 2015 22:00:01 +0000 http://lifeboat.com/blog/?p=14117 Adam Rothstein | Motherboard
“In the city of the future, trains would rocket across overhead rails, airplanes would dive from the sky to land on the roof, and skyscrapers would stretch their sinewed limbs into the heavens to feel the hot pulse of radio waves beating across the planet. This artistic, but unbridled enthusiasm was the last century’s first expression of wholesale tech optimism.”  Read more

]]>
Injustice, Ethereum and the information renaissance… https://lifeboat.com/blog/2015/04/injustice-ethereum-and-the-information-renaissance Wed, 29 Apr 2015 19:24:08 +0000 http://lifeboat.com/blog/?p=14138 Quoted: “I recall reading somewhere that “Ethereum is to Bitcoin as an iPhone is to a calculator”, which is a pretty good analogy. Bitcoin proved to us that it was possible to keep a tamper-proof system synchronised across the globe. There really is no reason the same system can’t be applied to other problems in the same way we apply normal computers to them.

Ethereum is a single computer spread out over the internet, processing the information we all feed it together. I guess you could call it a ‘shared consciousness’ if you wanted to.

In this computer, information cannot be suppressed. In this computer, ideas and trust rule. Work and reputation are visible and independently verifiable. Anyone can contribute and everyone is automatically safe. Collaboration will overcome privatisation as people work together to build an open network of ideas contributing to the betterment of us all. They are calling it internet 3.0. And though web 2.0 was a thing in some ways, I think we’ll look back at everything up until this point as the first internet. The internet we built by adapting old communication lines into new ways of communicating. The internet we built when we were still used to centralising responsibility for things.”

Read the article here > http://pospi.spadgos.com/2014/11/30/injustice-ethereum-and-t…naissance/

]]>
What if Your Computer Cared About What Makes You Smile? https://lifeboat.com/blog/2015/04/what-if-your-computer-cared-about-what-makes-you-smile Wed, 29 Apr 2015 10:00:58 +0000 http://lifeboat.com/blog/?p=14115 Kyle Vanhemert | WIRED
“Your computer isn’t a person, but as psychological studies have shown, you often can’t help but treat it like one.”  Read more

]]>
The Coming Problem of Our iPhones Being More Intelligent Than Us https://lifeboat.com/blog/2015/04/the-coming-problem-of-our-iphones-being-more-intelligent-than-us Tue, 28 Apr 2015 22:00:12 +0000 http://lifeboat.com/blog/?p=14119 By — SingularityHub

Ray Kurzweil made a startling prediction in 1999 that appears to be coming true: that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain.  He also predicted that Moore’s Law, which postulates that the processing capability of a computer doubles every 18 months, would apply for 60 years — until 2025 — giving way then to new paradigms of technological change.

Kurzweil, a renowned futurist and the director of engineering at Google, now says that the hardware needed to emulate the human brain may be ready even sooner than he predicted — in around 2020 — using technologies such as graphics processing units (GPUs), which are ideal for brain-software algorithms. He predicts that the complete brain software will take a little longer: until about 2029.  Read more
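As a rough sanity check on the excerpt’s arithmetic (my own, not from the article): doubling every 18 months from 1999 through 2025 compounds into an enormous multiplier.

```python
# What "doubling every 18 months" implies between 1999 and 2025.
months = (2025 - 1999) * 12   # 312 months
doublings = months / 18       # about 17.3 doublings
growth = 2 ** doublings       # roughly a 165,000x increase
print(f"{doublings:.1f} doublings -> about {growth:,.0f}x the capability")
```

That five-orders-of-magnitude gap is why a fixed $1,000 price point can go from toy to brain-scale hardware within a single 26-year window.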

]]>
Putting the Data Science into Journalism https://lifeboat.com/blog/2015/04/putting-the-data-science-into-journalism Tue, 28 Apr 2015 10:00:20 +0000 http://lifeboat.com/blog/?p=14113 Keith Kirkpatrick | Communications of the ACM
“‘There’s this whole realization that if news organizations are to attract an audience, it’s not going to be by spewing out the stuff that everyone else is spewing out,’ says David Herzog, a professor at the University of Missouri …‘It is about giving the audience information that is unique, in-depth, that allows them to explore the data, and also engage with the audience.’”  Read more

]]>
3 Cities Using Open Data in Creative Ways to Solve Problems https://lifeboat.com/blog/2015/04/3-cities-using-open-data-in-creative-ways-to-solve-problems Mon, 27 Apr 2015 22:00:19 +0000 http://lifeboat.com/blog/?p=14110 Tanvi Misra | CityLab (Image: Flickr/Bart Everson)
“The idea is not just to teach city governments new techniques on harvesting open data to tackle urban problems and measure performance, but to replicate successful approaches that are already out there.”  Read more

]]>
What if one country achieves the singularity first? https://lifeboat.com/blog/2015/04/what-if-one-country-achieves-the-singularity-first Mon, 27 Apr 2015 10:00:54 +0000 http://lifeboat.com/blog/?p=14107 Zoltan Istvan | Motherboard
“Once uploaded, would your digital self be able to interact with your biological self? Would one self be able to help the other? Or would laws force an either-or situation, where uploaded people’s biological selves must remain in cryogenically frozen states or even be eliminated altogether?”  Read more

]]>