Archive for the ‘ethics’ category: Page 2

Jul 11, 2024

Could AIs become conscious? Right now, we have no way to tell

Posted in categories: biological, ethics, law, robotics/AI

Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that can be replicated by machines. Should artificial general intelligence (AGI) arrive in full force—artificial intelligence that surpasses human intelligence—the boundary between human and computer capabilities will diminish entirely.

In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.

Neither potential future feels all that cozy, and both require an answer to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?

Jul 10, 2024

The Promise and Peril of AI

Posted in categories: biotech/medical, drones, ethics, existential risks, law, military, robotics/AI

In early 2023, following an international conference that included dialogue with China, the United States released a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” urging states to adopt sensible policies that include ensuring ultimate human control over nuclear weapons. Yet the notion of “human control” itself is hazier than it might seem. If humans authorized a future AI system to “stop an incoming nuclear attack,” how much discretion should it have over how to do so? The challenge is that an AI general enough to successfully thwart such an attack could also be used for offensive purposes.

We need to recognize the fact that AI technologies are inherently dual-use. This is true even of systems already deployed. For instance, the very same drone that delivers medication to a hospital that is inaccessible by road during a rainy season could later carry an explosive to that same hospital. Keep in mind that military operations have for more than a decade been using drones so precise that they can send a missile through a particular window that is literally on the other side of the earth from its operators.

We also have to think through whether we would really want our side to observe a lethal autonomous weapons (LAW) ban if hostile military forces are not doing so. What if an enemy nation sent an AI-controlled contingent of advanced war machines to threaten your security? Wouldn’t you want your side to have an even more intelligent capability to defeat them and keep you safe? This is the primary reason that the “Campaign to Stop Killer Robots” has failed to gain major traction. As of 2024, all major military powers have declined to endorse the campaign, with the notable exception of China, which did so in 2018 but later clarified that it supported a ban only on use, not on development—although even this is likely more for strategic and political reasons than moral ones, as autonomous weapons used by the United States and its allies could disadvantage Beijing militarily.

Jul 9, 2024

Thomas Hartung and colleagues | The future of organoid intelligence | Frontiers Forum Deep Dive 2023

Posted in categories: biotech/medical, chemistry, computing, engineering, ethics, health, neuroscience, policy

Excellent.


Human brains outperform computers in many forms of processing and are far more energy efficient. What if we could harness their power in a new form of biological computing?


Jul 9, 2024

Philosopher David Chalmers: We Can Be Rigorous in Thinking about the Future

Posted in categories: bioengineering, ethics, life extension, Ray Kurzweil, robotics/AI, singularity

David is one of the world’s best-known philosophers of mind and thought leaders on consciousness. I was a freshman at the University of Toronto when I first read some of his work. Since then, Chalmers has been one of the few philosophers (together with Nick Bostrom) who have written and spoken publicly about the Matrix simulation argument and the technological singularity. (See, for example, David’s presentation at the 2009 Singularity Summit, or read his essay “The Singularity: A Philosophical Analysis.”)

During our conversation with David, we discuss topics such as: how and why Chalmers got interested in philosophy; his search to answer what he considers some of the biggest questions, such as the nature of reality, consciousness, and artificial intelligence; the fact that academia in general, and philosophy in particular, doesn’t seem to engage with technology; our chances of surviving the technological singularity; the importance of Watson, the Turing Test, and other benchmarks on the way to the singularity; consciousness, recursive self-improvement, and artificial intelligence; the ever-shrinking domain of solely human expertise; mind uploading and what he calls the hard problem of consciousness; the usefulness of philosophy and ethics; religion, immortality, and life extension; and reverse engineering long-dead people, such as Ray Kurzweil’s father.

As always, you can listen to or download the audio file above, or scroll down and watch the video interview in full. To show your support, you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

Jul 5, 2024

Anders Sandberg: We Are All Amazingly Stupid, But We Can Get Better

Posted in categories: ethics, singularity, transhumanism

Want to find out how and why Anders Sandberg got interested in transhumanism and ethics? Want to hear his take on the singularity? Check out his interview for SingularityWeblog.com

Jul 5, 2024

Exploring AI, Cognitive Science, and Ethics | Deep Interview with Jay Friedenberg

Posted in categories: biotech/medical, ethics, finance, robotics/AI, science, singularity

In this thought-provoking lecture, Prof. Jay Friedenberg from Manhattan College delves into the intricate interplay between cognitive science, artificial intelligence, and ethics. With nearly 30 years of teaching experience, Prof. Friedenberg discusses how visual perception research informs AI design, the implications of brain-machine interfaces, the role of creativity in both humans and AI, and the necessity for ethical considerations as technology evolves. He emphasizes the importance of human agency in shaping our technological future and explores the concept of universal values that could guide the development of AGI for the betterment of society.

00:00 Introduction to Jay Friedenberg
01:02 Connecting Cognitive Science and AI
02:36 Human Augmentation and Technology
03:50 Brain-Machine Interfaces
05:43 Balancing Optimism and Caution in AI
07:52 Free Will vs. Determinism
12:34 Creativity in Humans and Machines
16:45 Ethics and Value Alignment in AI
20:09 Conclusion and Future Work


Jun 25, 2024

AI needs design consciousness

Posted in categories: ethics, robotics/AI

My thoughts on ethics and human-centric design in AI advancements.

Jun 20, 2024

Exploring Social Neuroscience — Serious Science

Posted in categories: ethics, neuroscience, science

Is our brain responsible for how we react to people who are different from us? Why can’t people with autism tell lies? How does the brain produce empathy? Why is imitation a fundamental trait of any social interaction? What are the secret advantages of teamwork? How does the social environment influence the brain? Why is laughter different from any other emotion?

This course is aimed at deepening our understanding of how the brain shapes and is shaped by social behavior, exploring a variety of topics such as the neural mechanisms behind social interactions, social cognition, theory of mind, empathy, imitation, mirror neurons, interacting minds, and the science of laughter.


Jun 17, 2024

Are Children The Future?: Longtermism, Pronatalism, and Epistemic Discounting

Posted in categories: economics, ethics, existential risks, life extension, policy

From the article:

Longtermism asks fundamental questions and promotes the kind of consequentialism that should guide public policy.


Based on a talk delivered at the conference on Existential Threats and Other Disasters: How Should We Address Them? May 30–31, 2024 – Budva, Montenegro – sponsored by the Center for the Study of Bioethics, The Hastings Center, and The Oxford Uehiro Center for Practical Ethics.


Jun 15, 2024

Beyond Binary: Exploring a Spectrum of Artificial Sentience

Posted in categories: ethics, robotics/AI

Envision AI evolving beyond mere imitation, surpassing human intelligence to redefine the boundaries of consciousness and ethics.

Page 2 of 82