Archive for the ‘law’ category: Page 2

Jul 23, 2024

Human Brain Organoid Research and Applications: Where and How to Meet Legal Challenges?

Posted in categories: biotech/medical, ethics, law, neuroscience

One of the most debated ethical concerns regarding brain organoids is the possibility that they will become conscious (de Jongh et al. 2022). Currently, many researchers believe that human brain organoids will not become conscious in the near future (International Society for Stem Cell Research 2021). However, several theories of consciousness suggest that even existing human brain organoids could be conscious (Niikawa et al. 2022). Further, the feasibility depends on how “consciousness” is defined. For the sake of argument, we assume that human brain organoids can in principle be conscious, and we examine the legal implications of three types of “consciousness,” in order of how readily each could be realized. The first is non-valenced experience: mere sensory experience without positive or negative evaluation. The second is valenced experience, or sentience: experience that carries evaluations such as pain and pleasure. The third is a more developed cognitive capacity. We assume that if any of these forms of consciousness makes an entity a subject of (more complex) welfare, that entity may need to be legally (further) protected.

As a primitive form of consciousness, non-valenced experience would, if possible at all, be realized by human brain organoids earlier than other forms of consciousness. However, its legal implications remain unclear. Suppose welfare consists solely of good or bad experiences. In that case, human brain organoids with only non-valenced experience have nothing to protect, because they cannot have good or bad experiences. However, some argue that non-valenced experiences hold moral significance even without contributing to welfare. In addition, welfare may not be limited to experience, a view recently adopted in animal ethics (Beauchamp and DeGrazia 2020). On this perspective, even if human brain organoids possess only non-valenced experiences, or lack consciousness altogether, their basic sensory or motor capacities (Kataoka and Sawai 2023), or their possession of living or non-living bodies with which to exercise those capacities (Shepherd 2023), may warrant protection.

Jul 20, 2024

Eye reflections: The key to detecting deepfakes

Posted in categories: law, robotics/AI

Governments and organizations worldwide are beginning to recognize the potential dangers. Efforts are being made to develop more sophisticated deepfake detection tools and to establish legal frameworks to address the misuse of this technology.

However, the battle against these convincing fakes is ongoing, and as detection methods improve, so too do the techniques used to create them.

The combination of astronomical techniques and AI highlights a multidisciplinary approach to solving the problem, underscoring the need for innovative and collaborative solutions.
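The astronomy connection can be made concrete. A minimal sketch, assuming the approach compares how light is distributed in the corneal reflections of the two eyes using an inequality measure of the kind galaxy-morphology studies apply to pixel intensities; the function names and threshold here are hypothetical, not taken from the research itself:

```python
def gini(values):
    """Gini coefficient of pixel intensities: 0 = perfectly even, 1 = fully concentrated."""
    vals = sorted(values)
    n = len(vals)
    total = sum(vals)
    if total == 0:
        return 0.0
    # Sorted-rank formula: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum((i + 1) * v for i, v in enumerate(vals))
    return 2 * weighted / (n * total) - (n + 1) / n

def reflections_inconsistent(left_eye_pixels, right_eye_pixels, threshold=0.1):
    """Flag a face as suspicious when the two corneal reflections have very
    different light distributions; in a real photo both eyes see the same scene."""
    return abs(gini(left_eye_pixels) - gini(right_eye_pixels)) > threshold
```

In a genuine photograph, both corneas reflect the same light sources, so the two distributions should be close; generative models often render each eye independently, which is what a mismatch score like this tries to catch.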

Jul 15, 2024

The Legal War Against Deepfake Revenge Porn

Posted in category: law

The legal system is struggling to keep up with the criminalization of deepfake revenge porn, raising concerns about privacy, consent, and the need for more resources to detect and prove the authenticity of digital evidence.


Jul 14, 2024

Space Exploration: A Thriving Industry With Tangible Earthly Rewards

Posted in categories: economics, education, health, law, space travel

Furthermore, the synergy between educational programs, cultural influences and the tangible benefits derived from space exploration not only enriches our present-day society but also ensures a legacy of continuous innovation and exploration. This ongoing engagement with space inspires future generations to look beyond our planetary boundaries and consider what might be possible in the broader cosmos.

Space exploration presents significant challenges, including costs, astronaut health risks and technological hurdles for interstellar travel. Ethical and legal considerations regarding space colonization, resource utilization and celestial environmental impact require careful consideration and international cooperation.

While Silicon Valley visionaries envision a future among the stars, other voices remind us of our responsibilities to Earth. These are not mutually exclusive goals. By leveraging advancements and opportunities from space exploration, we can better protect and enhance life on Earth. Through economic benefits, scientific advancement and social inspiration, space exploration remains a crucial endeavor for humanity, not as an escape from our problems, but as a way to expand our horizons and solve them on our home planet.

Jul 12, 2024

Is OI the New AI? Questions Surrounding “Brainoware”

Posted in categories: law, robotics/AI

Hybridizing OI and AI, and adding what seems like a “human” component to our current advances, probably raises more questions than it answers. Here are some of those questions for the law, and how we might begin to think about them.

The Best — and Worst — of Brains

Envisioning how brain organoids might become entangled with the law doesn’t take a wild imaginative leap; many of the questions we might have about brain organoid models are similar to the ones we’re currently grappling with regarding artificial intelligence. Would OI warrant recognition for the work it produces? And is that output protectable? Under current (and quickly evolving) copyright developments, AI doesn’t meet the “human” requirement for authorship on its own. But AI (and OI) require human input to work, and there may be some wiggle room on protecting AI work: either citing AI as a joint author with human operators, or treating a certain threshold of human control over the AI-generated work as sufficient for copyright protection.

Jul 11, 2024

Could AIs become conscious? Right now, we have no way to tell

Posted in categories: biological, ethics, law, robotics/AI

Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that can be replicated by machines. Should artificial general intelligence (AGI) arrive in full force—artificial intelligence that surpasses human intelligence—the boundary between human and computer capabilities will diminish entirely.

In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.

Neither potential future feels all that cozy, and both require an answer to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?

Jul 10, 2024

The Promise and Peril of AI

Posted in categories: biotech/medical, drones, ethics, existential risks, law, military, robotics/AI

In early 2023, following an international conference that included dialogue with China, the United States released a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” urging states to adopt sensible policies that include ensuring ultimate human control over nuclear weapons. Yet the notion of “human control” itself is hazier than it might seem. If humans authorized a future AI system to “stop an incoming nuclear attack,” how much discretion should it have over how to do so? The challenge is that an AI general enough to successfully thwart such an attack could also be used for offensive purposes.

We need to recognize the fact that AI technologies are inherently dual-use. This is true even of systems already deployed. For instance, the very same drone that delivers medication to a hospital that is inaccessible by road during a rainy season could later carry an explosive to that same hospital. Keep in mind that military operations have for more than a decade been using drones so precise that they can send a missile through a particular window that is literally on the other side of the earth from its operators.

We also have to think through whether we would really want our side to observe a lethal autonomous weapons (LAW) ban if hostile military forces are not doing so. What if an enemy nation sent an AI-controlled contingent of advanced war machines to threaten your security? Wouldn’t you want your side to have an even more intelligent capability to defeat them and keep you safe? This is the primary reason that the “Campaign to Stop Killer Robots” has failed to gain major traction. As of 2024, all major military powers have declined to endorse the campaign, with the notable exception of China, which did so in 2018 but later clarified that it supported a ban only on use, not on development, though even this is likely more for strategic and political reasons than moral ones, as autonomous weapons used by the United States and its allies could disadvantage Beijing militarily.

Jul 8, 2024

Beyond Borders: Applying Modern Conflict Laws as Framework for Outer Space Governance

Posted in categories: energy, finance, governance, law, military, satellites, surveillance

Moreover, the concept of limitation, which dictates that the means and methods of warfare are not unlimited, can help prevent the escalation of conflicts in space by restricting the use of weapons or tactics that could cause indiscriminate harm or have long-term consequences for space exploration and utilization. Given the growing number of distinct weapons systems in orbit – from missile-defense systems with kinetic anti-satellite capabilities, electronic-warfare counter-space capabilities, and directed-energy weapons to GPS jammers and space situational awareness, surveillance, and intelligence-gathering capabilities – legal clarity rather than strategic ambiguity is crucial for ensuring the responsible and peaceful use of outer space.

Additionally, the principle of humanity underscores the importance of treating all individuals with dignity and respect, including astronauts, cosmonauts, and civilians who may be affected by conflicts in space. By upholding this principle, outer space law can ensure that human rights are protected and preserved, particularly in the profoundly challenging environment of outer space. Moreover, with civilians on the ground increasingly tethered to space technologies for communication, navigation, banking, leisure, and other essential services, the protection of their rights becomes a fundamental imperative.

The modern laws of armed conflict (LOAC) offer a valuable blueprint for developing a robust legal framework for governing activities in outer space. By integrating complementary principles of LOAC or international humanitarian law with the UN Charter into outer space law, policymakers can promote the peaceful and responsible use of outer space while mitigating the risks associated with potential conflicts in this increasingly contested domain.

Jul 3, 2024

YouTube Now Lets You Remove AI Content That Copies Your Looks or Voice

Posted in categories: law, policy, privacy, robotics/AI

Back in June, YouTube quietly made a subtle but significant policy change that, surprisingly, benefits users by allowing them to remove AI-made videos that simulate their appearance or voice from the platform under YouTube’s privacy request process.

First spotted by TechCrunch, the revised policy encourages affected parties to directly request the removal of AI-generated content on the grounds of privacy concerns and not for being, for example, misleading or fake. YouTube specifies that claims must be made by the affected individual or authorized representatives. Exceptions include parents or legal guardians acting on behalf of minors, legal representatives, and close family members filing on behalf of deceased individuals.

According to the new policy, if a privacy complaint is filed, YouTube will notify the uploader about the potential violation and provide an opportunity to remove or edit the private information within their video. YouTube may, at its own discretion, grant the uploader 48 hours to utilize the Trim or Blur tools available in YouTube Studio and remove parts of the footage from the video. If the uploader chooses to remove the video altogether, the complaint will be closed, but if the potential privacy violation remains within those 48 hours, the YouTube Team will review the complaint.
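The steps above amount to a small decision procedure. A minimal sketch, where the state names and helper function are hypothetical labels for the policy as described, not YouTube’s actual system:

```python
TRIM_WINDOW_HOURS = 48  # editing window YouTube may grant, per the policy

def resolve_privacy_complaint(uploader_action, hours_elapsed):
    """Outcome of a privacy complaint given the uploader's response.

    uploader_action: "removed", "trimmed", "blurred", or "none".
    """
    if uploader_action == "removed":
        return "complaint closed"  # uploader took the video down
    if uploader_action in ("trimmed", "blurred") and hours_elapsed <= TRIM_WINDOW_HOURS:
        return "complaint closed"  # private information edited out in time
    return "reviewed by YouTube Team"  # violation still present after the window
```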

Jun 28, 2024

A mechanism that realizes strong emergence

Posted in categories: law, materials

The causal efficacy of a material system is usually thought to be produced by the law-like actions and interactions of its constituents. Here, a…
