Archive for the ‘ethics’ category: Page 36

Jul 23, 2020

Intelligence community rolls out guidelines for ethical use of artificial intelligence

Posted by in categories: ethics, robotics/AI, security

The U.S. intelligence community (IC) on Thursday rolled out an “ethics guide” and framework for how intelligence agencies can responsibly develop and use artificial intelligence (AI) technologies.

Among the key ethical requirements were shoring up security, respecting human dignity by complying with existing civil rights and privacy laws, rooting out bias to ensure AI use is “objective and equitable,” and ensuring human judgment is incorporated into AI development and use.

The IC wrote in the framework, which digs into the details of the ethics guide, that it was intended to ensure that use of AI technologies matches “the Intelligence Community’s unique mission purposes, authorities, and responsibilities for collecting and using data and AI outputs.”

Jul 6, 2020

Study tests whether AI can convincingly answer existential questions

Posted by in categories: Elon Musk, ethics, robotics/AI

A new study has explored whether AI can provide more attractive answers to humanity’s most profound questions than history’s most influential thinkers.

Researchers from the University of New South Wales first fed a series of moral questions to Salesforce’s CTRL system, a text generator trained on millions of documents and websites, including all of Wikipedia. They added its responses to a collection of reflections from the likes of Plato, Jesus Christ, and, err, Elon Musk.

The team then asked more than 1,000 people which musings they liked best — and whether they could identify the source of the quotes.
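
For readers curious what the generation step might look like in practice, here is a minimal sketch, not the researchers’ actual pipeline: it assumes the Hugging Face transformers library and its hosted Salesforce/ctrl checkpoint, and the prompt and sampling settings are illustrative guesses rather than the study’s configuration.

```python
# Minimal sketch of prompting the CTRL text generator with a moral question.
# Assumes the Hugging Face "transformers" library and the Salesforce/ctrl
# checkpoint; this is NOT the study's actual code or configuration.
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# CTRL conditions its output on a control-code prefix; "Questions" steers it
# toward question-and-answer style text. The question itself is a hypothetical
# stand-in for the moral prompts used in the study.
prompt = "Questions Q: What makes a life worth living? A:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation; the repetition penalty keeps CTRL from looping.
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Text generated this way could then be mixed with quotations from historical thinkers before being shown to survey participants, along the lines the study describes.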

Jun 27, 2020

AI gatekeepers are taking baby steps toward raising ethical standards

Posted by in categories: ethics, robotics/AI, surveillance

For years, Brent Hecht, an associate professor at Northwestern University who studies AI ethics, felt like a voice crying in the wilderness. When he entered the field in 2008, “I recall just agonizing about how to get people to understand and be interested and get a sense of how powerful some of the risks [of AI research] could be,” he says.

To be sure, Hecht wasn’t—and isn’t—the only academic studying the societal impacts of AI. But the group is small. “In terms of responsible AI, it is a sideshow for most institutions,” Hecht says. But in the past few years, that has begun to change. The urgency of AI’s ethical reckoning has only increased since Minneapolis police killed George Floyd, shining a light on AI’s role in discriminatory police surveillance.

This year, for the first time, major AI conferences—the gatekeepers for publishing research—are forcing computer scientists to think about those consequences.

Jun 27, 2020

Three Big Tech Players Back Out of Facial Recognition Market

Posted by in categories: business, ethics, robotics/AI

In the span of 72 hours, both IBM and Amazon backed out of the facial recognition business this week.

It’s a chess match on the geopolitical playing board, with AI ethics and data bias in play.

Jun 26, 2020

Is it time to replace one of the cornerstones of animal research?

Posted by in categories: biotech/medical, ethics

But as millions of animals continue to be used in biomedical research each year, and new legislation calls on federal agencies to reduce and justify their animal use, some have begun to argue that it’s time to replace the three Rs themselves. “It was an important advance in animal research ethics, but it’s no longer enough,” Tom Beauchamp told attendees last week at a lab animal conference.


Science talks with two experts in animal ethics who want to go beyond the three Rs.

Jun 25, 2020

If AI is going to help us in a crisis, we need a new kind of ethics

Posted by in categories: ethics, robotics/AI

Are you for ethical AI, Eric Klien?


Jess Whittlestone at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and her colleagues published a comment piece in Nature Machine Intelligence this week arguing that if artificial intelligence is going to help in a crisis, we need a new, faster way of doing AI ethics, which they call ethics for urgency.

For Whittlestone, this means anticipating problems before they happen, finding better ways to build safety and reliability into AI systems, and emphasizing technical expertise at all levels of the technology’s development and use. At the core of these recommendations is the idea that ethics needs to become simply a part of how AI is made and used, rather than an add-on or afterthought.


Jun 21, 2020

Arrival of Gene-Edited Babies: What lies ahead?

Posted by in categories: biotech/medical, ethics, genetics

By Valentina Lagomarsino, figures by Sean Wilson

Nearly four months ago, Chinese researcher He Jiankui announced that he had edited the genes of twin babies with CRISPR. CRISPR, also known as CRISPR/Cas9, can be thought of as “genetic scissors” that can be programmed to edit DNA in any cell. Last year, scientists used CRISPR to cure dogs of Duchenne muscular dystrophy. This was a huge step forward for gene therapies, demonstrating CRISPR’s potential to treat otherwise incurable diseases. However, a global community of scientists believes it is premature to use CRISPR in human babies because of inadequate scientific review and a lack of international consensus regarding the ethics of when and how this technology should be used.

Early regulation of gene-editing technology.

Jun 13, 2020

Ethics Review Boards and AI Self-Driving Cars

Posted by in categories: ethics, robotics/AI, transportation

What does this have to do with AI self-driving cars?

AI Self-Driving Cars Will Need to Make Life-or-Death Judgements

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One crucial aspect of the AI for self-driving cars is the need for the AI to make “judgments” about driving situations, including ones that involve life-and-death matters.

Apr 20, 2020

Britain Is Developing an AI-Powered Predictive Policing System

Posted by in categories: ethics, health, robotics/AI

What police would do with the information has yet to be determined. The head of the West Midlands Police (WMP) told New Scientist they won’t be preemptively arresting anyone; instead, the idea would be to use the information to provide early intervention from social or health workers to help keep potential offenders on the straight and narrow or protect potential victims.

But data ethics experts have voiced concerns that the police are stepping into an ethical minefield they may not be fully prepared for. Last year, WMP asked researchers at the Alan Turing Institute’s Data Ethics Group to assess a redacted version of the proposal, and last week they released an ethics advisory in conjunction with the Independent Digital Ethics Panel for Policing.

While the authors applaud the force for attempting to develop an ethically sound and legally compliant approach to predictive policing, they warn that the ethical principles in the proposal are not developed enough to deal with the broad challenges this kind of technology could throw up, and that “frequently the details are insufficiently fleshed out and important issues are not fully recognized.”

Apr 2, 2020

AI as mediator: ‘Smart’ replies help humans communicate during pandemic

Posted by in categories: ethics, robotics/AI

Daily life during a pandemic means social distancing and finding new ways to remotely connect with friends, family and co-workers. And as we communicate online and by text, artificial intelligence could play a role in keeping our conversations on track, according to new Cornell University research.

Humans having difficult conversations said they trusted artificially intelligent systems—the “smart” reply suggestions in texts—more than the people they were talking to, according to a new study, “AI as a Moral Crumple Zone: The Effects of Mediated AI Communication on Attribution and Trust,” published online in the journal Computers in Human Behavior.

“We find that when things go wrong, people take the responsibility that would otherwise have been designated to their human partner and designate some of that to the system,” said Jess Hohenstein, a doctoral student in the field of information science and the paper’s first author. “This introduces a potential to take AI and use it as a mediator in our conversations.”
