Archive for the ‘ethics’ category: Page 5

Jan 15, 2024

Organoid Intelligence Overtaking AI

Posted by in categories: biotech/medical, ethics, robotics/AI

Organoid intelligence involves growing “mini-brains” from human stem cells, a technique with potential benefits for medical research and treatments.

However, there are significant ethical concerns about the possibility of creating conscious entities and the potential for misuse. Organoid intelligence could offer valuable insights into neurological diseases, but we must establish a framework for how organoids are created and treated to ensure ethical use. As we continue to develop this technology, we must approach it with caution, given the dire consequences its misuse could have.


Dec 26, 2023

Inner Experience — Direct Access to Reality: A Complementarist Ontology and Dual Aspect Monism Support a Broader Epistemology

Posted by in categories: ethics, mathematics, neuroscience

Ontology, the ideas we have about the nature of reality, and epistemology, our concepts about how to gain knowledge about the world, are interdependent. Currently, the dominant ontology in science is a materialist model, with an empiricist epistemology associated with it. Historically speaking, there was a more comprehensive notion at the cradle of modern science in the Middle Ages. Then “experience” meant both inner (first-person) and outer (third-person) experience. With historical development, experience has come to mean only sense experience of outer reality. This has become associated with the ontology that matter is the most important substance in the universe, everything else (consciousness, mind, values, etc.) being derived from or reducible to it. This ontology is insufficient to explain the phenomena we are living with: consciousness, as a precondition of this very idea, or anomalous cognitions. These have a robust empirical grounding, although we do not understand them sufficiently. The phenomenology, though, demands some sort of non-local model of the world, and one in which consciousness is not derivative of, but coprimary with, matter. I propose such a complementarist dual-aspect model of consciousness and brain, or mind and matter. This then also entails a different epistemology. For if consciousness is coprimary with matter, then we can also use a deeper exploration of consciousness, as happens in contemplative practice, to reach an understanding of the deep structure of the world, for instance in mathematical or theoretical intuition, and perhaps also in other areas such as ethics. This would entail a kind of contemplative science that would complement our current experiential mode, which is exclusively directed to the outside aspect of our world. Such an epistemology might help us with various issues, such as good theoretical and other intuitions.

Keywords: complementarity; consciousness; contemplative science; dual aspect model; epistemology; introspection; materialism; ontology.

Copyright © 2020 Walach.

Dec 14, 2023

Google’s New AI, Gemini, Beats ChatGPT In 30 Of 32 Test Categories

Posted by in categories: biotech/medical, ethics, law, mathematics, robotics/AI

Google has released a new Pro model of its latest AI, Gemini, and company sources say it has outperformed GPT-3.5 (the model behind the free version of ChatGPT) in widespread testing. According to performance reports, the top-tier Gemini Ultra exceeds current state-of-the-art results on 30 of the 32 widely used academic benchmarks common in large language model (LLM) research and development. Google has been accused of lagging behind OpenAI’s ChatGPT, widely regarded as the most popular and powerful offering in the AI space. Google says Gemini was trained to be multimodal, meaning it can process different types of media such as text, pictures, video, and audio.

Insider also reports that, with a score of 90.0%, Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities.

The Google-based AI comes in three sizes: Ultra, the flagship model; Pro; and Nano, designed for mobile devices. According to reports from TechCrunch, the company says it is making Gemini Pro available to enterprise customers through its Vertex AI program, and to developers in AI Studio, on December 13. Reports indicate that the Pro version can also be accessed via Bard, the company’s chatbot interface.

Dec 12, 2023

Five things you need to know about the EU’s new AI Act

Posted by in categories: ethics, robotics/AI

The EU is poised to effectively become the world’s AI police, creating binding rules on transparency, ethics, and more.

It’s done.

Dec 12, 2023

What To Know About Where ChatGPT Is Going In 2024

Posted by in categories: business, education, ethics, robotics/AI, sustainability

It’s hard to believe, but generative AI — the seemingly ubiquitous technology behind ChatGPT — was launched just one year ago, in late November 2022.


Still, as technologists discover more and more use cases for saving time and money, enterprises, schools, and businesses the world over are struggling to find the technology’s rightful place in the “real world.”

As the year has progressed, the technology’s rapid onset and proliferation have led not only to rapid innovation and competitive leapfrogging, but also to a continued wave of moral and ethical debates. They have even prompted early regulation and executive orders on the implementation of AI around the world, as well as global alliances, like the recent Meta + IBM AI Alliance, that aim to develop open frameworks and stronger standards for implementing safe and economically sustainable AI.


Dec 8, 2023

Tech firms failing to ‘walk the walk’ on ethical AI, report says

Posted by in categories: ethics, robotics/AI

Stanford University researchers say AI ethics practitioners report lacking institutional support at their companies.

Dec 7, 2023

The Neurobiological Platform for Moral Intuitions: Dr. Patricia Churchland

Posted by in category: ethics


Dec 1, 2023

Study uncovers link between musical preferences and our inner moral compass

Posted by in categories: ethics, media & arts, robotics/AI

A new study, published in PLOS ONE, has uncovered a remarkable connection between individuals’ musical preferences and their moral values, shedding new light on the profound influence that music can have on our moral compass.

The research, conducted by a team of scientists at Queen Mary University of London and the ISI Foundation in Turin, Italy, employed machine learning techniques to analyze the lyrics and audio features of individuals’ favorite songs, revealing a complex interplay between musical preferences and morality.

“Our study provides compelling evidence that music preferences can serve as a window into an individual’s moral values,” stated Dr. Charalampos Saitis, one of the senior authors of the study and Lecturer in Digital Music Processing at Queen Mary University of London’s School of Electronic Engineering and Computer Science.
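The lyric analysis described above could, in a much-simplified toy form, amount to scoring a song’s words against moral-foundation word lists. A minimal sketch follows; the lexicon entries, function name, and example lyric are all illustrative assumptions, not the study’s actual models or data:

```python
# Toy sketch: score lyrics against hypothetical moral-foundation
# word lists. The real study used far richer machine learning
# features; everything below is illustrative only.

MORAL_LEXICON = {
    "care":     {"love", "kind", "gentle", "hurt", "protect"},
    "fairness": {"fair", "equal", "justice", "cheat"},
    "loyalty":  {"together", "family", "betray", "us"},
}

def moral_profile(lyrics: str) -> dict:
    """Return the share of lyric words matching each foundation."""
    words = [w.strip(".,!?") for w in lyrics.lower().split()]
    total = max(len(words), 1)
    return {
        foundation: sum(w in vocab for w in words) / total
        for foundation, vocab in MORAL_LEXICON.items()
    }

profile = moral_profile("Love and protect the ones together with us")
```

In practice, such bag-of-words counts would be only one input among many; the study also drew on audio features and more sophisticated models.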

Nov 30, 2023

Artificial Intelligence Needs Spiritual Intelligence

Posted by in categories: ethics, robotics/AI

One group, A.I. and Faith, convenes tech executives to discuss the important questions about faith’s contributions to artificial intelligence. Its founder, David Brenner, explained: “The biggest questions in life are the questions that A.I. is posing, but it’s doing it mostly in isolation from the people who’ve been asking those questions for 4,000 years.” Questions such as “what is the purpose of life?” have long been tackled by religious philosophy and thought. And yet today these questions are being answered, and programmed into our machines, mostly by secular thinkers, and sometimes by those antagonistic toward religion. Technology creators, innovators, and corporations should build accessible coalitions of diverse thinkers to bring religious thought into technological development, including artificial intelligence.

Independent of development, faith leaders have a critical role to play in moral accountability and upholding human rights through the technology we already use in everyday life including social media. The harms of religious illiteracy, misinformation, and persecution are largely perpetrated through existing technology such as hate speech on Facebook, which quickly escalated to mass atrocities against the Rohingya Muslims in Myanmar. Individuals who have faith in the future must take an active role in combating misinformation, hate speech, and online bullying of any group.

The future of artificial intelligence will require spiritual intelligence, or “the human capacity to ask questions about the ultimate meaning of life and the integrated relationship between us and the world in which we live.” Artificial intelligence becomes a threat to humanity when humans fail to protect freedom of conscience, thought, and religion and when we allow our spiritual intelligence to be superseded by the artificial.

Nov 29, 2023

OpenAI’s board might have been dysfunctional–but they made the right choice. Their defeat shows that in the battle between AI profits and ethics, it’s no contest

Posted by in categories: ethics, finance, robotics/AI

Altman seemed to understand his responsibility to run a viable, enduring organization and keep its employees happy. He was on his way to pulling off a tender offer: a secondary round of investment in AI that would give the company much-needed cash and provide employees with the opportunity to cash out their shares. He also seemed very comfortable engaging in industry-wide issues like regulation and standards. Finding a balance between those activities is part of the work of corporate leaders, and perhaps the board felt that Altman failed to find such a balance in the months leading up to his firing.

Microsoft seems to be the most clear-eyed about the interests it must protect: Microsoft’s! By hiring Sam Altman and Greg Brockman (a co-founder and president of OpenAI who resigned from OpenAI in solidarity with Altman), offering to hire more OpenAI staff, and still planning to collaborate with OpenAI, Satya Nadella hedged his bets. He seems to understand that by harnessing both the technological promise of AI, as articulated by OpenAI, and the talent to fulfill that promise, he is protecting Microsoft’s interests. That perspective was reinforced by the financial markets’ positive response to his decision to offer Altman a job, and further reinforced by his own willingness to support Altman’s return to OpenAI. Nadella acted with the interests of his company and its future at the forefront of his decision-making, and he appears to have covered all the bases amid a rapidly unfolding set of circumstances.

OpenAI employees may not like the board’s dramatic retort that allowing the company to be destroyed would be consistent with the mission–but those board members saw it that way.
