
A team of researchers from the U.S. Department of Energy's (DOE) Brookhaven National Laboratory and Stony Brook University has devised a new quantum algorithm to compute the lowest energies of molecules at specific configurations during chemical reactions, including when their chemical bonds are broken. As described in Physical Review Research, compared to similar existing algorithms, including the team's previous method, the new algorithm significantly improves scientists' ability to accurately and reliably calculate the potential energy surface of reacting molecules.
For this work, Deyu Lu, a Center for Functional Nanomaterials (CFN) physicist at Brookhaven Lab, worked with Tzu-Chieh Wei, an associate professor specializing in quantum information science at the C.N. Yang Institute for Theoretical Physics at Stony Brook University, Qin Wu, a theorist at CFN, and Hongye Yu, a Ph.D. student at Stony Brook.
“Understanding the quantum mechanics of a molecule, how it behaves at an atomic level, can provide key insight into its chemical properties, like its stability and reactivity,” said Lu.
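The team's quantum algorithm itself is beyond a short sketch, but the quantity it targets, a potential energy surface, is easy to illustrate classically. The toy below uses a standard Morse potential with made-up, roughly H2-like parameters (none of this comes from the paper) to scan a bond length and locate the minimum-energy configuration:

```python
import math

def morse_energy(r, d_e=4.75, a=1.94, r_e=0.74):
    """Morse potential energy (eV) for a diatomic bond of length r (angstrom).

    Parameters loosely resemble H2; purely illustrative.
    """
    return d_e * (1.0 - math.exp(-a * (r - r_e))) ** 2 - d_e

# Scan bond lengths to trace a 1-D potential energy surface.
lengths = [0.4 + 0.02 * i for i in range(101)]
surface = [(r, morse_energy(r)) for r in lengths]

# The equilibrium bond length is where the energy is lowest; energies far
# to the right of the minimum describe the bond being stretched and broken.
r_min, e_min = min(surface, key=lambda p: p[1])
print(f"minimum near r = {r_min:.2f} angstrom, E = {e_min:.3f} eV")
```

A quantum algorithm's job on real molecules is to produce points like these, but from the molecular Hamiltonian rather than a fitted model potential.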
Or so goes the theory. Most work on compute-in-memory (CIM) chips for AI has focused solely on chip design, showcasing capabilities in simulations of the chip rather than running tasks on full-fledged hardware. The chips also struggle to adjust to multiple different AI tasks, such as image recognition and voice perception, limiting their integration into smartphones and other everyday devices.
This month, a study in Nature upgraded CIM from the ground up. Rather than focusing solely on the chip’s design, the international team—led by neuromorphic hardware experts Dr. H.S. Philip Wong at Stanford and Dr. Gert Cauwenberghs at UC San Diego—optimized the entire setup, from technology to architecture to algorithms that calibrate the hardware.
The resulting NeuRRAM chip is a powerful neuromorphic computing behemoth with 48 parallel cores and 3 million memory cells. Extremely versatile, the chip tackled multiple AI standard tasks—such as reading hand-written numbers, identifying cars and other objects in images, and decoding voice recordings—with over 84 percent accuracy.
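NeuRRAM's analog circuitry can't be reproduced in a few lines, but the core CIM idea, performing a matrix-vector multiply inside the memory array itself, can be modeled. In the idealized sketch below (the conductance and voltage values are invented), each memory cell contributes a current G·V by Ohm's law, and the currents on each row wire sum automatically by Kirchhoff's current law:

```python
def cim_matvec(conductances, voltages):
    """Model one analog matrix-vector multiply in a memory crossbar.

    conductances: rows of cell conductances G[i][j] (the stored weights).
    voltages: input voltages V[j] applied on the columns.
    Each cell passes a current I = G * V; currents sharing a row wire sum
    on their own, so reading one row current yields one dot product with
    no data shuttled to a separate arithmetic unit.
    """
    return [sum(g * v for g, v in zip(row, voltages)) for row in conductances]

weights = [[0.2, 0.5, 0.1],
           [0.7, 0.0, 0.3]]
inputs = [1.0, 2.0, 3.0]
print(cim_matvec(weights, inputs))  # two row currents, i.e. W @ x
```

Because matrix-vector products dominate neural-network inference, doing them where the weights already live is what saves the data-movement energy.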
This places Drake in the company of towering physicists with equations named after them, including James Clerk Maxwell and Erwin Schrödinger. Unlike those, Drake’s equation does not encapsulate a law of nature. Instead, it combines some poorly known probabilities into an informed estimate.
Whatever reasonable values you feed into the equation, it is hard to avoid the conclusion that we shouldn't be alone in the galaxy. Drake remained a proponent of the search for extraterrestrial life throughout his life, but has his equation taught us anything?
Drake’s equation may look complicated, but its principles are rather simple. It states that in a galaxy as old as ours, the number of civilizations that are detectable by virtue of them broadcasting their presence must equate to the rate at which they arise, multiplied by their average lifetime.
What if humans were gods instead? Join us… and find out more!
In this video, Unveiled takes a closer look at one of the ultimate what if scenarios — what if humans became GODS? According to some predictions, science and technology will one day lead us to godlike power… so what will we do with that responsibility? Will we use it for good or for bad?
Choosing an interesting dissertation topic in ML has become a top priority for Master's and doctoral scholars. Ph.D. candidates are highly motivated to choose research topics that open new and creative paths toward discovery in their field of study. Selecting and working on a dissertation topic in machine learning is not an easy task: machine learning uses statistical algorithms to make computers work in a certain way without being explicitly programmed, with the broader aim of creating intelligent machines that can think and work like human beings. This article features the top 10 ML dissertations for Ph.D. students to try in 2022.
Text Mining and Text Classification: Text mining is an AI technology that uses NLP to transform the free text in documents and databases into normalized, structured data suitable for analysis or to drive ML algorithms. This is one of the best research and thesis topics for ML projects.
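As an illustration of the pipeline this topic describes, here is a minimal, standard-library-only sketch that normalizes free text into bag-of-words vectors and classifies new documents by cosine similarity to per-class centroids (the tiny corpus and its labels are invented for the example):

```python
from collections import Counter
import math

def vectorize(text):
    """Normalize free text into a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Tiny labeled corpus, invented for illustration.
train = [
    ("the stock market rallied on earnings", "finance"),
    ("investors bought shares after the report", "finance"),
    ("the team won the championship game", "sports"),
    ("the striker scored a late goal", "sports"),
]

# One centroid per class: the summed term counts of its documents.
centroids = {}
for text, label in train:
    centroids.setdefault(label, Counter()).update(vectorize(text))

def classify(text):
    vec = vectorize(text)
    return max(centroids, key=lambda label: cosine(vec, centroids[label]))

print(classify("shares rallied after strong earnings"))  # finance
```

Real thesis work would replace the bag-of-words step with richer representations, but the structure, normalize then vectorize then compare, is the same.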
Recognition of Everyday Activities through Wearable Sensors and Machine Learning: The goal of the research detailed in this dissertation is to explore and develop accurate and quantifiable sensing and machine learning techniques for eventual real-time health monitoring by wearable device systems.
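A common first step in such systems is to slide a window over the raw sensor stream and extract summary statistics as features for a downstream classifier. The sketch below uses synthetic accelerometer data and an arbitrary window size, purely to show the idea:

```python
import math

def window_features(samples, size=50):
    """Split a 1-D accelerometer stream into fixed-size windows and
    compute each window's mean and standard deviation -- simple features
    a classifier can use to separate, say, walking from sitting."""
    feats = []
    for start in range(0, len(samples) - size + 1, size):
        w = samples[start:start + size]
        mean = sum(w) / size
        std = math.sqrt(sum((x - mean) ** 2 for x in w) / size)
        feats.append((mean, std))
    return feats

# Synthetic stream: 50 near-still samples around 1 g, then 50 active ones.
still = [1.0 + 0.01 * math.sin(i) for i in range(50)]
active = [1.0 + 0.5 * math.sin(i) for i in range(50)]
features = window_features(still + active)
print(features)  # the second window shows much larger variation
```

On a wearable, the same computation runs continuously over the live sensor feed, and the (mean, std) pairs, plus richer features, go to the activity model.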
“If we get a similar hit rate in detecting texture in tumors, the potential for early diagnosis is huge,” one scientist says.
Researchers at University College London have developed a new X-ray method that, combined with a deep-learning artificial intelligence (AI) algorithm originally used to detect explosives in luggage, could spot potentially fatal early-stage tumors in humans, according to a report published by MIT Technology Review on Friday.
“Neuromorphic computing could offer a compelling alternative to traditional AI accelerators by significantly improving power and data efficiency for more complex AI use cases, spanning data centers to extreme edge applications.”
Can computer systems develop to the point where they can think creatively, identify people or items they have never seen before, and adjust accordingly — all while working more efficiently, with less power? Intel Labs is betting on it, with a new hardware and software approach using neuromorphic computing, which, according to a recent blog post, “uses new algorithmic approaches that emulate how the human brain interacts with the world to deliver capabilities closer to human cognition.”
While this may sound futuristic, Intel's neuromorphic computing research is already fostering interesting use cases, including adding new voice-interaction commands to Mercedes-Benz vehicles, creating a robotic hand that delivers medications to patients, and developing chips that recognize hazardous chemicals.
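Under the hood, neuromorphic chips are built from spiking neurons. The textbook leaky integrate-and-fire model below (a generic sketch with arbitrary parameters, not a model of Intel's hardware) shows the property that makes such hardware efficient: the neuron does work only when its accumulated input crosses a threshold and an event fires:

```python
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    Each step the membrane potential decays by `leak`, accumulates the
    input current, and emits a spike (then resets) when it crosses
    `threshold`. Between spikes, nothing needs to be communicated.
    """
    v = 0.0
    spikes = []
    for current in inputs:
        v = v * leak + current
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))
# [0, 0, 0, 1, 0, 0, 1]
```

Sparse, event-driven activity like this is why neuromorphic designs can sip power compared with hardware that recomputes every value on every cycle.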
Putting a value on the rate at which civilizations occur might seem to be guesswork, but Drake realized that it can be broken down into more tractable components.
He stated that the total rate is equal to the rate at which suitable stars are formed, multiplied by the fraction of those stars that have planets. This is then multiplied by the number of planets that are capable of bearing life per system, times the fraction of those planets where life gets started, multiplied by the fraction of those where life becomes intelligent, times the fraction of those that broadcast their presence.
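Written out, that breakdown is the Drake equation, N = R* · fp · ne · fl · fi · fc · L. The sketch below evaluates it for one set of inputs; the values are placeholders chosen for illustration, not Drake's own estimates:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Number of detectable civilizations: the rate at which they arise
    (the product of the first six factors) times their average
    broadcasting lifetime L, exactly as described above."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative inputs only -- every factor is genuinely uncertain.
N = drake(R_star=1.5,  # suitable stars formed per year in the galaxy
          f_p=1.0,     # fraction of those stars with planets
          n_e=0.2,     # life-capable planets per planetary system
          f_l=0.5,     # fraction of those planets where life starts
          f_i=0.1,     # fraction of those where life becomes intelligent
          f_c=0.1,     # fraction of those that broadcast their presence
          L=10_000)    # average broadcasting lifetime in years
print(N)
```

Notice that L dominates: with these numbers, halving the average lifetime halves the count of civilizations we could ever hear, which is why the lifetime term attracts so much of the debate.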
The first industrial revolution kicked off with the introduction of steam- and water-powered technology. We have come a long way since then, with the current fourth industrial revolution, or Industry 4.0, focused on using new technology to boost industrial efficiency.
Some of these technologies include the internet of things (IoT), cloud computing, cyber-physical systems, and artificial intelligence (AI). AI is the key driver of Industry 4.0, enabling intelligent machines to self-monitor, interpret, diagnose, and analyze on their own. AI methods such as machine learning (ML), deep learning (DL), natural language processing (NLP), and computer vision (CV) help industries forecast their maintenance needs and cut down on downtime.
However, to ensure the smooth, stable deployment and integration of AI-based systems, the actions and results of these systems must be made comprehensible, or, in other words, “explainable” to experts. To that end, explainable AI (XAI) focuses on developing algorithms that make the results of AI-based systems understandable to humans. Thus, XAI deployment is useful in Industry 4.0.
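One simple, model-agnostic technique in the XAI toolbox is permutation importance: shuffle one input feature and measure how much the model's accuracy drops, revealing which inputs the model actually relies on. A minimal sketch follows; the toy "failure predictor" and its data are invented for illustration:

```python
import random

def accuracy(model, rows, labels):
    """Fraction of rows the model labels correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature, trials=20, seed=0):
    """Shuffle one feature column and report the average drop in
    accuracy. A large drop means predictions genuinely depend on it."""
    rng = random.Random(seed)
    base = accuracy(model, rows, labels)
    drops = []
    for _ in range(trials):
        column = [r[feature] for r in rows]
        rng.shuffle(column)
        shuffled = [dict(r, **{feature: v}) for r, v in zip(rows, column)]
        drops.append(base - accuracy(model, shuffled, labels))
    return sum(drops) / trials

# Toy maintenance model: flag failure when vibration is high.
model = lambda r: r["vibration"] > 0.5
rows = [{"vibration": v, "temp": 20} for v in (0.1, 0.2, 0.8, 0.9)]
labels = [False, False, True, True]

print(permutation_importance(model, rows, labels, "vibration"))
print(permutation_importance(model, rows, labels, "temp"))  # 0.0: unused
```

An engineer reading this output learns that the model's failure alarms hinge on vibration and ignore temperature, exactly the kind of account Industry 4.0 operators need before trusting an automated system.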