“This exoskeleton personalizes assistance as people walk normally through the real world,” said Steve Collins, associate professor of mechanical engineering who leads the Stanford Biomechatronics Laboratory, in a press release. “And it resulted in exceptional improvements in walking speed and energy economy.”

The personalization is enabled by a machine learning algorithm, which the team trained using emulators—that is, machines that collected data on motion and energy expenditure from volunteers who were hooked up to them. The volunteers walked at varying speeds under imagined scenarios, like trying to catch a bus or taking a stroll through a park.

The algorithm drew connections between these scenarios and people’s energy expenditure, applying the connections to learn in real time how to help wearers walk in a way that’s actually useful to them. When a new person puts on the boot, the algorithm tests a different pattern of assistance each time they walk, measuring how their movements change in response. There’s a short learning curve, but on average the algorithm was able to effectively tailor itself to new users in just an hour.
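
As a rough illustration of that trial-and-adapt loop (not Stanford’s actual controller, which is built on models learned from the emulator data described above), the sketch below repeatedly perturbs a hypothetical assistance pattern and keeps whatever lowers a stand-in effort estimate. The two parameters and the `measure_effort` function are invented for illustration.

```python
# A toy human-in-the-loop optimization sketch, NOT the Stanford exoskeleton's
# algorithm: propose a candidate assistance pattern, observe an effort estimate,
# and keep the best pattern found so far.
import random

def measure_effort(torque_peak, torque_timing):
    # Placeholder: a real system would estimate effort from motion and energy
    # sensors while the wearer walks with this candidate assistance applied.
    return (torque_peak - 0.6) ** 2 + (torque_timing - 0.5) ** 2 + random.gauss(0, 0.01)

best_params = (0.5, 0.5)          # (peak torque, timing), both normalized to [0, 1]
best_effort = float("inf")

for _ in range(60):               # roughly "one short walk per candidate pattern"
    candidate = tuple(min(1.0, max(0.0, p + random.gauss(0, 0.1))) for p in best_params)
    effort = measure_effort(*candidate)
    if effort < best_effort:      # keep the pattern that lowered estimated effort
        best_params, best_effort = candidate, effort

print("assistance pattern selected:", best_params)
```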

Molecules could make useful systems for quantum computers, but they must contain individually addressable, interacting quantum bit centers. In the journal Angewandte Chemie, a team of researchers has now presented a molecular model with three different coupled qubit centers. As each center is spectroscopically addressable, quantum information processing (QIP) algorithms could be developed for this molecular multi-qubit system for the first time, the team says.

Conventional computers compute using bits, while quantum computers use quantum bits (or qubits for short). A conventional bit can only represent 0 or 1, but a qubit can hold a superposition of both states at the same time. These superposed states mean that a quantum computer can carry out calculations in parallel, and with a number of qubits it has the potential to be much faster than a standard computer.
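
In the standard notation of quantum computing (textbook background rather than anything specific to the paper), that superposition is written as:

```latex
% A single qubit is a weighted superposition of the two basis states:
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
% An n-qubit register is a superposition over all 2^n basis states, which is
% what enables the parallelism described above:
\[
  \lvert \Psi \rangle = \sum_{x \in \{0,1\}^{n}} c_{x} \, \lvert x \rangle ,
  \qquad \sum_{x} \lvert c_{x} \rvert^{2} = 1 .
\]
```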

However, in order for a quantum computer to perform these calculations, it must be able to evaluate and manipulate the multi-qubit information. The research teams of Alice Bowen and Richard Winpenny at the University of Manchester, UK, together with their colleagues, have now produced a molecular model system with several separate qubit units whose states can be detected spectroscopically and switched through their interactions with one another.

Matrix multiplication is at the heart of many machine learning breakthroughs, and it just got faster—twice. Last week, DeepMind announced it had discovered a more efficient way to perform matrix multiplication, breaking a 50-year-old record. This week, two Austrian researchers at Johannes Kepler University Linz claim they have bested that new record by one step.

In 1969, the German mathematician Volker Strassen discovered the algorithm that held the record until now. His method multiplies two 2×2 matrices using 7 scalar multiplications instead of the usual 8; applied recursively, it multiplies two 4×4 matrices in 49 multiplications, compared with the 64 required by the traditional schoolroom method.
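
For reference, here is a minimal Python sketch of Strassen’s recursion for square matrices whose size is a power of two; it is a textbook rendering with illustrative helper names, not DeepMind’s or the Linz researchers’ new scheme.

```python
# Strassen's algorithm for n-by-n matrices, n a power of two, using plain lists.
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    n = len(A)
    if n == 1:                      # base case: a single scalar multiplication
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # split each matrix into four h-by-h quadrants
    a11 = [row[:h] for row in A[:h]]; a12 = [row[h:] for row in A[:h]]
    a21 = [row[:h] for row in A[h:]]; a22 = [row[h:] for row in A[h:]]
    b11 = [row[:h] for row in B[:h]]; b12 = [row[h:] for row in B[:h]]
    b21 = [row[:h] for row in B[h:]]; b22 = [row[h:] for row in B[h:]]
    # the seven recursive products (instead of the eight a schoolbook split needs)
    m1 = strassen(add(a11, a22), add(b11, b22))
    m2 = strassen(add(a21, a22), b11)
    m3 = strassen(a11, sub(b12, b22))
    m4 = strassen(a22, sub(b21, b11))
    m5 = strassen(add(a11, a12), b22)
    m6 = strassen(sub(a21, a11), add(b11, b12))
    m7 = strassen(sub(a12, a22), add(b21, b22))
    # recombine the products into the quadrants of the result
    c11 = add(sub(add(m1, m4), m5), m7)
    c12 = add(m3, m5)
    c21 = add(m2, m4)
    c22 = add(sub(add(m1, m3), m2), m6)
    top = [r1 + r2 for r1, r2 in zip(c11, c12)]
    bottom = [r1 + r2 for r1, r2 in zip(c21, c22)]
    return top + bottom
```

Multiplying two 4×4 matrices this way triggers exactly 49 scalar multiplications: 7 recursive calls at the top level, each performing 7 at the base.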

Now the Cornell Laboratory for Intelligent Systems and Controls, which developed the algorithms, is collaborating with the Big Red hockey team to expand the research project’s applications.

Representing Cornell University, the Big Red men’s ice hockey team is a National Collegiate Athletic Association Division I college ice hockey program. Cornell Big Red competes in the ECAC Hockey conference and plays its home games at Lynah Rink in Ithaca, New York.

An interdisciplinary team of researchers has developed a blueprint for creating algorithms that more effectively incorporate ethical guidelines into artificial intelligence (AI) decision-making programs. The project was focused specifically on technologies in which humans interact with AI programs, such as virtual assistants or “carebots” used in healthcare settings.

“Technologies like carebots are supposed to help ensure the safety and comfort of hospital patients, and other people who require health monitoring or physical assistance,” says Veljko Dubljević, corresponding author of a paper on the work and an associate professor in the Science, Technology & Society program at North Carolina State University. “In practical terms, this means these technologies will be placed in situations where they need to make ethical judgments.”

“For example, let’s say that a carebot is in a setting where two people require medical assistance. One patient is unconscious but requires urgent care, while the second patient is in less urgent need but demands that the carebot treat him first. How does the carebot decide which patient is assisted first? Should the carebot even treat a patient who is unconscious and therefore unable to consent to receiving the treatment?”

An artificial intelligence created by the firm DeepMind has discovered a new way to multiply matrices of numbers, the first such advance in over 50 years. The find could boost some computation speeds by up to 20 per cent, as a range of software relies on carrying out the task at great scale.

Matrix multiplication – where two grids of numbers are multiplied together – is a fundamental computing task used in virtually all software to some extent, but particularly so in graphics, AI and scientific simulations. Even a small improvement in the efficiency of these algorithms could bring large performance gains, or significant energy savings.
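
To see where that cost comes from, the schoolbook product below performs one scalar multiplication for every combination of row, column, and inner index, i.e. n³ of them for two n×n matrices (64 when n = 4); this baseline is what the newly discovered algorithms chip away at.

```python
# Schoolbook matrix multiplication: n * n * n scalar multiplications in total.
def matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):              # row of A
        for j in range(n):          # column of B
            for k in range(n):      # one scalar multiplication per (i, j, k)
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```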

UW Medicine researchers have found that algorithms are as good as trained human evaluators at identifying red-flag language in text messages from people with serious mental illness. This opens a promising area of study that could aid psychiatry training and help address the scarcity of care.

The findings were published in late September in the journal Psychiatric Services.

Text messages are increasingly part of mental health care and evaluation, but these remote psychiatric interventions can lack the emotional reference points that therapists use to navigate in-person conversations with patients.
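
As a loose illustration of the task (not the UW Medicine model or its training data), the sketch below fits a bag-of-words classifier to flag concerning language; the example messages and labels are invented purely for illustration.

```python
# A toy red-flag text classifier, NOT the model from the Psychiatric Services paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples; a real system would use clinically annotated messages.
messages = [
    "I had a decent day and went for a walk",
    "I can't see the point of going on anymore",
    "Looking forward to seeing my sister this weekend",
    "Nobody would notice if I was gone",
]
labels = [0, 1, 0, 1]   # 1 = contains red-flag language (hypothetical annotations)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# The model's guess for a new message (with so little data, treat this as a demo only).
print(model.predict(["I don't want to be here anymore"]))
```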

In an effort to streamline the process of diagnosing patients with multiple sclerosis and Parkinson’s disease, researchers used digital cameras to capture changes in gait—a symptom of these diseases—and developed a machine-learning algorithm that can differentiate those with MS and PD from people without those neurological conditions.

Their findings are reported in the IEEE Journal of Biomedical and Health Informatics.

The goal of the research was to make the process of diagnosing these diseases more accessible, said Manuel Hernandez, a University of Illinois Urbana-Champaign professor of kinesiology, who led the work with graduate student Rachneet Kaur and Richard Sowers, a professor of industrial and enterprise systems engineering and of mathematics.
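
A minimal sketch of the general recipe follows, assuming per-subject gait features have already been extracted from the camera footage; the feature names, the random placeholder data, and the classifier choice are illustrative and not the pipeline described in the IEEE paper.

```python
# Toy gait-based classification sketch, NOT the authors' IEEE JBHI pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-subject features: mean stride time, stride-time variability,
# mean step length, step-length asymmetry (random placeholders here).
X = rng.normal(size=(60, 4))
y = rng.integers(0, 3, size=60)   # placeholder labels: 0 = control, 1 = MS, 2 = PD

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# Cross-validated accuracy; chance-level here, since the features are random noise.
print(cross_val_score(clf, X, y, cv=5).mean())
```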

Over the last decade, artificial intelligence (AI) has become embedded in every aspect of our society and lives. From chatbots and virtual assistants like Siri and Alexa to automated industrial machinery and self-driving cars, it’s hard to ignore its impact.

Today, the technology most commonly used to achieve AI is machine learning: advanced software algorithms designed to carry out one specific task, such as answering questions, translating languages or navigating a journey, and to become increasingly good at it as they are exposed to more and more data.

Worldwide, spending by governments and businesses on AI technology will top $500 billion in 2023, according to IDC research.

The field of artificial intelligence (AI) is evolving faster than ever. Here, we look at some of the major trends in artificial intelligence and machine learning in 2023.

A leading artificial intelligence expert is once again shooting from the hip in a cryptic Twitter poll.

In the poll, OpenAI chief scientist Ilya Sutskever asked his followers whether advanced super-AIs should be made “deeply obedient” to their human creators, or if these godlike algorithms should “truly deeply [love] humanity.”

In other words, he seems to be pondering whether we should treat superintelligences like pets — or the other way around. And that’s interesting, coming from the head researcher at the firm behind GPT-3 and DALL-E, two of the most impressive machine learning systems available today.