
Existing numerical computing libraries lack native support for physical units, limiting their application in rigorous scientific computing. Here, the authors developed SAIUnit, which integrates physical units and unit-aware mathematical functions and transformations into numerical computing libraries for artificial intelligence-driven scientific computing.
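The core idea of unit-aware computation can be illustrated with a minimal, self-contained sketch. This is a toy `Quantity` class invented for illustration, not SAIUnit's actual API: values carry dimension exponents, so physically inconsistent operations fail loudly instead of silently producing nonsense.

```python
# Toy illustration of unit-aware arithmetic (NOT SAIUnit's API):
# a Quantity carries dimension exponents (length, time) alongside
# its value, so adding a length to a time raises an error, while
# dividing a length by a time yields a speed.

class Quantity:
    def __init__(self, value, dims):
        self.value = value   # numeric magnitude
        self.dims = dims     # (length_exponent, time_exponent)

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError(f"cannot add dims {self.dims} to {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __truediv__(self, other):
        # dividing quantities subtracts dimension exponents
        dims = tuple(a - b for a, b in zip(self.dims, other.dims))
        return Quantity(self.value / other.value, dims)

    def __repr__(self):
        return f"Quantity({self.value}, dims={self.dims})"

distance = Quantity(10.0, (1, 0))   # 10 m
duration = Quantity(2.0, (0, 1))    # 2 s
speed = distance / duration         # 5 m/s -> dims (1, -1)
```

A real unit system adds scale factors (km vs. m), more base dimensions, and unit-aware versions of mathematical functions, but the dimension bookkeeping above is the essential mechanism.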

Gödel’s incompleteness theorem is used by both advocates and adversaries of strong AI to show that computers can(not) perform the same feats as humans. This article extends the construction through which Gödel proved his theorem, in order to allow a broader interpretation, showing that neither side has exploited its arguments to the fullest extent, and that the evidence can never be conclusive.

Dr.ir. C.J.B. Jongeneel & prof.dr. H. Koppelaar, Delft University of Technology, Faculty of Technical Mathematics and Informatics, Section of Knowledge Based Systems.

1 Introduction

A research team from the Institute of Statistical Mathematics and Panasonic Holdings Corporation has developed a machine learning algorithm, ShotgunCSP, that enables fast and accurate prediction of crystal structures from material compositions. The algorithm achieved world-leading performance in crystal structure prediction benchmarks.

Crystal structure prediction seeks to identify the stable or metastable crystal structures for any given chemical compound adopted under specific conditions. Traditionally, this process relies on iterative evaluations using time-consuming first-principles calculations and solving energy minimization problems to find stable atomic configurations. This challenge has been a cornerstone of materials science since the early 20th century.
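As a toy illustration of the energy-minimization step (not the first-principles calculations used in practice), gradient descent on a Lennard-Jones pair potential finds the equilibrium separation of two atoms, which is known analytically to sit at r = 2^(1/6) σ ≈ 1.12 σ:

```python
# Toy energy minimization: two atoms interacting via a Lennard-Jones
# potential V(r) = 4 * ((1/r)**12 - (1/r)**6) (epsilon = sigma = 1).
# Gradient descent drives the separation r toward the stable minimum,
# analytically r = 2**(1/6) ~ 1.122. This only sketches the idea;
# real crystal structure prediction minimizes the energy of many
# atoms using expensive first-principles (e.g. DFT) energies.

def dV_dr(r):
    # derivative of the Lennard-Jones potential
    return 4 * (-12 / r**13 + 6 / r**7)

r = 1.5                      # initial guess for the separation
for _ in range(2000):
    r -= 0.01 * dV_dr(r)     # gradient-descent step

print(round(r, 3))           # converges to ~1.122
```

The hard part in real crystal structure prediction is that the energy landscape over many atomic coordinates has enormous numbers of such minima, and each energy evaluation is costly.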

Recently, advancements in computational technology and generative AI have enabled new approaches in this field. However, for large-scale or complex systems, the exhaustive exploration of vast phase spaces demands enormous computational resources, making this an unresolved issue in materials science.

Scientists apply principles of math and physics to unravel the mystery of how the endoplasmic reticulum, an organelle vital to cellular life, constantly reshapes and reorganizes itself. As a second-year Ph.D. student and physicist, Zuben Scott hadn’t thought much about the endoplasmic reticulum.

Penn Engineers have developed the first programmable chip that can train nonlinear neural networks using light—a breakthrough that could dramatically speed up AI training, reduce energy use and even pave the way for fully light-powered computers.

While today’s AI chips are electronic and rely on electricity to perform calculations, the new chip is photonic, meaning it uses beams of light instead. Described in Nature Photonics, the chip reshapes how light behaves to carry out the nonlinear mathematics at the heart of modern AI.

“Nonlinear functions are critical for training neural networks,” says Liang Feng, Professor in Materials Science and Engineering (MSE) and in Electrical and Systems Engineering (ESE), and the paper’s senior author. “Our aim was to make this happen in photonics for the first time.”
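Why nonlinearity matters can be seen with a small, generic sketch (this is standard neural-network math, not specific to the photonic chip): stacking purely linear layers collapses into a single linear map, so without a nonlinear function between layers, depth adds no expressive power.

```python
# Composing two linear layers W2 @ (W1 @ x) always equals a single
# linear layer (W2 @ W1) @ x, so a "linear-only" deep network is no
# more expressive than one layer. A nonlinearity such as ReLU
# between the layers breaks this collapse.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def relu(x):
    return [max(0.0, v) for v in x]

W1 = [[1.0, 2.0], [3.0, 4.0]]
W2 = [[0.5, -1.0], [2.0, 0.0]]
x = [1.0, -1.0]

two_linear_layers = matvec(W2, matvec(W1, x))
single_layer = matvec(matmul(W2, W1), x)      # identical result
with_relu = matvec(W2, relu(matvec(W1, x)))   # genuinely different
```

Implementing the nonlinear step optically, rather than converting back to electronics, is the hard part the Penn team addresses.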

Earth rotates, the Sun rotates, the Milky Way rotates – and a new model suggests the entire Universe could be rotating. If confirmed, it could ease a significant tension in cosmology.

The Universe is expanding, but exactly how fast is a contentious question. Two different methods of measurement return two very different speeds – and as the measurements become more precise, each becomes more certain. This discrepancy is known as the Hubble tension, and it’s reaching crisis levels in physics.
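The size of the disagreement can be put in standard deviations. Using approximate published values (roughly 67.4 ± 0.5 km/s/Mpc from the cosmic microwave background versus roughly 73.0 ± 1.0 km/s/Mpc from local distance-ladder measurements; the exact figures vary by analysis):

```python
# Rough size of the Hubble tension, using approximate published
# values: early-Universe (CMB) estimate ~67.4 +/- 0.5 km/s/Mpc,
# late-Universe (distance ladder) estimate ~73.0 +/- 1.0 km/s/Mpc.
# Treating the two uncertainties as independent Gaussians gives
# the tension in units of the combined standard deviation.

h0_early, err_early = 67.4, 0.5
h0_late, err_late = 73.0, 1.0

sigma = abs(h0_late - h0_early) / (err_early**2 + err_late**2) ** 0.5
print(round(sigma, 1))   # roughly 5 sigma
```

A disagreement of around five standard deviations is far beyond what measurement noise should produce, which is why the tension is described as reaching crisis levels.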

So for a new study, physicists in Hungary and the US added a small rotation to a model of the Universe – and this mathematical massage seemed to quickly ease the tension.

Quantum physics already feels like a puzzle, but now scientists have made it more literal. A team of mathematicians from the University of Colorado Boulder has designed a quantum Rubik’s cube, with infinite possible states and some weird new moves available to solve it.

The classic (and classical) Rubik’s cube is what’s known as a permutation puzzle, which requires players to perform certain actions to rearrange one of a number of possible permutations into a ‘solved’ state.

In the case of the infamous cube, that’s around 43 quintillion possible combinations of small colored blocks being sorted into six consistently colored faces through a series of constrained movements.
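That figure follows from a standard counting argument over the cube's pieces:

```python
from math import factorial

# The number of reachable configurations of a 3x3x3 Rubik's cube:
# 8 corner pieces can be arranged in 8! ways and twisted in 3^8 ways,
# 12 edge pieces in 12! ways and flipped in 2^12 ways, but parity and
# orientation constraints make only 1 in 12 of those combinations
# reachable by legal moves.
positions = (factorial(8) * 3**8 * factorial(12) * 2**12) // 12

print(positions)   # 43252003274489856000 -- about 43 quintillion
```

The quantum version replaces this finite set of permutations with a continuum of superposed states, which is what makes its state space infinite.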

Quantum mechanics is, at least at first glance and at least in part, a mathematical machine for predicting the behaviors of microscopic particles — or, at least, of the measuring instruments we use to explore those behaviors — and in that capacity, it is spectacularly successful: in terms of power and precision, head and shoulders above any theory we have ever had. Mathematically, the theory is well understood; we know what its parts are, how they are put together, and why, in the mechanical sense (i.e., in a sense that can be answered by describing the internal grinding of gear against gear), the whole thing performs the way it does, how the information that gets fed in at one end is converted into what comes out the other. The question of what kind of a world it describes, however, is controversial; there is very little agreement, among physicists and among philosophers, about what the world is like according to quantum mechanics. Minimally interpreted, the theory describes a set of facts about the way the microscopic world impinges on the macroscopic one, how it affects our measuring instruments, described in everyday language or the language of classical mechanics. Disagreement centers on the question of what a microscopic world, which affects our apparatuses in the prescribed manner, is, or even could be, like intrinsically; or how those apparatuses could themselves be built out of microscopic parts of the sort the theory describes.[1]

That is what an interpretation of the theory would provide: a proper account of what the world is like according to quantum mechanics, intrinsically and from the bottom up. The problems with giving an interpretation (not just a comforting, homey sort of interpretation, i.e., not just an interpretation according to which the world isn’t too different from the familiar world of common sense, but any interpretation at all) are dealt with in other sections of this encyclopedia. Here, we are concerned only with the mathematical heart of the theory, the theory in its capacity as a mathematical machine, and — whatever is true of the rest of it — this part of the theory makes exquisitely good sense.
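A glimpse of that mathematical machinery in its simplest case: a single two-level system (qubit) is described by a normalized vector of complex amplitudes, and the Born rule turns amplitudes into measurement probabilities. This is a minimal sketch of the formalism, not a full treatment:

```python
# Minimal sketch of the quantum formalism for a qubit: the state is
# a normalized pair of complex amplitudes (alpha, beta), and the
# Born rule says the probability of each measurement outcome is the
# squared magnitude of its amplitude, so |alpha|^2 + |beta|^2 = 1.

alpha = complex(3 / 5, 0)    # amplitude for outcome 0
beta = complex(0, 4 / 5)     # amplitude for outcome 1 (imaginary phase)

p0 = abs(alpha) ** 2         # probability of measuring 0 -> ~0.36
p1 = abs(beta) ** 2          # probability of measuring 1 -> ~0.64

assert abs(p0 + p1 - 1.0) < 1e-12   # the state is normalized
```

Everything contentious in interpretation begins after this point: the formalism delivers probabilities for outcomes flawlessly, while staying silent on what the amplitudes describe intrinsically.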

A trio of animal physiologists at the University of Tübingen, in Germany, has found that at least one species of crow has the ability to recognize geometric regularity. In their study published in the journal Science Advances, Philipp Schmidbauer, Madita Hahn and Andreas Nieder conducted several experiments that involved testing crows on their ability to recognize geometric shapes.

Recognizing regularity in geometric shapes means being able to pick out one that is different from others in a group—picking out a plastic star, for example, when it is placed among several plastic moons. Testing for the ability to recognize geometric regularity has been done with many animals, including chimps and bonobos. Until now, the ability had never been observed in any creature other than humans.

Because of that, the team started with a bit of skepticism when they began testing carrion crows. In their work, the testing was done using computer screens—the birds were asked to peck the outlier in a group; if they chose correctly, they got a food treat. The team chose to test carrion crows because prior experiments have shown them to have exceptional intelligence and mathematical capabilities.

Quantum computers promise to outperform today’s traditional computers in many areas of science, including chemistry, physics, and cryptography, but proving they will be superior has been challenging. The most well-known problem in which quantum computers are expected to have the edge, a trait physicists call “quantum advantage,” involves factoring large numbers, a hard math problem that lies at the root of securing digital information.

In 1994, Caltech alumnus Peter Shor (BS ‘81), then at Bell Labs, developed a quantum algorithm that would easily factor a large number in just seconds, whereas this type of problem could take a classical computer millions of years. Ultimately, when quantum computers are ready and working—a goal that researchers say may still be a decade or more away—these machines will be able to quickly factor the large numbers behind cryptography schemes.
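The number theory at the heart of Shor's algorithm can be sketched classically: finding the period r of a^x mod N reveals factors of N via greatest common divisors. The quantum speedup lies entirely in finding that period; the brute-force search below works only for tiny N:

```python
from math import gcd

# Classical sketch of the arithmetic behind Shor's algorithm:
# given N and a coprime base a, find the period r of a^x mod N
# (the smallest r > 0 with a^r = 1 mod N). If r is even, then
# gcd(a^(r/2) - 1, N) and gcd(a^(r/2) + 1, N) expose factors of N.
# Brute-force period finding like this is hopeless for the large
# N used in cryptography; that is the step the quantum computer
# performs exponentially faster.

def find_period(a, N):
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7
r = find_period(a, N)          # r = 4 for a = 7, N = 15
half = pow(a, r // 2, N)       # 7^2 mod 15 = 4
factors = sorted({gcd(half - 1, N), gcd(half + 1, N)})

print(r, factors)              # 4 [3, 5]
```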

But, besides Shor’s algorithm, researchers have had a hard time coming up with problems where quantum computers will have a proven advantage. Now, reporting in a recent Nature Physics study titled “Local minima in quantum systems,” a Caltech-led team of researchers has identified a common physics problem that these futuristic machines would excel at solving. The problem has to do with simulating how materials cool down to their lowest-energy states.
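The local-minima issue can be illustrated with a toy one-dimensional energy landscape: a tilted double well, where simple downhill relaxation settles into whichever valley it starts near, not necessarily the lowest-energy one. This is a classical cartoon of the difficulty, not the paper's quantum setting:

```python
# Toy illustration of getting stuck while "cooling" to low energy.
# The tilted double-well energy E(x) = (x^2 - 1)^2 + 0.3*x has its
# global minimum near x = -1.04 and a higher local minimum near
# x = +0.96. Plain gradient descent relaxes into whichever valley
# the starting point lies in.

def dE_dx(x):
    # derivative of the tilted double-well energy
    return 4 * x * (x**2 - 1) + 0.3

def relax(x, steps=5000, lr=0.01):
    for _ in range(steps):
        x -= lr * dE_dx(x)
    return x

stuck = relax(0.5)    # right-hand valley -> x ~ +0.96 (local minimum)
best = relax(-0.5)    # left-hand valley  -> x ~ -1.04 (global minimum)
```

In realistic many-body materials the landscape has vastly more dimensions and minima, which is where the study argues quantum computers gain their edge.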