Researchers at TU Wien and FU Berlin have, for the first time, measured what happens when quantum information is lost, shedding new light on the deep links between quantum physics, thermodynamics, and information theory. At first glance, heat and information seem like completely unrelated ideas.
The quantum physics community is buzzing with excitement after researchers at Rice University finally observed a phenomenon that had eluded scientists for over 70 years. The breakthrough, recently published in Science Advances, is known as the superradiant phase transition (SRPT); it represents a significant milestone in quantum mechanics and opens extraordinary possibilities for future technological applications.
In 1954, physicist Robert H. Dicke proposed an intriguing theory suggesting that, under specific conditions, large groups of excited atoms could emit light in perfect synchronization rather than independently. This collective behavior, termed superradiance, was predicted to be capable of driving a complete phase transition into an entirely new phase of matter.
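For context, the textbook form of Dicke's model (a standard formulation, not quoted from the Rice paper) couples N two-level atoms of splitting ω₀ to a single light mode ω; pushing the coupling past a critical value is what produces the superradiant phase:

```latex
% Standard Dicke Hamiltonian (textbook form, not taken from the paper):
% a single cavity mode \omega, N identical two-level atoms of splitting
% \omega_0, collective spin operators J_z and J_x.
H = \hbar\omega\, a^\dagger a
    + \hbar\omega_0\, J_z
    + \frac{2\hbar\lambda}{\sqrt{N}} \left( a + a^\dagger \right) J_x
% In the thermodynamic limit (N \to \infty) the superradiant phase
% transition occurs at the critical coupling
\lambda_c = \frac{\sqrt{\omega\,\omega_0}}{2}
```

Below λ_c the ground state holds essentially no photons; above it the field acquires a macroscopic amplitude, which is the new phase of matter Dicke's analysis pointed to.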
For over seven decades, this theoretical concept remained largely confined to equations and speculation. The primary obstacle was the infamous “no-go theorem,” which seemingly prohibited such transitions in conventional light-based systems. This theoretical barrier frustrated generations of quantum physicists attempting to observe this elusive phenomenon.
Over the past decades, physicists have developed increasingly sophisticated and precise clocks to reliably time physical processes that unfold over very short durations, helping to validate theoretical predictions. These include so-called quantum clocks: timekeeping systems that leverage the principles of quantum mechanics to measure time with extremely high precision.
A new study led by researchers at the Universities of Oxford, Cambridge and Manchester has achieved a major advance in quantum materials, developing a method to precisely engineer single quantum defects in diamond—an essential step toward scalable quantum technologies. The results have been published in the journal Nature Communications.
Stephen Wolfram joins Brian Greene to explore the computational basis of space, time, general relativity, quantum mechanics, and reality itself.
This program is part of the Big Ideas series, supported by the John Templeton Foundation.
Participant: Stephen Wolfram. Moderator: Brian Greene.
0:00:00 — Introduction
01:23 — Unifying Fundamental Science with Advanced Mathematical Software
13:21 — Is It Possible to Prove a System’s Computational Reducibility?
24:30 — Uncovering Einstein’s Equations Through Software Models
37:00 — Is Connecting Space and Time a Mistake?
49:15 — Generating Quantum Mechanics Through a Mathematical Network
01:06:40 — Can Graph Theory Create a Black Hole?
01:14:47 — The Computational Limits of Being an Observer
01:25:54 — The Elusive Nature of Particles in Quantum Field Theory
01:37:45 — Is Mass a Discoverable Concept Within Graph Space?
01:48:50 — The Mystery of the Number Three: Why Do We Have Three Spatial Dimensions?
01:59:15 — Unraveling the Mystery of Hawking Radiation
02:10:15 — Could You Ever Imagine a Different Career Path?
02:16:45 — Credits
Does the use of computer models in physics change the way we see the universe? How far-reaching are the implications of computational irreducibility? Are observer limitations key to the way we conceive the laws of physics? In this episode we take on the difficult yet beautiful topic of modelling complex systems like nature and the universe computationally, and how, beyond a low level of complexity, all systems seem to become equally unpredictable. We have a whole episode in this series on complexity theory in biology and nature, but today we take a more physics- and computation-oriented slant. Another key element of this episode is observer theory: if we want to understand the laws of physics we have worked out from our environment, we have to take into account the perceptual limitations of our species’ context and perspective. Those laws are not, and cannot be, fixed and universal; they will always be perspective-bound, within a multitude of alternative branches of possible reality with alternative possible computational rules. We then connect this multicomputational approach to a reinterpretation of entropy and the second law of thermodynamics. The fact that my guest has been building on these ideas for over 40 years, creating a computer language and AI solutions to map his deep theories of computational physics, makes him the ideal guest to help us unpack this topic. He is physicist, computer scientist and tech entrepreneur Stephen Wolfram. In 1987 he left academia at Caltech and Princeton behind and devoted himself to his computer science intuitions at his company, Wolfram Research. He has published many blog articles about his ideas and written many influential books, including “A New Kind of Science”, and more recently “A Project to Find the Fundamental Theory of Physics”, “Computer Modelling and Simulation of Dynamic Systems”, and, just out in 2023, “The Second Law”, about the mystery of entropy. One of the most wonderful things about Stephen Wolfram is that, despite his visionary insight into reality, he loves to be ‘in the moment’ with his thinking: engaging in Socratic dialogue, staying open to perspectives other than his own, and allowing his old ideas to be updated when something contradicts them. Given how quickly the fields of physics and computer science are evolving, his humility and conceptual flexibility give us a fine example of how we should update how we do science as we go.
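To make computational irreducibility a little more concrete, here is a minimal sketch (my own illustration, not material from the episode) of Wolfram's Rule 30 cellular automaton, the canonical example: as far as anyone knows, there is no shortcut to its far-future state other than running every intermediate step.

```python
# Rule 30: each new cell is left XOR (center OR right). From a single
# seed cell this produces a pattern with no known closed-form shortcut,
# so predicting step N seems to require actually doing N updates.

def rule30_step(cells):
    """One synchronous update of Rule 30 on a ring of cells."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run(width=64, steps=32):
    cells = [0] * width
    cells[width // 2] = 1          # single seed cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)

if __name__ == "__main__":
    run()
```

Each row depends on the whole previous row, so predicting, say, the center cell a million steps out appears to require doing the full million updates; that is the unpredictability-despite-simple-rules the episode circles around.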
What we discuss:
00:00 Intro
07:45 The history of scientific models of reality: structural, mathematical and computational
14:40 Late 2010s: a shift to computational models of systems
20:20 The Principle of Computational Equivalence (PCE)
24:45 Computational Irreducibility — the reason you can’t predict the outcome in advance
27:50 The importance of the passage of time to Consciousness
28:45 Irreducibility and the limits of science
33:30 Gödel’s Incompleteness Theorem meets Computational Irreducibility
42:20 Observer Theory and the Wolfram Physics Project
45:30 Modelling the relations between discrete units of Space: Hypergraphs
47:30 The progress of time is the computational process that is updating the network of relations
50:30 We ‘make’ space
51:30 Branchial Space — different quantum histories of the world, branching and merging
54:30 We perceive space and matter to be continuous because we’re very big compared to the discrete elements
56:30 Branchial Space vs. the Many Worlds interpretation
58:50 Rulial Space: all possible rules of all possible interconnected branches
01:07:30 Wolfram Language bridges how humans think about their perspective with what is computationally possible
01:11:00 Computational intelligence is everywhere in the universe, e.g. the weather
01:19:30 The measurement problem of QM meets computational irreducibility and observer theory
01:20:30 Entanglement explained — common ancestors in branchial space
01:32:40 Inviting Stephen back for a separate episode on AI safety, safety solutions and applications for science, as we didn’t have time
01:37:30 At the molecular level the laws of physics are reversible
01:40:30 What looks random to us in entropy is actually full of data
01:45:30 Entropy defined in computational terms
01:50:30 If we ever overcame our finite minds, there would be no coherent concept of existence
01:51:30 Parallels between modern physics and ancient Eastern mysticism and cosmology
01:55:30 Reductionism in an irreducible world: saying a lot from very little input
References: “The Second Law: Resolving the Mystery of the Second Law of Thermodynamics”, Stephen Wolfram.
In a groundbreaking discovery, physicists from Aalto University have unveiled a new framework that unites gravity with the forces described by the Standard Model of particle physics, potentially bringing us closer to the long-awaited “Theory of Everything.” The discovery does not just reframe gravity; it offers a fresh perspective on how the fundamental forces of nature might work together.
Scientists have developed an exact approach to a key quantum error correction problem once believed to be unsolvable, and have shown that what appeared to be hardware-related errors may in fact be due to suboptimal decoding.
The new algorithm, called PLANAR, achieved a 25% reduction in logical error rates when applied to Google Quantum AI’s experimental data. This discovery revealed that a quarter of what the tech giant attributed to an “error floor” was actually caused by their decoding method, rather than genuine hardware limitations.
Quantum computers are extraordinarily sensitive to errors, making quantum error correction essential for practical applications.
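The summary does not describe how PLANAR works, but the underlying point, that a suboptimal decoder inflates the apparent logical error rate, can be shown with a toy classical analogue. The sketch below is my own construction (the repetition code and flip probabilities are hypothetical, not Google's setup): it compares majority voting against exact maximum-likelihood decoding when the bits have unequal noise.

```python
import random

# Toy analogue (not the PLANAR algorithm): a 3-bit repetition code where
# each bit flips with a different, known probability. Majority voting
# ignores those differences; exact maximum-likelihood (ML) decoding
# weighs every received word by its probability under the noise model.
# If you only ever run majority voting, the gap between the two decoders
# looks like an irreducible "error floor".

P = [0.05, 0.40, 0.40]  # hypothetical per-bit flip probabilities

def likelihood(sent, received):
    """Probability of observing `received` given logical bit `sent`."""
    pr = 1.0
    for p, r in zip(P, received):
        pr *= p if r != sent else (1.0 - p)
    return pr

def ml_decode(received):
    # Pick whichever logical bit makes the observation most likely.
    return 0 if likelihood(0, received) >= likelihood(1, received) else 1

def majority_decode(received):
    return 1 if sum(received) >= 2 else 0

def logical_error_rate(decoder, trials=200_000):
    errors = 0
    for _ in range(trials):
        sent = random.randint(0, 1)
        received = [sent ^ (random.random() < p) for p in P]
        if decoder(received) != sent:
            errors += 1
    return errors / trials

random.seed(0)
print(f"majority vote : {logical_error_rate(majority_decode):.3f}")  # ~0.18
print(f"max likelihood: {logical_error_rate(ml_decode):.3f}")        # ~0.05
```

With these numbers the majority-vote error rate sits near 0.18 while the exact decoder reaches roughly 0.05; everything above 0.05 is decoder-induced rather than hardware noise, which is the same distinction the PLANAR analysis draws for Google's “error floor”.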
UBC researchers are proposing a solution to a key hurdle in quantum networking: a device that can “translate” microwave signals into optical signals and vice versa.
The technology could serve as a universal translator for quantum computers—enabling them to talk to one another over long distances and converting up to 95% of a signal with virtually no noise. And it all fits on a silicon chip, the same material found in everyday computers.
“It’s like finding a translator that gets nearly every word right, keeps the message intact and adds no background chatter,” says study author Mohammad Khalifa, who conducted the research during his Ph.D. at UBC’s faculty of applied science and the Stewart Blusson Quantum Matter Institute (SBQMI).
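To put the 95% figure in perspective, here is a back-of-the-envelope sketch (my own arithmetic, with a hypothetical link layout that is not from the study): conversion losses compound, so a signal converted several times retains the per-conversion efficiency raised to the number of conversions.

```python
# Hypothetical link budget: a quantum network link that converts
# microwave <-> optical at each end uses the transducer twice per
# round of communication, so efficiency compounds multiplicatively.
per_conversion = 0.95  # reported peak conversion efficiency

for conversions in (1, 2, 4):
    retained = per_conversion ** conversions
    print(f"{conversions} conversion(s): {retained:.1%} of signal retained")
# 1 -> 95.0%, 2 -> 90.2%, 4 -> 81.5%
```

Even at 95% per pass, a multi-hop path sheds signal quickly, which is why pushing the per-conversion efficiency (and keeping added noise near zero) matters so much for long-distance quantum links.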