General relativity is part of the wide-ranging physical theory of relativity developed by the German-born physicist Albert Einstein, who presented it in 1915. It explains gravity through the way space can ‘curve’ or, to put it more accurately, it associates the force of gravity with the changing geometry of space-time.
The mathematical equations of Einstein’s general theory of relativity, tested time and time again, are currently the most accurate way to predict gravitational interactions, replacing those developed by Isaac Newton several centuries prior.
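For reference, the field equations at the core of the theory relate the curvature of space-time on the left-hand side to its matter and energy content on the right-hand side:

\[ G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu} \]

where \(G_{\mu\nu}\) is the Einstein curvature tensor, \(g_{\mu\nu}\) the metric, \(\Lambda\) the cosmological constant, \(T_{\mu\nu}\) the stress-energy tensor, \(G\) Newton's gravitational constant, and \(c\) the speed of light.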
Over the last century, many experiments have confirmed the validity of both special and general relativity. In the first major test of general relativity, astronomers in 1919 measured the deflection of light from distant stars as the starlight passed by our sun, proving that gravity does, in fact, distort or curve space.
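For a light ray grazing the Sun, general relativity predicts a deflection angle of

\[ \delta\phi = \frac{4 G M_\odot}{c^{2} b} \approx 1.75'' \]

where \(M_\odot\) is the solar mass and \(b\) the ray's closest approach to the Sun's center. This is twice the value a purely Newtonian calculation gives, and the 1919 measurements favored Einstein's prediction.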
Daniel Lidar, the Viterbi Professor of Engineering at USC and Director of the USC Center for Quantum Information Science & Technology, and Dr. Bibek Pokharel, a Research Scientist at IBM Quantum, have achieved a quantum speedup in the context of a “bitstring guessing game.” They managed strings up to 26 bits long, significantly larger than previously possible, by effectively suppressing the errors typically seen at this scale. (A bit is a binary digit, either zero or one.) Their paper is published in the journal Physical Review Letters.
Quantum computers promise to solve certain problems with an advantage that grows as the problems grow in complexity. However, they are also highly prone to errors, or noise. The challenge, says Lidar, is “to obtain an advantage in the real world where today’s quantum computers are still ‘noisy.’” This noise-prone stage of quantum computing is termed the “NISQ” (Noisy Intermediate-Scale Quantum) era, an acronym that echoes RISC from classical computer architecture. Any present-day demonstration of a quantum speed advantage therefore requires suppressing that noise.
The more unknown variables a problem has, the harder it usually is for a computer to solve. Scholars can evaluate a computer’s performance by playing a type of game with it to see how quickly an algorithm can guess hidden information. For instance, imagine a version of the TV game Jeopardy, where contestants take turns guessing a secret word of known length, one whole word at a time. The host reveals only one correct letter for each guessed word before changing the secret word randomly.
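The task behind this game is closely related to the Bernstein–Vazirani bitstring-identification problem. As a rough, hedged illustration (not the authors' actual experiment or code), the sketch below shows the classical baseline: recovering an n-bit secret from a parity oracle takes n queries, one per bit, whereas the quantum algorithm needs only a single oracle query.

```python
import random

def make_parity_oracle(secret: str):
    """Oracle for an n-bit secret s: given a query bitstring x,
    return the parity of the bitwise AND of s and x (s . x mod 2)."""
    s_bits = [int(b) for b in secret]
    def oracle(query: str) -> int:
        x_bits = [int(b) for b in query]
        return sum(s * x for s, x in zip(s_bits, x_bits)) % 2
    return oracle

def classical_guess(n: int, oracle) -> str:
    """Classical baseline: one oracle call per bit, n calls in total.
    A quantum computer running Bernstein-Vazirani needs a single call."""
    bits = []
    for i in range(n):
        probe = "".join("1" if j == i else "0" for j in range(n))
        bits.append(str(oracle(probe)))
    return "".join(bits)

if __name__ == "__main__":
    n = 26  # string length reached in the experiment
    secret = "".join(random.choice("01") for _ in range(n))
    recovered = classical_guess(n, make_parity_oracle(secret))
    print(secret == recovered)  # True: secret recovered after n oracle calls
```

The reported advantage rests on keeping the quantum strategy's error rate low enough, via error suppression, that the speedup over this n-query classical baseline survives out to 26 bits.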
The mammalian retina is a complex system consisting of cones (for color) and rods (for peripheral monochrome vision) that provide the raw image data, which is then processed by successive layers of neurons before this preprocessed data is sent via the optic nerve to the brain’s visual cortex. To emulate this system as closely as possible, researchers at Penn State University have created a system that uses perovskite (methylammonium lead halide, MAPbX3) RGB photodetectors and a neuromorphic processing algorithm that performs processing similar to that of the biological retina.
Panchromatic imaging is defined as being ‘sensitive to light of all colors in the visible spectrum’, which in imaging means enhancing the monochromatic (e.g. RGB) channels using panchromatic (intensity, not frequency) data. For the retina this means that the incoming light is used to determine not merely the separate colors but also the intensity, which is what underlies the wide dynamic range of the Mark I eyeball. In this experiment, layers of these MAPbX3 (X being Cl, Br, I, or a combination thereof) perovskites formed stacked RGB sensors.
The output of these sensor layers was then processed by a pretrained convolutional neural network to generate the final, panchromatic image, which could then be used for a wide range of purposes. Applications noted by the researchers include new types of digital cameras as well as artificial retinas, limited mostly by how well the perovskite layers scale in resolution, and by their longevity, which is a long-standing issue with perovskites. Another possibility raised is that of powering at least part of the system with the energy collected by the perovskite layers, akin to proposed perovskite-based solar panels.
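As a loose illustration of the enhancement step (the researchers used a pretrained convolutional network; the function name and the Brovey-style rescaling below are assumptions made for this sketch), panchromatic data can be folded into the RGB channels by rescaling each pixel's colors so their combined intensity matches the panchromatic reading:

```python
import numpy as np

def pan_sharpen(rgb: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Toy Brovey-style pan-sharpening: rescale each color channel so the
    per-pixel intensity matches the panchromatic (total-intensity) channel.
    rgb: H x W x 3 array in [0, 1]; pan: H x W array in [0, 1]."""
    intensity = rgb.mean(axis=2, keepdims=True)   # current per-pixel brightness
    gain = pan[..., None] / (intensity + eps)     # per-pixel correction factor
    return np.clip(rgb * gain, 0.0, 1.0)

# Random data stands in for the stacked RGB readouts and the intensity channel.
rgb = np.random.rand(64, 64, 3)
pan = np.random.rand(64, 64)
print(pan_sharpen(rgb, pan).shape)  # (64, 64, 3)
```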
Joscha Bach is a cognitive scientist focusing on cognitive architectures, consciousness, models of mental representation, emotion, motivation and sociality.
0:00:00 Introduction
0:00:17 Bach’s work ethic / daily routine
0:01:35 What is your definition of truth?
0:04:41 Nature’s substratum is a “quantum graph”?
0:06:25 Mathematics as the descriptor of all language
0:13:52 Why is constructivist mathematics “real”? What’s the definition of “real”?
0:17:06 What does it mean to “exist”? Does “pi” exist?
0:20:14 The mystery of something vs. nothing. Existence is the default.
0:21:11 Bach’s model vs. the multiverse
0:26:51 Is the universe deterministic?
0:28:23 What determines the initial conditions, as well as the rules?
0:30:55 What is time? Is time fundamental?
0:34:21 What’s the optimal algorithm for finding truth?
0:40:40 Are the fundamental laws of physics ultimately “simple”?
0:50:17 The relationship between art and the artist’s cost function
0:54:02 Ideas are stories, being directed by intuitions
0:58:00 Society has a minimal role in training your intuitions
0:59:24 Why does art benefit from a repressive government?
1:04:01 A market case for civil rights
1:06:40 Fascism vs. communism
1:10:50 Bach’s “control / attention / reflective recall” model
1:13:32 What’s more fundamental… Consciousness or attention?
1:16:02 The Chinese Room Experiment
1:25:22 Is understanding predicated on consciousness?
1:26:22 Integrated Information Theory of consciousness (IIT)
1:30:15 Donald Hoffman’s theory of consciousness
1:32:40 Douglas Hofstadter’s “strange loop” theory of consciousness
1:34:10 Holonomic Brain theory of consciousness
1:34:42 Daniel Dennett’s theory of consciousness
1:36:57 Sensorimotor theory of consciousness (embodied cognition)
1:44:39 What is intelligence?
1:45:08 Intelligence vs. consciousness
1:46:36 Where does Free Will come into play, in Bach’s model?
1:48:46 The opposite of free will can lead to, or feel like, addiction
1:51:48 Changing your identity to effectively live forever
1:59:13 Depersonalization disorder as a result of conceiving of your “self” as illusory
2:02:25 Dealing with a fear of loss of control
2:05:00 What about heart and conscience?
2:07:28 How to test / falsify Bach’s model of consciousness
2:13:46 How has Bach’s model changed in the past few years?
2:14:41 Why Bach doesn’t practice Lucid Dreaming anymore
2:15:33 Dreams and GANs (a machine learning framework)
2:18:08 If dreams are for helping us learn, why don’t we consciously remember our dreams?
2:19:58 Are dreams “real”? Is all of reality a dream?
2:20:39 How do you practically change your experience to be most positive / helpful?
2:23:56 What’s more important than survival? What’s worth dying for?
2:28:27 Bach’s identity
2:29:44 Is there anything objectively wrong with hating humanity?
2:30:31 Practical Platonism
2:33:00 What “God” is
2:36:24 Gods are as real as you, Bach claims
2:37:44 What “prayer” is, and why it works
2:41:06 Our society has lost its future and thus our culture
2:43:24 What does Bach disagree with Jordan Peterson about?
2:47:16 The millennials are the first generation that’s authoritarian since WW2
2:48:31 Bach’s views on the “social justice” movement
2:51:29 Universal Basic Income as an answer to social inequality, or General Artificial Intelligence?
2:57:39 Nested hierarchy of “I”s (the conflicts within ourselves)
2:59:22 In the USA, innovation is “cheating” (for the most part)
3:02:27 Activists are usually operating on false information
3:03:04 Bach’s Marxist roots and lessons to his former self
3:08:45 BONUS BIT: On society’s problems
Subscribe if you want more conversations on Theories of Everything, Consciousness, Free Will, God, and the mathematics / physics of each.
I’m producing an upcoming documentary, Better Left Unsaid (http://betterleftunsaidfilm.com), on the topic of “when does the left go too far?” Visit that site if you’d like to help get the film distributed (in 2020) and to see more conversations like this.
Researchers in Canada and the United States have used deep learning to identify an antibiotic that can attack a resistant microbe, Acinetobacter baumannii, which can infect wounds and cause pneumonia. According to the BBC, a paper in Nature Chemical Biology describes how the researchers used training data that measured known drugs’ action on the tough bacteria. The learning algorithm then predicted the effect of 6,680 compounds for which no effectiveness data against the germ existed.
In an hour and a half, the program reduced the list to 240 promising candidates. Testing in the lab found that nine of these were effective and that one, now called abaucin, was extremely potent. While doing lab tests on 240 compounds sounds like a lot of work, it is better than testing nearly 6,700.
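The screening loop can be pictured with a short, hedged sketch (random placeholder data and a generic random-forest model, not the authors' actual pipeline): train on compounds with measured activity, score the unlabeled compounds, and keep a shortlist for the lab.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-ins for molecular fingerprints and measured growth-inhibition labels.
X_train = rng.random((7500, 2048))
y_train = rng.integers(0, 2, size=7500)        # 1 = inhibited A. baumannii, 0 = did not
X_unlabeled = rng.random((6680, 2048))         # compounds with no activity data

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

scores = model.predict_proba(X_unlabeled)[:, 1]    # predicted probability of activity
shortlist = np.argsort(scores)[::-1][:240]         # top 240 candidates for lab testing
print(len(shortlist))
```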
Interestingly, the new antibiotic seems to be effective only against the target microbe, which is a plus. It isn’t available for people yet and may not be for some time — drug testing being what it is. However, this is still a great example of how machine learning can augment human brainpower, letting scientists and others focus on what’s really important.
If I were a brilliant physicist, I would have written this.
Learn more about differential equations (and many other topics in maths and science) on Brilliant using the link https://brilliant.org/sabine. You can get started for free, and the first 200 will get 20% off the annual premium subscription.
Do humans have free will, or do the laws of physics imply that such a concept is little more than a fairy tale? Do we make decisions? Did the big bang start a chain of cause and effect leading to the creation of this video? That’s what we’ll talk about today.
Summary: Researchers developed a machine learning algorithm, FoodProX, capable of predicting the degree of processing in food products.
The tool scores foods on a scale from zero (minimally or unprocessed) to 100 (highly ultra-processed). FoodProX bridges gaps in existing nutrient databases, providing higher resolution analysis of processed foods.
This development is a significant advancement for researchers examining the health impacts of processed foods.
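A minimal sketch of the general idea, assuming a regressor trained on foods' nutrient panels with known processing scores (hypothetical placeholder data and feature counts; this is not the published FoodProX pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Stand-ins: per-100 g nutrient values and known 0-100 processing scores.
nutrients_train = rng.random((2000, 60))
score_train = rng.uniform(0, 100, size=2000)   # 0 = unprocessed, 100 = ultra-processed

model = RandomForestRegressor(n_estimators=300, random_state=1)
model.fit(nutrients_train, score_train)

new_food = rng.random((1, 60))                         # nutrient panel of a new product
print(round(float(model.predict(new_food)[0]), 1))     # predicted processing score
```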
AI-Descartes, a new AI scientist, has successfully reproduced Nobel Prize-winning work using logical reasoning and symbolic regression to find accurate equations. The system is effective with real-world data and small datasets, and it demonstrated its chops on Kepler’s third law of planetary motion, Einstein’s relativistic time-dilation law, and Langmuir’s equation of gas adsorption. Future goals include automating the construction of background theories.
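To make the symbolic-regression idea concrete, here is a toy, hedged sketch (not AI-Descartes itself): enumerate a few candidate formulas relating a planet's orbital period T to its semi-major axis a and keep the best fit. With periods in years and distances in astronomical units, Kepler's third law, T = a^(3/2), should win.

```python
import numpy as np

# Semi-major axis [AU] and orbital period [years] for five planets.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}
a = np.array([v[0] for v in planets.values()])
T = np.array([v[1] for v in planets.values()])

# Tiny hypothesis space of candidate expressions; real symbolic regression
# searches a much larger space of operators and constants.
candidates = {
    "T = a":       lambda a: a,
    "T = a^2":     lambda a: a ** 2,
    "T = a^1.5":   lambda a: a ** 1.5,
    "T = sqrt(a)": lambda a: np.sqrt(a),
}
errors = {name: float(np.mean((f(a) - T) ** 2)) for name, f in candidates.items()}
best = min(errors, key=errors.get)
print(best, errors[best])   # "T = a^1.5" with a near-zero error
```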
In 1918, the American chemist Irving Langmuir published a paper examining the behavior of gas molecules sticking to a solid surface. Guided by the results of careful experiments, as well as his theory that solids offer discrete sites for the gas molecules to fill, he worked out a series of equations that describe how much gas will stick, given the pressure.
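The resulting relationship, the Langmuir adsorption isotherm, gives the fraction \(\theta\) of occupied surface sites as a function of gas pressure \(p\) and an equilibrium constant \(K\):

\[ \theta = \frac{K p}{1 + K p} \]

At low pressure the coverage grows roughly linearly with \(p\); at high pressure the surface saturates and \(\theta\) approaches 1.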
On Monday, Apple is more than likely going to reveal its long-awaited augmented or mixed reality Reality Pro headset during the keynote of its annual WWDC developer conference in California. It’s an announcement that has been tipped or teased for years now, and reporting on the topic has suggested that, at various times, the project has been subject to delays, internal skepticism and debate, technical challenges and more. Leaving anything within Apple’s sphere of influence aside, the world’s overall attitude toward AR and VR has shifted considerably — from optimism to skepticism.
Part of that trajectory is just the natural progression of any major tech hype cycle, and you could easily argue that the time to make the most significant impact in any such cycle is after the spike of undue optimism and energy has subsided. But in the case of AR and VR, we’ve actually already seen some of the tech giants with the deepest pockets take their best shots and come up wanting — not for lack of trying, but because of limitations in terms of what’s possible even at the bleeding edge of available tech. Some of those limits might actually be endemic to AR and VR, too, because of variances in the human side of the equation required to make mixed reality magic happen.
The virtual elephant in the room is, of course, Meta. The name itself pretty much sums up the situation: Facebook founder Mark Zuckerberg read a bad book and decided that VR was the inevitable end state of human endeavor — the mobile moment he essentially missed out on, but even bigger and better. Zuckerberg grew enamored by his delusion, first acquiring crowdfunded VR darling Oculus, then eventually commandeering the sobriquet for a shared virtual universe from the dystopian predictions of a better book and renaming all of Facebook after it.
Perfect recall, computational wizardry and rapier wit: that’s the brain we all want, but how does one design such a brain? The real thing comprises roughly 80 billion neurons, each coordinating with others through up to tens of thousands of synaptic connections. The human brain has no centralized processor the way a standard laptop does.
Instead, many calculations are run in parallel, and outcomes are compared. While the operating principles of the human brain are not fully understood, existing mathematical algorithms can be used to rework deep learning principles into systems that operate more like a human brain. This brain-inspired computing paradigm—spiking neural networks (SNNs)—provides a computing architecture well-aligned with the potential advantages of systems using both optical and electronic components.
In SNNs, information is processed in the form of spikes or action potentials, which are the electrical impulses that occur in real neurons when they fire. One of their key features is that they use asynchronous processing, meaning that spikes are processed as they occur in time, rather than being processed in a batch like in traditional neural networks. This allows SNNs to react quickly to changes in their inputs, and to perform certain types of computations more efficiently than traditional neural networks.
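A minimal sketch of the basic unit often used in SNNs, a leaky integrate-and-fire neuron (the constants and step-current input below are illustrative assumptions, not tied to any particular optical or electronic implementation):

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_reset=0.0, v_threshold=1.0):
    """Leaky integrate-and-fire neuron: integrate the input current over time
    and emit a spike (1) whenever the membrane potential crosses threshold,
    then reset. Between inputs the potential leaks back toward rest."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        v += (-(v - v_rest) + i_t) * (dt / tau)   # leaky integration step
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset                           # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# A step input: silence, then a sustained drive strong enough to cause spiking.
current = np.concatenate([np.zeros(50), 1.5 * np.ones(200)])
spike_train = lif_neuron(current)
print(int(spike_train.sum()), "spikes at steps", np.nonzero(spike_train)[0][:3])
```

Because each spike is handled as it arrives, downstream units only do work when events occur, which is the asynchronous, event-driven behavior described above.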