
To keep his Universe static, Einstein added a term to the equations of general relativity, one he initially dubbed a negative pressure. It soon became known as the cosmological constant. Mathematics allowed the concept, but it had absolutely no justification from physics, no matter how hard Einstein and others tried to find one. The cosmological constant clearly detracted from the formal beauty and simplicity of Einstein’s original equations of 1915, which achieved so much without any need for arbitrary constants or additional assumptions. It amounted to a cosmic repulsion chosen to precisely balance the tendency of matter to collapse on itself. In modern parlance we call this fine tuning, and in physics it is usually frowned upon.
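To see how delicate that balance is, here is the standard textbook condition for Einstein’s static universe, written in modern Friedmann-equation form (a sketch for orientation, not a calculation from Einstein’s 1917 paper):

```latex
% Friedmann equations for pressureless matter with cosmological constant \Lambda
% (units with c = 1). Demanding a static universe, \dot{a} = \ddot{a} = 0:
\[
  \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,\rho + \frac{\Lambda}{3} = 0
  \quad\Rightarrow\quad \Lambda = 4\pi G \rho ,
\]
\[
  \left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho - \frac{k}{a^{2}} + \frac{\Lambda}{3} = 0
  \quad\Rightarrow\quad k = +1, \qquad a = \frac{1}{\sqrt{\Lambda}} .
\]
% The equality \Lambda = 4\pi G \rho must hold exactly: any perturbation of the
% density tips the model into expansion or collapse, which is the fine tuning
% described above.
```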

Einstein knew that the only reason for his cosmological constant to exist was to secure a static and stable finite Universe. He wanted this kind of Universe, and he did not want to look much further. Quietly hiding in his equations, though, was another model for the Universe, one with an expanding geometry. In 1922, the Russian physicist Alexander Friedmann would find this solution. As for Einstein, it was only in 1931, after visiting Hubble in California, that he accepted cosmic expansion and discarded at long last his vision of a static Cosmos.

Einstein’s equations provided a much richer Universe than the one Einstein himself had originally imagined. But like the mythic phoenix, the cosmological constant refuses to go away. Nowadays it is back in full force, as we will see in a future article.

ChatGPT might well be the most famous, and potentially valuable, algorithm of the moment, but the artificial intelligence techniques used by OpenAI to provide its smarts are neither unique nor secret. Competing projects and open-source clones may soon make ChatGPT-style bots available for anyone to copy and reuse.

Stability AI, a startup that has already developed and open-sourced advanced image-generation technology, is working on an open competitor to ChatGPT. “We are a few months from release,” says Emad Mostaque, Stability’s CEO. A number of competing startups, including Anthropic, Cohere, and AI21, are working on proprietary chatbots similar to OpenAI’s bot.

The impending flood of sophisticated chatbots will make the technology more abundant and visible to consumers, as well as more accessible to AI businesses, developers, and researchers. That could accelerate the rush to make money with AI tools that generate images, code, and text.

In episode 13 of the Quantum Consciousness series, Justin Riddle discusses why microtubules are the most likely candidate for a universal quantum computer that acts as a single executive unit in cells.

First off, computer scientists are trying to model human behavior using neural networks that treat individual neurons as the base unit. But unicellular organisms are able to do many of the things we consider human behavior. How does a single-celled lifeform pull off such complex behavior? As Stuart Hameroff puts it, “neuron doctrine is an insult to neurons,” referring to the complexity of a single cell.

So let’s look inside a cell: what makes it tick? Many assume that DNA holds some secret code or algorithm that executes the cell’s decision-making. The microscope, however, tells a different story, in which microtubules carry out a vast array of complex behaviors: swimming toward food, steering away from predators, and coordinating protein creation and delivery within the cell.

That raises the question of how microtubules work. They are cylinders assembled from a single repeating protein, tubulin, arranged in a helical lattice. We typically think of a protein’s function as determined by its structure, but the function of one protein repeated into tubes is tough to unravel. Stuart Hameroff proposed that these tubulin subunits might act as bits of information, with the whole tube working as a universal computer that can be programmed to fit any situation. Given the limitations of digital computation, Roger Penrose was looking for a quantum computer in biology, and Hameroff was looking for more than a digital-computation explanation. Hence the Hameroff-Penrose model of microtubules as quantum computers was born.

If microtubules are quantum computers, then each cell would possess a central executive hub that rapidly integrates information from across the cell and turns it into a single action plan that can be quickly disseminated. Furthermore, the computation would get a “quantum” speed-up, in that exponentially large search spaces could be tackled in a reasonable timeframe. If microtubules are indeed quantum computers, then modern science has greatly underestimated the processing power of a single cell, let alone the entire human brain.
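To put a rough number on the claimed speed-up, here is a toy comparison of query counts for unstructured search, classical versus Grover-style quantum search (a generic textbook estimate, not anything specific to the Hameroff-Penrose model; the gain is quadratic, which halves the exponent of the search space rather than eliminating it):

```python
import math

def classical_queries(n_items: int) -> int:
    """Unstructured classical search needs on the order of N queries."""
    return n_items

def grover_queries(n_items: int) -> int:
    """Grover-style quantum search needs roughly (pi/4) * sqrt(N) queries."""
    return math.ceil(math.pi / 4 * math.sqrt(n_items))

# Hypothetical search-space sizes, purely illustrative.
for exponent in (10, 20, 40):
    n = 2 ** exponent
    print(f"N = 2^{exponent}: classical ~ {classical_queries(n):,}, "
          f"Grover ~ {grover_queries(n):,}")
```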

~~~ Timestamps ~~~
0:00 Introduction
3:08 “Neuron doctrine is an insult to neurons”
8:23 DNA vs Microtubules
14:20 Diffusion vs Central Hub
17:50 Microtubules as Universal Computers
23:40 Penrose’s Quantum Computation update
29:48 Quantum search in a cell
33:25 Stable microtubules in neurons
35:18 Finding the self in biology


Website: www.justinriddlepodcast.com.

High-performance, realistic computer simulations are crucially important for science and engineering, even allowing scientists to predict how individual molecules will behave.

Watch the Q&A here: https://youtu.be/aRGH5lC0pLc.

Scientists have always used models. From the ancient Ptolemaic model of the universe to Renaissance astrolabes, models have mapped out the consequences of predictions, allowing scientists to explore, indirectly, worlds they could never access.

Join Sir Richard Catlow as he explores how high-performance computer simulations have transformed the way scientists comprehend our world, from testing hypotheses at planetary scale to developing a personalised approach to the fight against Covid.
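For a taste of what simulating individual molecules actually looks like in code, here is a minimal molecular-dynamics loop in reduced units: two particles bound by a Lennard-Jones potential, advanced with the velocity-Verlet integrator (an illustrative sketch, not code from the lecture):

```python
import numpy as np

def lj_force(r: float, epsilon: float = 1.0, sigma: float = 1.0) -> float:
    """Force from the Lennard-Jones potential V(r) = 4*eps*((s/r)**12 - (s/r)**6);
    returns F = -dV/dr (positive = repulsive)."""
    sr6 = (sigma / r) ** 6
    return 24 * epsilon * (2 * sr6 ** 2 - sr6) / r

# Two particles on a line, reduced units (mass = 1); illustrative values only.
x = np.array([0.0, 1.5])   # positions
v = np.array([0.0, 0.0])   # velocities
dt = 1e-3                  # time step

def accelerations(x):
    f = lj_force(x[1] - x[0])      # force on the right-hand particle
    return np.array([-f, f])       # equal and opposite on the left-hand one

a = accelerations(x)
for step in range(5000):
    # velocity-Verlet: update positions, recompute forces, then update velocities
    x = x + v * dt + 0.5 * a * dt ** 2
    a_new = accelerations(x)
    v = v + 0.5 * (a + a_new) * dt
    a = a_new

print("separation after 5000 steps:", x[1] - x[0])
```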

By default, every quantum computer is going to be a hybrid that combines quantum and classical compute. Microsoft estimates that a quantum computer that will be able to help solve some of the world’s most pressing questions will require at least a million stable qubits. It’ll take massive classical compute power — which is really only available in the cloud — to control a machine like this and handle the error correction algorithms needed to keep it stable. Indeed, Microsoft estimates that to achieve the necessary fault tolerance, a quantum computer will need to be integrated with a peta-scale compute platform that can manage between 10 and 100 terabits per second of data moving between the quantum and classical machines. At the American Physical Society March Meeting in Las Vegas, Microsoft today is showing off some of the work it has been doing on enabling this and launching what it calls the “Integrated Hybrid” feature in Azure Quantum.

“With this Integrated Hybrid feature, you can start to use — within your quantum applications — classical code right alongside quantum code,” Krysta Svore, Microsoft’s VP of Advanced Quantum Development, told me. “It’s mixing that classical and quantum code together that unlocks new types, new styles of quantum algorithms, prototypes, subroutines, if you will, where you can control what you do to qubits based on classical information. This is a first in the industry.”
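The Azure Quantum feature itself is exposed through Q#, but the basic pattern Svore describes, branching on a mid-circuit measurement, can be sketched with Qiskit’s dynamic-circuit syntax (a generic illustration, not Microsoft’s API):

```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

qr = QuantumRegister(2, "q")
cr = ClassicalRegister(1, "c")
qc = QuantumCircuit(qr, cr)

# Quantum part: prepare a superposition and measure it mid-circuit.
qc.h(qr[0])
qc.measure(qr[0], cr[0])

# Classical part: the next quantum operation depends on the measured bit.
with qc.if_test((cr, 1)):
    qc.x(qr[1])          # applied only when the measurement returned 1

qc.measure_all()
print(qc.draw())
```

Running a circuit like this requires hardware or a simulator that supports dynamic circuits, which is exactly the kind of tight classical-quantum loop the Integrated Hybrid feature is meant to provide at scale.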

Robots are all around us, from drones filming videos in the sky to machines serving food in restaurants and defusing bombs in emergencies. Slowly but surely, robots are improving the quality of human life by augmenting our abilities, freeing up time, and enhancing our personal safety and well-being. While existing robots are becoming more proficient with simple tasks, handling more complex requests will require more development in both mobility and intelligence.

Columbia Engineering and Toyota Research Institute computer scientists are delving into psychology, physics, and geometry to create algorithms so that robots can adapt to their surroundings and learn how to do things independently. This work is vital to enabling robots to address new challenges stemming from an aging society and provide better support, especially for seniors and people with disabilities.

A longstanding challenge in computer vision is object permanence, a well-known concept in psychology that involves understanding that the existence of an object is separate from whether it is visible at any moment. It is fundamental for robots to understand our ever-changing, dynamic world. But most applications in computer vision ignore occlusions entirely and tend to lose track of objects that become temporarily hidden from view.
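One common way to give a tracker a crude form of object permanence is to keep predicting an occluded object’s position from its last known motion and to correct only when a detection reappears. A minimal sketch of that idea (a generic fixed-gain tracker, not the Columbia/TRI method):

```python
import numpy as np

class ConstantVelocityTracker:
    """Tracks (x, y) with a constant-velocity model; coasts through occlusion."""

    def __init__(self, x, y):
        self.state = np.array([x, y, 0.0, 0.0])   # position + velocity

    def predict(self, dt=1.0):
        x, y, vx, vy = self.state
        self.state = np.array([x + vx * dt, y + vy * dt, vx, vy])
        return self.state[:2]

    def update(self, detection, gain=0.5):
        """Blend the prediction with a new detection (simple fixed-gain filter)."""
        residual = np.asarray(detection, dtype=float) - self.state[:2]
        self.state[:2] += gain * residual
        self.state[2:] += gain * residual   # nudge velocity toward the observed motion
        return self.state[:2]

# Hypothetical detections; None marks frames where the object is occluded.
detections = [(0, 0), (1, 0), (2, 0), None, None, None, (6, 0)]
tracker = ConstantVelocityTracker(*detections[0])

for frame, det in enumerate(detections[1:], start=1):
    estimate = tracker.predict()          # keep estimating even when hidden
    if det is not None:
        estimate = tracker.update(det)    # correct when the object reappears
    print(f"frame {frame}: estimate = {np.round(estimate, 2)}")
```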

Artificial intelligence can already create images from text prompts, but now scientists have unveiled a gallery of pictures the technology produced by reading brain activity. The new AI-powered algorithm reconstructed around 1,000 images, including a teddy bear and an airplane, from brain scans with roughly 80 percent accuracy.

HRL Laboratories, LLC, has published the first demonstration of universal control of encoded spin qubits. This newly emerging approach to quantum computation uses a novel silicon-based qubit device architecture, fabricated in HRL’s Malibu cleanroom, to trap single electrons in quantum dots. Spins of three such single electrons host energy-degenerate qubit states, which are controlled by nearest-neighbor contact interactions that partially swap spin states with those of their neighbors.

In a paper posted online ahead of publication in the journal Nature, the HRL team demonstrated universal control of its encoded qubits, which means the qubits can be used for any kind of quantum computational algorithm. The encoded silicon/silicon-germanium quantum dot qubits use three electron spins and a control scheme whereby voltages applied to metal gates partially swap the directions of those electron spins without ever aligning them in any particular direction. The demonstration involved applying thousands of these precisely calibrated voltage pulses in strict relation to one another over the course of a few millionths of a second. The article is titled “Universal logic with encoded spin qubits in silicon.”
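The “partial swap” at the heart of that control scheme is the two-spin exchange operation run for a fraction of a full swap period. A small numerical sketch of the idea (generic two-spin math, not HRL’s calibration data):

```python
import numpy as np
from scipy.linalg import expm

# Two-spin SWAP operator in the |00>, |01>, |10>, |11> basis.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def partial_swap(theta: float) -> np.ndarray:
    """Exchange pulse exp(-i * theta/2 * SWAP). theta = pi is a full SWAP
    (up to a global phase); smaller theta only partially exchanges the spins."""
    return expm(-1j * theta / 2 * SWAP)

# Spin-up on the left dot, spin-down on the right dot: |10>.
state = np.array([0, 0, 1, 0], dtype=complex)

for theta in (np.pi / 4, np.pi / 2, np.pi):
    out = partial_swap(theta) @ state
    p_swapped = abs(out[1]) ** 2     # probability of finding the spins exchanged, |01>
    print(f"theta = {theta:.3f}: P(spins exchanged) = {p_swapped:.3f}")
```

Sequences of such partial swaps on neighboring pairs compose into logical gates on the encoded qubit without ever pointing any individual spin in a particular direction, which is the property the paragraph above describes.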

The quantum coherence offered by the isotopically enriched silicon used, the all-electrical and low-crosstalk-control of partial swap operations, and the configurable insensitivity of the encoding to certain error sources combine to offer a strong pathway toward scalable fault tolerance and computational advantage, major steps toward a commercial quantum computer.

As the use of machine learning (ML) algorithms continues to grow, computer scientists worldwide are constantly trying to identify and address ways in which these algorithms could be used maliciously or inappropriately. Due to their advanced data analysis capabilities, in fact, ML approaches have the potential to enable third parties to access private data or carry out cyberattacks quickly and effectively.

Morteza Varasteh, a researcher at the University of Essex in the U.K., has recently identified a new type of inference attack that could potentially compromise confidential user data and share it with other parties. This attack, detailed in a paper pre-published on arXiv, exploits vertical federated learning (VFL), a distributed ML scenario in which two different parties possess different information about the same individuals (clients).

“This work is based on my previous collaboration with a colleague at Nokia Bell Labs, where we introduced an approach for extracting private user information in a data center, referred to as the passive party (e.g., an ),” Varasteh told Tech Xplore. “The passive party collaborates with another party, referred to as the active party (e.g., a bank), to build an ML algorithm (e.g., a credit approval algorithm for the bank).”
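For readers unfamiliar with the setting, here is a minimal sketch of vertical federated learning itself, using made-up data (a generic toy, not the attack described in the paper): the two parties hold different features for the same clients and fit a joint model by exchanging only partial scores and an error signal, never raw records.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients = 200

# Same clients, different feature sets: hypothetical data for illustration.
x_active = rng.normal(size=(n_clients, 3))    # active party's features (e.g., the bank)
x_passive = rng.normal(size=(n_clients, 2))   # passive party's private features
true_w = np.array([1.0, -2.0, 0.5, 1.5, -1.0])
y = (np.hstack([x_active, x_passive]) @ true_w +
     rng.normal(scale=0.1, size=n_clients) > 0).astype(float)

w_active = np.zeros(3)
w_passive = np.zeros(2)
lr = 0.1

for _ in range(500):
    # Each party computes a partial score on its own features ...
    score = x_active @ w_active + x_passive @ w_passive   # only these sums are shared
    pred = 1 / (1 + np.exp(-score))
    error = pred - y                                       # error signal shared with both
    # ... and updates only its own weights locally.
    w_active -= lr * x_active.T @ error / n_clients
    w_passive -= lr * x_passive.T @ error / n_clients

final_score = x_active @ w_active + x_passive @ w_passive
accuracy = np.mean((1 / (1 + np.exp(-final_score)) > 0.5) == y)
print("joint model accuracy:", accuracy)
```

Whatever passes between the parties in a scheme like this is the natural place to look for leakage, which is the general concern that inference-attack research explores.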

In recent years, the field of artificial intelligence has made tremendous strides, but what happens when AI systems become self-aware? In this video, we’ll explore the concept of AI self-awareness, its scary implications for society, and what it means for the future of AI.

AI self-awareness is the ability of an artificial intelligence system to recognize its own existence and understand the consequences of its actions. While there are different levels of self-awareness an AI system could potentially exhibit, it generally involves the system being able to recognize and respond to changes in its own state.

One way researchers are exploring AI self-awareness is through neural networks and other machine learning algorithms. For example, researchers have created AI systems that can recognize and respond to their own errors, which is an important first step toward higher-order self-awareness.
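“Recognizing and responding to its own errors” can be as modest as a model tracking its own recent mistake rate and flagging when it drifts. A minimal sketch of that idea (a hypothetical wrapper, not any system described in the video):

```python
from collections import deque

class SelfMonitoringModel:
    """Wraps a predictor and tracks its own rolling error rate."""

    def __init__(self, predict_fn, window: int = 50, alert_threshold: float = 0.3):
        self.predict_fn = predict_fn
        self.recent_errors = deque(maxlen=window)
        self.alert_threshold = alert_threshold

    def predict(self, x):
        return self.predict_fn(x)

    def record_outcome(self, prediction, truth):
        """Feed back the ground truth so the wrapper can notice its mistakes."""
        self.recent_errors.append(int(prediction != truth))

    def error_rate(self) -> float:
        return sum(self.recent_errors) / max(len(self.recent_errors), 1)

    def needs_attention(self) -> bool:
        """A crude form of self-assessment: has recent performance degraded?"""
        return self.error_rate() > self.alert_threshold

# Hypothetical usage with a trivial rule-based predictor.
model = SelfMonitoringModel(predict_fn=lambda x: x > 0)
for x, truth in [(1, True), (-2, False), (3, False), (4, False), (5, False)]:
    pred = model.predict(x)
    model.record_outcome(pred, truth)
print("rolling error rate:", model.error_rate(),
      "| needs attention:", model.needs_attention())
```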

