Empowered by artificial intelligence technologies, computers today can engage in convincing conversations with people, compose songs, create paintings, play chess and Go, and diagnose diseases, to name just a few examples of their technological prowess.

These successes could be taken to indicate that computation has no limits. To see if that’s the case, it’s important to understand what makes a computer powerful.

There are two aspects to a computer’s power: the number of operations its hardware can execute per second and the efficiency of the algorithms it runs. Hardware speed is limited by the laws of physics. Algorithms, essentially sets of instructions, are written by humans and translated into sequences of operations that computer hardware can execute. Even if a computer’s speed could reach the physical limit, computational hurdles would remain because of the limits of algorithms.
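
A quick back-of-the-envelope calculation makes the point. Assuming (purely for illustration) a machine that executes 10^9 operations per second on an input of n = 10^6 items:

```latex
% Illustrative comparison at an assumed 10^9 operations per second,
% for n = 10^6 items:
T_{n^2} = \frac{n^2}{10^{9}\,\text{ops/s}} = \frac{10^{12}}{10^{9}} \approx 10^{3}\,\text{s (about 17 minutes)},
\qquad
T_{n\log n} = \frac{n\log_2 n}{10^{9}\,\text{ops/s}} \approx \frac{2\times 10^{7}}{10^{9}} \approx 0.02\,\text{s}
```

The hardware is identical in both cases; only the algorithm changed.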

Deep learning has made significant strides in text generation, translation, and completion in recent years. Algorithms trained to predict words from their surrounding context have been instrumental in achieving these advancements. However, despite access to vast amounts of training data, deep language models still struggle with tasks like long story generation, summarization, coherent dialogue, and information retrieval. These models have been shown to have trouble capturing syntax and semantic properties, and their linguistic understanding often remains superficial. Predictive coding theory suggests that the human brain makes predictions over multiple timescales and levels of representation across the cortical hierarchy. Although studies have previously shown evidence of speech predictions in the brain, the nature of the predicted representations and their temporal scope remain largely unknown. Recently, researchers analyzed the brain signals of 304 individuals listening to short stories and found that enhancing deep language models with long-range and multi-level predictions improved brain mapping.

The results of this study revealed a hierarchical organization of language predictions in the cortex. These findings align with predictive coding theory, which suggests that the brain makes predictions over multiple levels and timescales of representation. By incorporating these ideas into deep language models, researchers can bridge the gap between human language processing and deep learning algorithms.

The current study evaluated specific hypotheses of predictive coding theory by examining whether the cortical hierarchy predicts several levels of representations, spanning multiple timescales, beyond the short-range, word-level predictions usually learned by deep language algorithms. The researchers compared modern deep language models with the brain activity of 304 people listening to spoken stories. They found that the activations of deep language algorithms supplemented with long-range and high-level predictions best described brain activity.
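
The word-prediction objective these models are trained on is easy to demonstrate. Below is a minimal sketch, assuming the Hugging Face transformers package and the publicly available gpt2 checkpoint (the study itself used more elaborate long-range and multi-level predictions than this baseline):

```python
# Minimal next-word prediction from context, the training objective
# described above. Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "Once upon a time, the old lighthouse keeper"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab)

# Probability distribution over the next token given the context.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r:>12}  p={prob:.3f}")
```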

This show is sponsored by Numerai; please visit them via our sponsor link (we would really appreciate it): http://numer.ai/mlst.

Prof. Karl Friston recently proposed a vision of artificial intelligence that goes beyond machines and algorithms, and embraces humans and nature as part of a cyber-physical ecosystem of intelligence. This vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals. This leads to a formal account of collective intelligence that rests on shared narratives and goals.

To realize this vision, Friston suggests developing a shared hyper-spatial modelling language and transaction protocol, as well as novel methods for measuring and optimizing collective intelligence. This could harness the power of artificial intelligence for the common good, without compromising human dignity or autonomy. It also challenges us to rethink our relationship with technology, nature, and each other, and invites us to join a global community of sense-makers who are curious about the world and eager to improve it.
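
Active inference can be made concrete with a toy example. The sketch below uses made-up numbers and is a drastic simplification, not Friston’s full formalism: an agent updates its beliefs from observations and picks the action expected to most reduce its uncertainty about a hidden state.

```python
# Toy active-inference-flavored agent: choose the action whose expected
# observation most reduces uncertainty (posterior entropy) about a
# hidden state. Illustrative numbers only; not Friston's formalism.
import numpy as np

# P(observe "signal" | hidden state) under each available action.
p_signal_given_state = {
    "look_left":  np.array([0.9, 0.2]),   # informative action
    "look_right": np.array([0.5, 0.5]),   # uninformative action
}

belief = np.array([0.5, 0.5])  # prior over the two hidden states

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(belief, likelihood):
    # Average posterior entropy over both possible observations.
    total = 0.0
    for obs_lik in (likelihood, 1.0 - likelihood):
        p_obs = np.dot(belief, obs_lik)
        posterior = belief * obs_lik / p_obs
        total += p_obs * entropy(posterior)
    return total

best = min(p_signal_given_state,
           key=lambda a: expected_posterior_entropy(belief, p_signal_given_state[a]))
print("chosen action:", best)  # -> look_left (the uncertainty-reducing one)
```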

Pod version: https://podcasters.spotify.com/pod/show/machinelearningstree…on-e208f50

LLMs stands for Large Language Models. These are advanced machine learning models that are trained to comprehend massive volumes of text data and generate natural language. Examples of LLMs include GPT-3 (Generative Pre-trained Transformer 3) and BERT (Bidirectional Encoder Representations from Transformers). LLMs are trained on massive amounts of data, often billions of words, to develop a broad understanding of language. They can then be fine-tuned on tasks such as text classification, machine translation, or question-answering, making them highly adaptable to various language-based applications.
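
As an illustration of the fine-tuning point, a pretrained model that has already been adapted to text classification can be used in a few lines. This sketch assumes the Hugging Face transformers package; the checkpoint name is one commonly used example, not the only option:

```python
# Using a pretrained, fine-tuned language model for text
# classification. Assumes: pip install transformers
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Large language models adapt remarkably well to new tasks."))
# -> [{'label': 'POSITIVE', 'score': 0.99...}]
```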

LLMs struggle with arithmetic reasoning tasks and frequently produce incorrect responses. Unlike natural language understanding, math problems usually have only one correct answer, making it difficult for LLMs to generate precise solutions. As far as is known, no current LLMs indicate a confidence level in their responses, which undermines trust in these models and limits their acceptance.

To address this issue, scientists proposed ‘MathPrompter,’ which enhances LLM performance on mathematical problems and increases confidence in their predictions. MathPrompter is an AI-powered tool that helps users solve math problems by generating step-by-step solutions. It uses deep learning algorithms and natural language processing techniques to understand and interpret math problems, then generates a solution that explains each step of the process.
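
A core idea behind the approach is to generate several independent solution paths and treat their agreement as a confidence signal. Here is a heavily simplified sketch of that consensus idea; the two candidate solvers stand in for LLM-generated code and are hypothetical:

```python
# Simplified sketch of consensus-based confidence: run multiple
# independently generated solvers and use their agreement as a
# confidence score. The solvers below are hypothetical stand-ins
# for LLM-generated solution paths.
from collections import Counter

def solve_algebraic(a, b):      # stand-in for an LLM-written formula
    return a * b

def solve_python(a, b):         # stand-in for an LLM-written program
    total = 0
    for _ in range(b):
        total += a
    return total

candidates = [solve_algebraic, solve_python]

def answer_with_confidence(a, b):
    results = [f(a, b) for f in candidates]
    answer, votes = Counter(results).most_common(1)[0]
    confidence = votes / len(results)   # fraction of paths that agree
    return answer, confidence

print(answer_with_confidence(7, 6))     # -> (42, 1.0): both paths agree
```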

[Russ Maschmeyer] and Spatial Commerce Projects developed WonkaVision to demonstrate how 3D eye tracking from a single webcam can support rendering a graphical virtual reality (VR) display with realistic depth and space. Spatial Commerce Projects is a Shopify lab working to provide concepts, prototypes, and tools to explore the crossroads of spatial computing and commerce.

The graphical output provides a real sense of depth and three-dimensional space using an optical illusion that reacts to the viewer’s eye position. The eye position is used to render view-dependent images. The computer screen is made to feel like a window into a realistic 3D virtual space, where objects beyond the window appear to have depth and objects in front of the window appear to project out into the space in front of the screen. The resulting experience is like a 3D view into a virtual space. The downside is that the illusion only works for one viewer at a time.

Eye tracking is performed using Google’s MediaPipe Iris library, which relies on the fact that the iris diameter of the human eye is almost exactly 11.7 mm for most humans. Computer vision algorithms in the library use this geometrical fact to efficiently locate and track human irises with high accuracy.
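
That near-constant iris size is what makes monocular depth estimation possible: with a pinhole-camera model, the iris’s apparent size in pixels yields the eye’s distance from the camera. A rough sketch with illustrative numbers follows (MediaPipe Iris itself performs the landmark detection):

```python
# Monocular depth from a known physical size, via the pinhole-camera
# model: depth = focal_length_px * real_size / apparent_size_px.
# The focal length and measured pixel width below are illustrative.
IRIS_DIAMETER_MM = 11.7          # near-constant across adult humans

def eye_distance_mm(focal_length_px: float, iris_width_px: float) -> float:
    """Distance from camera to eye, in millimeters."""
    return focal_length_px * IRIS_DIAMETER_MM / iris_width_px

# Example: a webcam with ~950 px focal length seeing a 22 px wide iris.
print(f"{eye_distance_mm(950.0, 22.0):.0f} mm")  # ~505 mm from the screen
```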

To keep his Universe static, Einstein added a term into the equations of general relativity, one he initially dubbed a negative pressure. It soon became known as the cosmological constant. Mathematics allowed the concept, but it had absolutely no justification from physics, no matter how hard Einstein and others tried to find one. The cosmological constant clearly detracted from the formal beauty and simplicity of Einstein’s original equations of 1915, which achieved so much without any need for arbitrary constants or additional assumptions. It amounted to a cosmic repulsion chosen to precisely balance the tendency of matter to collapse on itself. In modern parlance we call this fine tuning, and in physics it is usually frowned upon.
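
For reference, the term in question is the Λg_μν piece of the field equations (standard notation, included here as background):

```latex
% Einstein's 1915 field equations, with the cosmological-constant
% term \Lambda g_{\mu\nu} added in 1917 to permit a static Universe:
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu}
  = \frac{8 \pi G}{c^4}\, T_{\mu\nu}
```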

Einstein knew that the only reason for his cosmological constant to exist was to secure a static and stable finite Universe. He wanted this kind of Universe, and he did not want to look much further. Quietly hiding in his equations, though, was another model for the Universe, one with an expanding geometry. In 1922, the Russian physicist Alexander Friedmann would find this solution. As for Einstein, it was only in 1931, after visiting Hubble in California, that he accepted cosmic expansion and discarded at long last his vision of a static Cosmos.

Einstein’s equations provided a much richer Universe than the one Einstein himself had originally imagined. But like the mythic phoenix, the cosmological constant refuses to go away. Nowadays it is back in full force, as we will see in a future article.

ChatGPT might well be the most famous, and potentially valuable, algorithm of the moment, but the artificial intelligence techniques used by OpenAI to provide its smarts are neither unique nor secret. Competing projects and open-source clones may soon make ChatGPT-style bots available for anyone to copy and reuse.

Stability AI, a startup that has already developed and open-sourced advanced image-generation technology, is working on an open competitor to ChatGPT. “We are a few months from release,” says Emad Mostaque, Stability’s CEO. A number of competing startups, including Anthropic, Cohere, and AI21, are working on proprietary chatbots similar to OpenAI’s bot.

The impending flood of sophisticated chatbots will make the technology more abundant and visible to consumers, as well as more accessible to AI businesses, developers, and researchers. That could accelerate the rush to make money with AI tools that generate images, code, and text.

In episode 13 of the Quantum Consciousness series, Justin Riddle discusses how microtubules are the most likely candidate to be a universal quantum computer that acts as a single executive unit in cells. First off, computer scientists are trying to model human behavior using neural networks that treat individual neurons as the base unit. But unicellular organisms are able to do many of the things that we consider to be human behavior! How does a single-cell lifeform perform this complex behavior? As Stuart Hameroff puts it, “neuron doctrine is an insult to neurons,” referring to the complexity of a single cell. Let’s look inside a cell: what makes it tick? Many think the DNA holds some secret code or algorithm that executes the decision-making process of the cell. However, the microscope reveals a different story, in which the microtubules perform a vast array of complex behaviors: swimming towards food, away from predators, coordinating protein delivery and creation within the cell. This raises the question: how do microtubules work? Well, they are built from a single protein, tubulin, whose subunits assemble into helical cylinders. What is going on here? Typically, we think of a protein’s function as being determined by its structure, but the function of a single protein repeated into tubes is tough to unravel. Stuart Hameroff proposed that perhaps these tubulin proteins are acting as bits of information and the whole tube is working as a universal computer that can be programmed to fit any situation. Given the limitations of digital computation, Roger Penrose was looking for a quantum computer in biology, and Stuart Hameroff was looking for more than a digital-computation explanation. Hence, the Hameroff-Penrose model of microtubules as quantum computers was born. If microtubules are quantum computers, then each cell would possess a central executive hub for rapidly integrating information from across the cell and turning that information into a single action plan that could be quickly disseminated. Furthermore, the computation would get a “quantum” speed-up, in that exponentially large search spaces could be tackled in a reasonable timeframe. If microtubules are indeed quantum computers, then modern science has greatly underestimated the processing power of a single cell, let alone the entire human brain.
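
As general background (not part of the Hameroff-Penrose model itself), the standard quantitative benchmark for a quantum search speed-up is Grover’s algorithm, and the best known general speed-up for unstructured search is quadratic rather than exponential:

```latex
% Grover's algorithm: an unstructured search over N items needs
% O(N) classical queries but only O(\sqrt{N}) quantum queries.
T_{\text{classical}} = O(N), \qquad
T_{\text{quantum}} = O\!\left(\sqrt{N}\right)
```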

~~~ Timestamps ~~~
0:00 Introduction.
3:08 “Neuron doctrine is an insult to neurons”
8:23 DNA vs Microtubules.
14:20 Diffusion vs Central Hub.
17:50 Microtubules as Universal Computers.
23:40 Penrose’s Quantum Computation update.
29:48 Quantum search in a cell.
33:25 Stable microtubules in neurons.
35:18 Finding the self in biology.

#quantum
#consciousness
#microtubules

Website: www.justinriddlepodcast.com.

High-performance, realistic computer simulations are crucially important for science and engineering, even allowing scientists to predict how individual molecules will behave.

Watch the Q&A here: https://youtu.be/aRGH5lC0pLc.
Subscribe for regular science videos: http://bit.ly/RiSubscRibe.

Scientists have always used models. From the ancient Ptolemaic model of the universe to Renaissance astrolabes, models have mapped out the consequences of predictions. They allow scientists to indirectly explore worlds they could never access.

Join Sir Richard Catlow as he explores how high-performance computer simulations have transformed the way scientists comprehend our world, from testing hypotheses at planetary scale to developing a personalised approach to the fight against Covid.

By default, every quantum computer is going to be a hybrid that combines quantum and classical compute. Microsoft estimates that a quantum computer able to help solve some of the world’s most pressing questions will require at least a million stable qubits. It’ll take massive classical compute power, which is really only available in the cloud, to control a machine like this and handle the error-correction algorithms needed to keep it stable. Indeed, Microsoft estimates that to achieve the necessary fault tolerance, a quantum computer will need to be integrated with a peta-scale compute platform that can manage between 10 and 100 terabits per second of data moving between the quantum and classical machine. At the American Physical Society March Meeting in Las Vegas, Microsoft today is showing off some of the work it has been doing on enabling this and launching what it calls the “Integrated Hybrid” feature in Azure Quantum.

“With this Integrated Hybrid feature, you can start to use — within your quantum applications — classical code right alongside quantum code,” Krysta Svore, Microsoft’s VP of Advanced Quantum Development, told me. “It’s mixing that classical and quantum code together that unlocks new types, new styles of quantum algorithms, prototypes, subroutines, if you will, where you can control what you do to qubits based on classical information. This is a first in the industry.”
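
To illustrate the pattern Svore describes, here is a toy sketch of classical code branching on a quantum measurement result. It uses a hand-rolled one-qubit “simulator” purely for illustration; it is not the Azure Quantum or Q# API:

```python
# Toy illustration of hybrid quantum-classical control flow: classical
# code branches on a mid-circuit measurement outcome, the pattern the
# Integrated Hybrid feature enables. Not the Azure Quantum / Q# API.
import random

def measure_plus_state() -> int:
    """Prepare |+> = H|0> and measure: 0 or 1 with equal probability."""
    return random.randint(0, 1)

def hybrid_routine() -> str:
    outcome = measure_plus_state()           # quantum step
    if outcome == 1:                         # classical branch on the result
        return "applied corrective X gate"   # (would act on qubits here)
    return "no correction needed"

print(hybrid_routine())
```

In a real fault-tolerant machine this loop of measure, classically decide, and correct runs continuously, which is why so much classical compute sits alongside the qubits.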