
Eqs. (3a) and (3b) suggest two important features of the locations of the neutron and of its spin when the choice of post-selection is switched: (i) The first lines indicate that the neutrons are found to be localized in different paths depending on the choice of post-selection; they are found in path I for the post-selection \(|\Psi^{+}_f\rangle\) and in path II for \(|\Psi^{-}_f\rangle\). (ii) The second lines indicate that the spin in different paths is affected depending on the choice of post-selection; the spin in path II is affected for the post-selection \(|\Psi^{+}_f\rangle\) and the spin in path I for \(|\Psi^{-}_f\rangle\). Note that, for both choices of post-selection, the neutron and its spin are localized in different paths, i.e., the location of the cat itself and that of its grin are interchanged by switching the choice of post-selection. Since the measurements of the locations of the neutron and of the spin in the interferometer can be carried out independently of the delayed-choice process, that is, the selection of a direction for the post-selection, the influence of the delayed choice on the preceding measurements can be investigated. We would like to point out that the experimental proposal in a recent publication (ref. 35) also contains a delayed-choice scenario. The difference to the experiment presented in this report is that the authors of ref. 35 suggest a setup in which two properties of the same system, represented by two non-commuting observables, are separated. In contrast, our experiment deals with the separation of one property from the system itself, thereby constituting the phenomenon of disembodiment. Furthermore, we would like to point out that their Gedankenexperiment discusses the effect of a change in the pre-selection, which in our view has no retro-causal implications.
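For orientation, path and spin locations in quantum-Cheshire-cat experiments are typically quantified by weak values of the path projectors and of the spin-path products. The following is only a generic sketch of such expressions; the specific pre-selected state \(|\Psi_i\rangle\) and the exact forms of Eqs. (3a) and (3b) are not reproduced here.

```latex
% Generic weak values of the path projectors \Pi_j and spin-path products,
% evaluated between the pre-selected state |\Psi_i> and the chosen
% post-selected state |\Psi_f^\pm>; Eqs. (3a) and (3b) are of this type.
\begin{equation}
  \langle \hat{\Pi}_j \rangle_w
    = \frac{\langle \Psi^{\pm}_f | \hat{\Pi}_j | \Psi_i \rangle}
           {\langle \Psi^{\pm}_f | \Psi_i \rangle},
  \qquad
  \langle \hat{\sigma}_z \hat{\Pi}_j \rangle_w
    = \frac{\langle \Psi^{\pm}_f | \hat{\sigma}_z \hat{\Pi}_j | \Psi_i \rangle}
           {\langle \Psi^{\pm}_f | \Psi_i \rangle},
  \qquad j \in \{\mathrm{I}, \mathrm{II}\}.
\end{equation}
```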

The experiment was carried out at the S18 silicon-perfect-crystal interferometer beam line of the high-flux reactor at the Institut Laue-Langevin. A schematic view of the experimental set-up is shown in Fig. 2.

Year 2014: If black holes have an infinitely small size and infinite density, this would also mean that string theory solves the problem of the infinitely small, because we would then know that infinitely small sizes exist; and if those exist, then so does infinite energy from superstrings, essentially filling out the rest of the mystery of the "God equation." It would also mean that computers could be infinitely small as well, saving a great deal of space.


If you’ve ever wondered how big a black hole is, you’ve come to the right place! Learn about the sizes of black holes and the multi-layered answer.

AI or bust. Right now, AI is what everyone is talking about, and for good reason. After years of seeing AI doled out to help automate the processes that make businesses run smarter, we’re finally seeing AI that can help the average business employee working in the real world. Generative AI, the process of using algorithms to produce data, often in the form of images or text, has exploded in the last few months. What started with OpenAI’s ChatGPT has bloomed into a rapidly evolving subcategory of technology, and companies from Microsoft to Google to Salesforce and Adobe are hopping on board.


What started with ChatGPT has bloomed into an entire subcategory of technology, with Meta, AWS, Salesforce, Google, and Microsoft all racing to out-innovate one another and deliver exciting generative AI capabilities to consumers, enterprises, developers, and more. Here we explore the rapid progress in the AI space.

We don’t learn by brute force repetition. AI shouldn’t either.


Despite impressive progress, today’s AI models are very inefficient learners, taking huge amounts of time and data to solve problems humans pick up almost instantaneously. A new approach could drastically speed things up by getting AI to read instruction manuals before attempting a challenge.

One of the most promising approaches to creating AI that can solve a diverse range of problems is reinforcement learning, which involves setting a goal and rewarding the AI for taking actions that work towards that goal. This is the approach behind most of the major breakthroughs in game-playing AI, such as DeepMind’s AlphaGo.
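To make that reward-driven loop concrete, here is a minimal toy sketch of tabular Q-learning on a five-state corridor (illustrative only; it is not the setup behind AlphaGo or the instruction-manual approach described above):

```python
import numpy as np

# Toy environment: states 0..4 on a line, start at 0, reward +1 for reaching state 4.
# Actions: 0 = step left, 1 = step right.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))          # action-value estimates
alpha, gamma, eps = 0.1, 0.9, 0.3            # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(s, a):
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # The reward signal drives learning: act, observe, improve the estimate.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(np.round(Q, 2))   # in states 0..3, "right" should end up with the higher value
```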



Empowered by artificial intelligence technologies, computers today can engage in convincing conversations with people, compose songs, paint paintings, play chess and go, and diagnose diseases, to name just a few examples of their technological prowess.

These successes could be taken to indicate that computation has no limits. To see if that’s the case, it’s important to understand what makes a computer powerful.

There are two aspects to a computer’s power: the number of operations its hardware can execute per second and the efficiency of the algorithms it runs. The hardware speed is limited by the laws of physics. Algorithms, basically sets of instructions, are written by humans and translated into a sequence of operations that computer hardware can execute. Even if a computer’s speed could reach the physical limit, computational hurdles remain due to the limits of algorithms.
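To make the algorithmic side of that distinction concrete, here is a toy comparison: the same membership question answered on sorted data by two algorithms whose operation counts differ enormously (the counts below are rough illustrations, not precise measurements):

```python
from bisect import bisect_left

def linear_search(xs, target):
    ops = 0
    for x in xs:                  # up to n comparisons
        ops += 1
        if x == target:
            return True, ops
    return False, ops

def binary_search(xs, target):
    i = bisect_left(xs, target)   # roughly log2(n) comparisons internally
    return (i < len(xs) and xs[i] == target), len(xs).bit_length()

xs = list(range(1_000_000))
print(linear_search(xs, 999_999)[1])   # ~1,000,000 operations
print(binary_search(xs, 999_999)[1])   # ~20 operations
```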

Deep learning has made significant strides in text generation, translation, and completion in recent years. Algorithms trained to predict words from their surrounding context have been instrumental in achieving these advancements. However, despite access to vast amounts of training data, deep language models still struggle with tasks such as long story generation, summarization, coherent dialogue, and information retrieval. These models have been shown to have difficulty capturing syntax and semantic properties, and their linguistic understanding remains rather superficial. Predictive coding theory suggests that the human brain makes predictions over multiple timescales and levels of representation across the cortical hierarchy. Although studies have previously shown evidence of speech predictions in the brain, the nature of the predicted representations and their temporal scope remain largely unknown. Recently, researchers analyzed the brain signals of 304 individuals listening to short stories and found that enhancing deep language models with long-range and multi-level predictions improved brain mapping.
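For readers unfamiliar with that training objective, here is a minimal sketch of context-based next-word prediction using an off-the-shelf causal language model. GPT-2 is chosen purely as a convenient public example; the models used in the study may differ.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "The cat sat on the"
inputs = tok(context, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Probability distribution over the next token, given the preceding context.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(repr(tok.decode(int(idx))), round(float(p), 3))
```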

The results of this study revealed a hierarchical organization of language predictions in the cortex. These findings align with predictive coding theory, which suggests that the brain makes predictions over multiple levels of representation and multiple timescales. By incorporating these ideas into deep language models, researchers can bridge the gap between human language processing and deep learning algorithms.

The current study evaluated specific hypotheses of predictive coding theory by examining whether the cortical hierarchy predicts several levels of representation, spanning multiple timescales, beyond the short-range, word-level predictions usually learned in deep language algorithms. Modern deep language models were compared with the brain activity of 304 people listening to spoken stories. It was found that the activations of deep language algorithms supplemented with long-range and high-level predictions best describe brain activity.
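Comparisons of this kind are usually made with a linear "encoding model" that maps model activations to brain recordings. The sketch below uses synthetic arrays and ridge regression purely to illustrate the mechanics; the shapes, features, and scoring choices are assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Stand-ins: X = language-model activations per timepoint, Y = brain recordings
# (e.g. MEG sensors) aligned to the same timepoints of the story.
X = rng.normal(size=(1000, 256))
Y = (X @ rng.normal(size=(256, 64))) * 0.1 + rng.normal(size=(1000, 64))

train, test = slice(0, 800), slice(800, 1000)
enc = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X[train], Y[train])   # one linear map per sensor
pred = enc.predict(X[test])

# "Brain score": correlation between predicted and recorded activity on held-out data.
scores = [pearsonr(pred[:, i], Y[test][:, i])[0] for i in range(Y.shape[1])]
print("mean brain-mapping score:", round(float(np.mean(scores)), 3))
```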


Prof. Karl Friston recently proposed a vision of artificial intelligence that goes beyond machines and algorithms, and embraces humans and nature as part of a cyber-physical ecosystem of intelligence. This vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals. This leads to a formal account of collective intelligence that rests on shared narratives and goals.
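As a very rough, toy-level illustration of "acting to reduce uncertainty" (a drastic simplification of the active-inference formalism, with made-up numbers), the snippet below has an agent pick whichever observation-gathering action is expected to shrink the entropy of its belief the most:

```python
import numpy as np

# Two hidden states; each action yields observations of different reliability.
# likelihood[a][s, o] = p(observation o | state s, action a)  (hypothetical numbers)
likelihood = {
    "look_left":  np.array([[0.9, 0.1], [0.1, 0.9]]),   # informative observation
    "look_right": np.array([[0.6, 0.4], [0.4, 0.6]]),   # noisy observation
}
belief = np.array([0.5, 0.5])   # prior over the hidden states

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def expected_posterior_entropy(belief, A):
    total = 0.0
    for o in range(A.shape[1]):
        p_o = belief @ A[:, o]            # predicted probability of observing o
        post = belief * A[:, o] / p_o     # Bayesian update if o were observed
        total += p_o * entropy(post)
    return total

# The agent acts to reduce uncertainty: it chooses the more informative action.
scores = {a: expected_posterior_entropy(belief, A) for a, A in likelihood.items()}
print(min(scores, key=scores.get), scores)
```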

To realize this vision, Friston suggests developing a shared hyper-spatial modelling language and transaction protocol, as well as novel methods for measuring and optimizing collective intelligence. This could harness the power of artificial intelligence for the common good, without compromising human dignity or autonomy. It also challenges us to rethink our relationship with technology, nature, and each other, and invites us to join a global community of sense-makers who are curious about the world and eager to improve it.

Pod version: https://podcasters.spotify.com/pod/show/machinelearningstree…on-e208f50

LLM stands for Large Language Model. These are advanced machine learning models trained on massive volumes of text to comprehend and generate natural language. Examples of LLMs include GPT-3 (Generative Pre-trained Transformer 3) and BERT (Bidirectional Encoder Representations from Transformers). LLMs are trained on enormous amounts of data, often billions of words, to develop a broad understanding of language. They can then be fine-tuned on tasks such as text classification, machine translation, or question answering, making them highly adaptable to various language-based applications.
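To show what "fine-tuning" looks like in practice, here is a minimal, hypothetical sketch that adapts a pretrained BERT encoder to a two-class text-classification task with the Hugging Face transformers library. The data and the single optimization step are toy placeholders, purely for illustration.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# A fresh classification head (2 labels) is attached on top of the pretrained encoder.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["great movie", "terrible plot"]      # toy examples
labels = torch.tensor([1, 0])                 # assumed labels: 1 = positive, 0 = negative

batch = tok(texts, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
out = model(**batch, labels=labels)           # returns cross-entropy loss and logits
out.loss.backward()                           # one gradient step of fine-tuning
optimizer.step()
print(float(out.loss))
```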

LLMs struggle with arithmetic reasoning tasks and frequently produce incorrect responses. Unlike many natural language understanding tasks, math problems usually have only one correct answer, making it difficult for LLMs to generate precise solutions. To our knowledge, no current LLM indicates a confidence level for its responses, resulting in a lack of trust in these models and limiting their acceptance.

To address this issue, researchers proposed ‘MathPrompter,’ which improves LLM performance on mathematical problems and increases confidence in the resulting predictions. MathPrompter is an AI-powered tool that helps users solve math problems by generating step-by-step solutions. It uses deep learning algorithms and natural language processing techniques to understand and interpret math problems, then generates a solution that explains each step of the process.
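One ingredient reported for MathPrompter is checking that independently generated solution forms agree before trusting an answer. The sketch below illustrates that general consistency-check idea with hard-coded, hypothetical "LLM outputs"; it is not the authors' pipeline, and the template, prompts, and function names are invented for illustration.

```python
import random

# Template: "A shop sells A items per day for B days. How many items in total?"
# Hypothetical LLM outputs for two independent prompts about the same template:
algebraic_answer = "A * B"
python_answer = "def solve(A, B):\n    return A * B"

namespace = {}
exec(python_answer, namespace)   # build the candidate solver from the generated code

# Trust the answer only if both forms agree on random substitutions for the variables.
agree = all(
    eval(algebraic_answer, {"A": a, "B": b}) == namespace["solve"](a, b)
    for a, b in [(random.randint(1, 100), random.randint(1, 100)) for _ in range(5)]
)

if agree:
    print("confident answer for A=12, B=7:", namespace["solve"](12, 7))
else:
    print("solutions disagree; flag low confidence")
```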

[Russ Maschmeyer] and Spatial Commerce Projects developed WonkaVision to demonstrate how 3D eye tracking from a single webcam can support rendering a graphical virtual reality (VR) display with realistic depth and space. Spatial Commerce Projects is a Shopify lab working to provide concepts, prototypes, and tools to explore the crossroads of spatial computing and commerce.

The graphical output provides a real sense of depth and three-dimensional space using an optical illusion that reacts to the viewer’s eye position. The eye position is used to render view-dependent images, making the computer screen feel like a window into a realistic 3D virtual space: objects beyond the window appear to recede with depth, while objects in front of the window appear to project out into the space in front of the screen. The downside is that the experience only works for one viewer.
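The standard way to get this "window" effect is an off-axis (asymmetric-frustum) projection driven by the tracked eye position. The sketch below computes such a projection matrix for a screen-sized window under assumed units and dimensions; it shows the generic technique, not necessarily the exact math used in WonkaVision.

```python
import numpy as np

def off_axis_projection(eye, w, h, near=0.1, far=100.0):
    """Asymmetric frustum for a w x h screen centred at the origin in the z = 0
    plane, viewed from eye = (ex, ey, ez) with ez > 0 in front of the screen."""
    ex, ey, ez = eye
    # Project the screen edges onto the near plane as seen from the eye.
    left   = (-w / 2 - ex) * near / ez
    right  = ( w / 2 - ex) * near / ez
    bottom = (-h / 2 - ey) * near / ez
    top    = ( h / 2 - ey) * near / ez
    # Standard OpenGL-style frustum matrix built from those extents.
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

# Hypothetical numbers: a 0.6 m x 0.34 m monitor viewed from 0.6 m, eye slightly right of centre.
print(off_axis_projection(eye=(0.05, 0.0, 0.6), w=0.6, h=0.34))
```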

Eye tracking is performed using Google’s MediaPipe Iris library, which relies on the fact that the iris diameter of the human eye is close to 11.7 mm across most of the population. Computer vision algorithms in the library use this geometric fact to locate and track human irises efficiently and with high accuracy.
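That fixed physical iris size is what lets a single webcam recover metric depth. Under a simple pinhole-camera assumption (the focal length and pixel measurement below are hypothetical), the eye's distance follows directly:

```python
# Back-of-the-envelope depth estimate from iris size, using the ~11.7 mm physical
# diameter cited for MediaPipe Iris. Pinhole-camera relation:
#   distance = focal_length_px * real_diameter / diameter_px
IRIS_DIAMETER_MM = 11.7

def eye_distance_mm(focal_length_px, iris_diameter_px):
    return focal_length_px * IRIS_DIAMETER_MM / iris_diameter_px

# Hypothetical numbers: a webcam with ~900 px focal length seeing a 22 px wide iris.
print(round(eye_distance_mm(900, 22)), "mm from the camera")
```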