


If you’ve ever wondered how big a black hole is, you’ve come to the right place! Learn about the sizes of black holes and the multi-layered answer.

AI or bust. Right now, AI is what everyone is talking about, and for good reason. After years of seeing AI doled out to automate the processes that make businesses run smarter, we’re finally seeing AI that can help the average business employee working in the real world. Generative AI, the process of using algorithms to produce data, often in the form of images or text, has exploded in the last few months. What started with OpenAI’s ChatGPT has bloomed into a rapidly evolving subcategory of technology, and companies from Microsoft to Google to Salesforce and Adobe are hopping on board.


What started with ChatGPT has bloomed into an entire subcategory of technology, with Meta, AWS, Salesforce, Google, and Microsoft all racing to out-innovate one another and deliver exciting generative AI capabilities to consumers, enterprises, developers, and more. Here we explore the rapid progress in the AI space.

We don’t learn by brute force repetition. AI shouldn’t either.


Despite impressive progress, today’s AI models are very inefficient learners, taking huge amounts of time and data to solve problems humans pick up almost instantaneously. A new approach could drastically speed things up by getting AI to read instruction manuals before attempting a challenge.

One of the most promising approaches to creating AI that can solve a diverse range of problems is reinforcement learning, which involves setting a goal and rewarding the AI for taking actions that work towards that goal. This is the approach behind most of the major breakthroughs in game-playing AI, such as DeepMind’s AlphaGo.

As powerful as the technique is, it essentially relies on trial and error to find an effective strategy. This means these algorithms can spend the equivalent of several years blundering through video and board games until they hit on a winning formula.
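The trial-and-error loop described above can be sketched in a few lines. Below is a minimal tabular Q-learning example on an invented five-state corridor where the agent is rewarded only for reaching the rightmost state; the environment, reward, and hyperparameters are illustrative assumptions, not taken from AlphaGo or any system mentioned here.

```python
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Q-values: expected discounted reward for taking action a in state s
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: walls at both ends, reward only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):        # many blundering episodes...
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the current estimate, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # standard Q-learning update toward reward + discounted future value
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right in every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The point of the sketch is the inefficiency the paragraph describes: even this trivial corridor takes hundreds of blundering steps before the reward signal propagates back into a winning strategy.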

Compare news coverage. Spot media bias. Avoid algorithms. Be well informed. Download the free Ground News app at https://ground.news/HOTU

Researched and Written by Leila Battison.
Narrated and Edited by David Kelly.
Animations by Jero Squartini https://www.fiverr.com/share/0v7Kjv.
Incredible thumbnail art by Ettore Mazza, the GOAT: https://www.instagram.com/ettore.mazza/?hl=en.

Huge thanks to Antonio Padilla for inspiring the section on TREE — his book is wonderful, I have already read it twice:

If you like our videos, check out Leila’s Youtube channel:

https://www.youtube.com/channel/UCXIk7euOGq6jkptjTzEz5kQ

Music from Epidemic Sound and Artlist.

Empowered by artificial intelligence technologies, computers today can engage in convincing conversations with people, compose songs, paint paintings, play chess and Go, and diagnose diseases, to name just a few examples of their technological prowess.

These successes could be taken to indicate that computation has no limits. To see if that’s the case, it’s important to understand what makes a computer powerful.

There are two aspects to a computer’s power: the number of operations its hardware can execute per second and the efficiency of the algorithms it runs. The hardware speed is limited by the laws of physics. Algorithms, basically sets of instructions, are written by humans and translated into a sequence of operations that computer hardware can execute. Even if a computer’s speed could reach the physical limit, computational hurdles remain due to the limits of algorithms.
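The algorithmic side of this distinction is easy to make concrete. The sketch below counts comparisons for two algorithms solving the same task, searching a sorted list; the instrumented counters are for illustration only. No matter how fast the hardware, the first algorithm must do on the order of a million operations where the second needs about twenty.

```python
def linear_search(items, target):
    """O(n): examine elements one by one; returns (found, comparisons)."""
    comparisons = 0
    for x in items:
        comparisons += 1
        if x == target:
            return True, comparisons
    return False, comparisons

def binary_search(items, target):
    """O(log n): halve the search interval on each step."""
    comparisons, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        comparisons += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return True, comparisons
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, comparisons

data = list(range(1_000_000))
found_lin, n_lin = linear_search(data, 999_999)
found_bin, n_bin = binary_search(data, 999_999)
print(n_lin, n_bin)   # 1,000,000 comparisons versus about 20
```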

Deep learning has made significant strides in text generation, translation, and completion in recent years. Algorithms trained to predict words from their surrounding context have been instrumental in these advances. However, despite access to vast amounts of training data, deep language models still struggle with tasks like long-form story generation, summarization, coherent dialogue, and information retrieval. These models have been shown to have trouble capturing syntactic and semantic properties, and their linguistic understanding remains superficial. Predictive coding theory suggests that the human brain makes predictions over multiple timescales and levels of representation across the cortical hierarchy. Although earlier studies found evidence of speech predictions in the brain, the nature of the predicted representations and their temporal scope remained largely unknown. Recently, researchers analyzed the brain signals of 304 individuals listening to short stories and found that enhancing deep language models with long-range and multi-level predictions improved brain mapping.
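The core training objective mentioned above, predicting a word from its context, can be shown at toy scale. The sketch below uses simple bigram counts on an invented corpus rather than a neural network; real deep language models learn the same context-conditioned prediction over billions of words.

```python
from collections import Counter, defaultdict

# Invented toy corpus; real models train on billions of words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each one-word context.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word given a one-word context."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))   # → "on"
print(predict_next("on"))    # → "the"
```

A model with only this one-word window illustrates the limitation the paragraph raises: it cannot capture long-range structure, which is exactly what the multi-level, long-range predictions studied here add.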

The results of this study revealed a hierarchical organization of language predictions in the cortex. These findings align with predictive coding theory, which suggests that the brain makes predictions over multiple levels and timescales of representation. By incorporating these ideas into deep language models, researchers can narrow the gap between human language processing and deep learning algorithms.

The current study evaluated specific hypotheses of predictive coding theory by examining whether the cortical hierarchy predicts several levels of representation, spanning multiple timescales, beyond the short-range, word-level predictions usually learned in deep language algorithms. The researchers compared modern deep language models with the brain activity of 304 people listening to spoken stories, and discovered that the activations of deep language algorithms supplemented with long-range and high-level predictions best described the brain activity.

This show is sponsored by Numerai, please visit them here with our sponsor link (we would really appreciate it) http://numer.ai/mlst.

Prof. Karl Friston recently proposed a vision of artificial intelligence that goes beyond machines and algorithms, and embraces humans and nature as part of a cyber-physical ecosystem of intelligence. This vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals. This leads to a formal account of collective intelligence that rests on shared narratives and goals.
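The active-inference principle, learning from observations and acting to reduce uncertainty, can be illustrated with a deliberately tiny example. Everything below is invented for the sketch: a two-door world, a noisy sensor, and a fixed sequence of readings. Full active inference also selects actions expected to reduce uncertainty, by minimizing a variational free-energy functional; here we show only the belief-updating half.

```python
import math

belief = [0.5, 0.5]        # P(prize behind door 0), P(prize behind door 1)
RELIABILITY = 0.8          # the sensor reports the truth 80% of the time

def entropy(p):
    """Shannon entropy in bits: the agent's current uncertainty."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def update(belief, door, said_yes):
    """Bayes' rule: reweight each hypothesis by how well it explains the reading."""
    posterior = []
    for hyp in (0, 1):
        p_yes = RELIABILITY if hyp == door else 1 - RELIABILITY
        likelihood = p_yes if said_yes else 1 - p_yes
        posterior.append(likelihood * belief[hyp])
    z = sum(posterior)
    return [p / z for p in posterior]

h0 = entropy(belief)       # 1 bit: maximal uncertainty
# Hypothetical readings from repeatedly peeking at door 0 (mostly "yes"):
readings = [True, True, False, True, True, True, True, False, True, True]
for said_yes in readings:
    belief = update(belief, door=0, said_yes=said_yes)

print(belief, entropy(belief))   # belief concentrates, uncertainty falls
```

After ten noisy observations the belief concentrates almost entirely on door 0 and the entropy drops from 1 bit to nearly zero, the "reducing uncertainty" that the paragraph describes.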

To realize this vision, Friston suggests developing a shared hyper-spatial modelling language and transaction protocol, as well as novel methods for measuring and optimizing collective intelligence. This could harness the power of artificial intelligence for the common good, without compromising human dignity or autonomy. It also challenges us to rethink our relationship with technology, nature, and each other, and invites us to join a global community of sense-makers who are curious about the world and eager to improve it.

Pod version: https://podcasters.spotify.com/pod/show/machinelearningstree…on-e208f50
Support us! https://www.patreon.com/mlst.
MLST Discord: https://discord.gg/aNPkGUQtc5

TOC:
Intro [00:00:00]
Numerai (Sponsor segment) [00:07:10]
Designing Ecosystems of Intelligence from First Principles (Friston et al) [00:09:48]
Information / Infosphere and human agency [00:18:30]
Intelligence [00:31:38]
Reductionism [00:39:36]
Universalism [00:44:46]
Emergence [00:54:23]
Markov blankets [01:02:11]
Whole part relationships / structure learning [01:22:33]
Enactivism [01:29:23]
Knowledge and Language [01:43:53]
ChatGPT [01:50:56]
Ethics (is-ought) [02:07:55]
Can people be evil? [02:35:06]
Ethics in AI, subjectiveness [02:39:05]
Final thoughts [02:57:00]


LLM stands for Large Language Model. These are advanced machine learning models trained to comprehend massive volumes of text data and generate natural language. Examples of LLMs include GPT-3 (Generative Pre-trained Transformer 3) and BERT (Bidirectional Encoder Representations from Transformers). LLMs are trained on massive amounts of data, often billions of words, to develop a broad understanding of language. They can then be fine-tuned on tasks such as text classification, machine translation, or question answering, making them highly adaptable to various language-based applications.

LLMs struggle with arithmetic reasoning tasks and frequently produce incorrect responses. Unlike open-ended natural language understanding, math problems usually have only one correct answer, making it difficult for LLMs to generate precise solutions. As far as is known, no current LLM indicates a confidence level in its responses, which undermines trust in these models and limits their adoption.

To address this issue, researchers proposed ‘MathPrompter,’ which improves LLM performance on mathematical problems and increases confidence in the predictions. MathPrompter is an AI-powered tool that helps users solve math problems by generating step-by-step solutions. It uses deep learning algorithms and natural language processing techniques to understand and interpret math problems, then generates a solution explaining each step of the process.
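One way to attach a confidence level to a math answer is consensus checking: derive the answer several independent ways and measure how often the derivations agree. The sketch below is a simplified, hand-written stand-in for that idea, with the two "candidate solutions" and the example problem ("a train travels d km in t hours; what is its speed?") invented for illustration, not taken from the MathPrompter paper.

```python
import random

def candidate_algebraic(d, t):
    return d / t                 # speed = distance / time

def candidate_program(d, t):
    hours = t
    return d * (1 / hours)       # same quantity, derived a second way

def consensus_check(candidates, trials=5):
    """Evaluate all candidates on random inputs; return the agreement rate."""
    random.seed(0)
    agreements = 0
    for _ in range(trials):
        d = random.randint(10, 500)
        t = random.randint(1, 10)
        # round to absorb floating-point noise before comparing answers
        results = {round(c(d, t), 9) for c in candidates}
        agreements += (len(results) == 1)    # all candidates agree
    return agreements / trials

confidence = consensus_check([candidate_algebraic, candidate_program])
print(confidence)    # 1.0 when the derivations always agree
```

When candidate derivations disagree on random inputs, the agreement rate falls below 1.0, giving a crude but usable confidence signal of the kind the paragraph describes.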

[Russ Maschmeyer] and Spatial Commerce Projects developed WonkaVision to demonstrate how 3D eye tracking from a single webcam can support rendering a graphical virtual reality (VR) display with realistic depth and space. Spatial Commerce Projects is a Shopify lab working to provide concepts, prototypes, and tools to explore the crossroads of spatial computing and commerce.

The graphical output provides a real sense of depth and three-dimensional space using an optical illusion that reacts to the viewer’s eye position. The eye position is used to render view-dependent images: the computer screen is made to feel like a window into a realistic 3D virtual space, where objects beyond the window appear to have depth and objects in front of the window appear to project out into the space in front of the screen. The downside is that the experience only works for one viewer at a time.

Eye tracking is performed using Google’s MediaPipe Iris library, which relies on the fact that the iris diameter of the human eye is almost exactly 11.7 mm for most humans. Computer vision algorithms in the library use this geometrical fact to efficiently locate and track human irises with high accuracy.
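The geometric trick behind this is a pinhole-camera relation: since the true iris diameter is known, its apparent size in pixels reveals the viewer's distance. The sketch below shows that relation; the focal length and pixel measurement are invented example values, and a real pipeline would calibrate the camera and take the iris diameter in pixels from the MediaPipe landmarks.

```python
IRIS_DIAMETER_MM = 11.7          # near-constant across adult humans

def distance_from_iris(iris_px, focal_px):
    """Pinhole model: distance = focal_length * true_size / image_size."""
    return focal_px * IRIS_DIAMETER_MM / iris_px

# Example: an assumed 600 px focal length and a 14 px iris image
dist_mm = distance_from_iris(iris_px=14.0, focal_px=600.0)
print(round(dist_mm))            # about 500 mm, i.e. half a metre away
```

With the viewer's 3D eye position recovered this way, the renderer can recompute the view-dependent projection each frame, producing the window-into-space illusion described above.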

To keep his Universe static, Einstein added a term into the equations of general relativity, one he initially dubbed a negative pressure. It soon became known as the cosmological constant. Mathematics allowed the concept, but it had absolutely no justification from physics, no matter how hard Einstein and others tried to find one. The cosmological constant clearly detracted from the formal beauty and simplicity of Einstein’s original equations of 1915, which achieved so much without any need for arbitrary constants or additional assumptions. It amounted to a cosmic repulsion chosen to precisely balance the tendency of matter to collapse on itself. In modern parlance we call this fine tuning, and in physics it is usually frowned upon.
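In symbols, the modification is a single extra term. Written in standard modern notation (not quoted from this article), Einstein's 1915 field equations and the 1917 version with the cosmological constant inserted are:

```latex
% 1915 field equations: spacetime curvature (left) sourced by matter-energy (right)
G_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}

% 1917: the cosmological-constant term added to hold the Universe static
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

The $\Lambda g_{\mu\nu}$ term acts as the cosmic repulsion described above: for a suitably chosen positive $\Lambda$, it exactly balances the gravitational tendency of matter to collapse.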

Einstein knew that the only reason for his cosmological constant to exist was to secure a static and stable finite Universe. He wanted this kind of Universe, and he did not want to look much further. Quietly hiding in his equations, though, was another model for the Universe, one with an expanding geometry. In 1922, the Russian physicist Alexander Friedmann would find this solution. As for Einstein, it was only in 1931, after visiting Hubble in California, that he accepted cosmic expansion and discarded at long last his vision of a static Cosmos.

Einstein’s equations provided a much richer Universe than the one Einstein himself had originally imagined. But like the mythic phoenix, the cosmological constant refuses to go away. Nowadays it is back in full force, as we will see in a future article.