Aug 25, 2023

Toward AGI — What is Missing?

Posted in categories: economics, physics, robotics/AI, singularity

Artificial General Intelligence (AGI) is a term for Artificial Intelligence systems that meet or exceed human performance on the broad range of tasks that humans are capable of performing. There are benefits and downsides to AGI. On the upside, AGIs could do most of the labor that consumes a vast amount of humanity’s time and energy. AGI could herald a utopia where no one has wants that cannot be fulfilled. AGI could also result in an unbalanced situation where one company (or a few) dominates the economy, exacerbating the existing divide between the top 1% and the rest of humankind. Beyond that, the argument goes, a super-intelligent AGI could find it beneficial to enslave humans for its own purposes, or to exterminate humans so as not to compete for resources. One hypothetical scenario is that an AGI smarter than humans could simply design a better AGI, which could, in turn, design an even better AGI, leading to what is called a hard take-off and the singularity.
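To make the hard take-off intuition concrete, here is a minimal toy simulation of recursive self-improvement. The model is purely an illustrative sketch: the gain coefficient (0.1) and the "returns on intelligence" exponent r are assumptions of mine, not parameters of any real system. Each generation designs a successor whose capability gain scales with the current capability raised to r.

```python
# Toy model of recursive self-improvement (all parameters are
# illustrative assumptions, not claims about real AI systems).
# Each "generation" of AGI designs its successor; the achievable
# gain is assumed to scale as capability**r.

def simulate_takeoff(initial_capability: float, r: float, generations: int) -> list[float]:
    """Return the capability of each successive AGI generation.

    r > 1 : each generation improves faster than the last (hard take-off)
    r = 1 : steady compounding growth
    r < 1 : diminishing returns; capability levels off
    """
    capabilities = [initial_capability]
    for _ in range(generations):
        current = capabilities[-1]
        # Hypothetical design step: successor = current capability
        # plus a gain that scales with capability**r.
        capabilities.append(current + 0.1 * current ** r)
    return capabilities

if __name__ == "__main__":
    for r in (0.5, 1.0, 1.5):
        trajectory = simulate_takeoff(1.0, r, generations=30)
        print(f"r={r}: final capability ~ {trajectory[-1]:.2f}")
```

With r above 1 the trajectory grows faster with each generation, which is the hard take-off case; with r below 1, returns diminish and capability plateaus. The entire debate, in this toy framing, reduces to which regime the real world is in.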

I do not know of any theory that claims that AGI or the singularity is impossible. However, I am generally skeptical of arguments that Large Language Models such as the GPT series (GPT-2, GPT-3, GPT-4, GPT-X) are on the pathway to AGI. This article will attempt to explain why I believe that to be the case, and what I think is missing should humanity (or some members of it) choose to try to achieve AGI. I will also try to convey a sense of why it is easy to talk about a so-called “recipe for AGI” in the abstract, but why physics itself will prevent any sudden and unexpected leap from where we are now to AGI or super-AGI.

To achieve AGI it seems likely we will need one or more of the following:
