The Concept of Superintelligence
Updated: Mar 24, 2021
UCL FinTech Society's Emma Prevot dives deeper into the concept of Superintelligence and how we might achieve it. Moreover, she investigates whether such an advanced cognitive system could be a threat or an advantage for our society.
Superintelligence can be defined as:
"any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest"
This is the most widely used definition, proposed by Nick Bostrom, author of the book “Superintelligence: Paths, Dangers, Strategies”. Have we already built machines that exceed human intelligence, or even something comparable to human intellect? The answer is no to both.
The current level of machine AI is referred to as Artificial narrow intelligence (ANI), or weak AI: goal-oriented systems designed to perform singular tasks. As a matter of fact, ANI is already smarter than humans, but only at specific tasks. The first example of an AI system defeating a human dates back to 1997, when IBM’s Deep Blue won against Garry Kasparov at chess. Since then, several other algorithms have been written for many different games, and some of them have achieved superhuman levels; for instance in Scrabble, or in Jeopardy!, where IBM’s Watson defeated the two all-time-greatest players in 2011.
One may argue that these are just games, and that is true. Nevertheless, AI currently outperforms humans in many other fields: in visual recognition; in reading comprehension, where Alibaba built an AI that can read better than the average human; in contract review, where the LawGeex neural network obtained a 94% accuracy rate against an average of 85% for human lawyers; in medicine, where present-day AI can detect some cancers better than human doctors; and machines are far faster and more accurate at calculation. Even animals can perform at superhuman levels in narrow domains; bats, for instance, can interpret sonar signals better than we do.
Nonetheless, humans possess general intelligence, which machines have not yet reached. Machine intelligence at this level is referred to as Artificial general intelligence (AGI), or strong AI.
Artificial superintelligence (ASI) goes beyond this: rather than merely mimicking human intelligence and behaviours, such machines would be self-aware and would surpass human capacity in every field. This type of intellect can be further split into different forms according to different parameters, such as how it can be achieved, in which characteristics it surpasses human intellect, and so on.
A first way to break down the simple notion of superintelligence is to distinguish three super-capabilities:
1. Speed Superintelligence:
A system that can do everything humans do but much faster.
2. Collective Superintelligence:
A system composed of many smaller intellects whose overall performance outsmarts any current intellect.
Humanity's collective intelligence today is already approaching super-human levels when compared to the Pleistocene baseline.
3. Quality Superintelligence:
A system at least as fast as human intellect but qualitatively smarter.
Another very interesting distinction is based on how we could reach superintelligence and by what means; we distinguish between:
1. Artificial Superintelligence
2. Biological Superintelligence
Artificial Superintelligence involves a machine (e.g., a computer) whose cognitive abilities surpass those of humans in all respects. The best-known path to such intelligence is the so-called “intelligence explosion”.
If one day we are able to create a narrow AI that outsmarts us at designing AI algorithms, this system could improve itself ever faster and better, becoming more and more skilled. Eventually, this positive feedback loop would result in machine superintelligence. Such a cognitive machine would not share our human limitations and might invent and discover far beyond anything we can.
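The feedback-loop argument above can be sketched as a toy simulation. This is purely illustrative: the growth rate and the "human baseline" value are arbitrary assumptions chosen to show the shape of the dynamic, not empirical claims about real AI systems.

```python
def self_improve(capability: float, generations: int, rate: float = 0.5) -> list[float]:
    """Toy model of recursive self-improvement: each generation the system
    improves itself in proportion to its current capability, so a better
    designer takes a bigger next step (a positive feedback loop)."""
    history = [capability]
    for _ in range(generations):
        capability += rate * capability  # improvement scales with capability
        history.append(capability)
    return history

trajectory = self_improve(capability=1.0, generations=20)
human_level = 100.0  # arbitrary baseline for illustration
crossed = next(i for i, c in enumerate(trajectory) if c >= human_level)
print(f"Exceeds the human baseline at generation {crossed}")
```

Because the growth is compounding (here, multiplying by 1.5 each generation), the system spends most of its history far below the baseline and then crosses it abruptly, which is why the "explosion" framing is used.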
Another possible pathway to Artificial Superintelligence is through brain-emulation. As Nick Bostrom points out, computer components are already considerably faster than biological neurons.
An emulated human mind run on much faster hardware could think billions of times faster and would be an example of Speed Artificial Superintelligence.
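The speed gap Bostrom points to can be made concrete with some rough order-of-magnitude arithmetic. The figures below are common illustrative estimates, not precise measurements.

```python
# Back-of-the-envelope comparison of biological and electronic hardware.
neuron_peak_hz = 200       # biological neurons fire at most ~200 times per second
transistor_hz = 2e9        # a modern CPU core switches at ~2 GHz

speedup = transistor_hz / neuron_peak_hz
print(f"Switching-speed gap: ~{speedup:.0e}x")  # prints "~1e+07x"

axon_speed_m_s = 120       # fast myelinated axons conduct at ~120 m/s
signal_speed_m_s = 3e8     # electronic/optical signals travel near light speed
print(f"Signal-propagation gap: ~{signal_speed_m_s / axon_speed_m_s:.0e}x")
```

Even on these crude numbers, the raw hardware gap is millions-fold, which is the basis for the claim that an emulated mind on fast hardware would be a Speed Superintelligence.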
Biological Superintelligence could instead be achieved by enhancing individual humans or their social and reproductive behaviour. One pathway is biological cognitive enhancement: the modification of genes or molecules to improve general intelligence. Although this might seem harder to achieve, it too could trigger an intelligence explosion.
Greater-than-human intelligence could also be achieved via a brain-computer interface (BCI or BMI, “m” for machine), which could be considered a middle point between Artificial and Biological Superintelligence. It provides a direct communication channel between the brain and a computer device. Invasive BCIs require surgery to implant electrodes in the brain that can transmit signals; partially invasive BCIs are implanted inside the skull but rest outside the brain; non-invasive BCIs exploit neuroimaging techniques, such as EEG, as interfaces.
Neuralink, one of Elon Musk’s latest projects, is an example of an implantable (i.e., invasive) BMI. Such technology is not only a first step towards superintelligence; it could also provide treatments for a wide range of neurological disorders. It has the potential to revolutionise the way we communicate with each other and with our environment, and it could eventually enhance our cognitive abilities.
It is worth adding that there is no absolute Superintelligence. If and when we reach a greater-than-human level (call it SuperLevel-1), it will eventually become the standard intelligence, and then there will be a SuperLevel-2 to achieve, and so on. We do not know whether there is a limit to intelligence; we do not even have a complete definition of what intelligence is. This is more a philosophical question than a scientific one.
Since delving deeper into the concept of Superintelligence can become truly mind-blowing, let’s now try to understand where we are in our path to greater-than-human intellect and if we need to prepare for its advent.
How far are we from reaching superintelligence? Do we need to arm ourselves against it?
There is strong disagreement among researchers about how likely present-day human intelligence is to be surpassed. Some believe it will take a very long time, since the only way to achieve it is biological: humans would have to evolve and modify their own biology to reach superintelligence. Others are convinced that we need to foster an “intelligence explosion”. Some even argue that a machine will never be able to achieve general intelligence and become self-conscious, since only a biological system can.
Nevertheless, the creation of algorithms that can replicate the complex computational abilities of the human brain is theoretically possible, as suggested by the Church-Turing thesis: given enough time and memory, anything that is computable at all can be computed algorithmically. Despite being theoretically possible, we are very far from reaching even artificial general intelligence, let alone superintelligence. A recent survey of 995 AI experts predicted that we may expect the emergence of AGI by 2060.
We can conclude that Superintelligence is something we are unlikely to achieve this century. But some are already worried about its potential advent: in 2018 Elon Musk said “Mark my words — A.I. is far more dangerous than nukes”, and Stephen Hawking, one of the major voices in the debate about how humanity could benefit from AI, predicted that Artificial Intelligence “could spell the end of the human race”.
Nick Bostrom stated: “Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”
This introduces the AI control problem (how to build an agent that helps its creators rather than harming them) and machine ethics, the field concerned with adding moral behaviours to machines that use artificial intelligence.
Work must urgently be done to protect ourselves. The power and potential of AI are immense; we could benefit greatly from its evolution and transformation, but we need to harness that power for the safest possible outcomes.
A must-read if you are interested in the topic:
Nick Bostrom (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.