AI Singularities and Quantum Computing
In mathematics, a singularity is a point at which a mathematical object (a function, a surface, or an equation) is "misbehaved" in some sense; but we prefer to dignify the situation and instead say that the behavior of the object at that point is undefined. On the right you see the graph of the function \(f(x) = 1 / x^2\), defined for real values of \(x\). The function is not defined at the point \(x = 0\), and you can recognize that there is something special about its behavior around that point: it blows up as \(x\) gets closer to \(0\). This picture is worth keeping in mind when we speak of AI singularities. The idea of a technological singularity borrows from the mathematical term and is attributed (by the mathematician Stan Ulam) to John von Neumann. When von Neumann warned of a technological singularity, he was thinking of the progress of technology as a whole toward a point of no return, with an associated loss of human control, although he did not explicitly attribute this to the creation of superhuman AI:
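The blow-up near \(x = 0\) is easy to see numerically; a minimal sketch (the sample points are arbitrary choices, picked just to show the trend):

```python
# A quick numerical look at the singularity of f(x) = 1/x^2 at x = 0:
# as x approaches 0, the function value grows without bound.

def f(x):
    return 1 / x**2

for x in [1.0, 0.1, 0.01, 0.001]:
    print(f"f({x}) = {f(x):.0f}")
```

Each step closer to zero multiplies the value by 100, which is exactly the "misbehavior" the graph shows.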
One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
The possibility of an AI-based singularity, meaning an AI surpassing human intelligence and escaping human control, first appeared in a 1965 article by the British mathematician I. J. Good. Good worked with Alan Turing at the famous Bletchley Park and later at the University of Manchester.
Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
The AI singularity was popularized by the mathematician Vernor Vinge in his 1993 article The Coming Technological Singularity: How to Survive in the Post-Human Era. The concept received its definitive formulation in 2005, in Ray Kurzweil's non-fiction book The Singularity Is Near: When Humans Transcend Biology.
Perhaps the most familiar concept of a singularity, and the most visually convincing one, comes from physics; it is these gravitational singularities that will give us the opportunity to fit some of the AI concepts into a similarly vivid picture.
Gravitational singularities originated with Einstein's general theory of relativity, which explained gravitation in geometric terms, namely as curvature of space-time itself. Space-time is curved more strongly around heavier objects, and there are points at which the curvature becomes infinite; these points are called gravitational singularities.
The objects doing all this bending toward a gravitational singularity are called black holes. What this picture of a gravitational singularity adds to the purely mathematical one above is the concept of an event horizon, which is also useful when discussing AI singularities. The event horizon is the boundary around the black hole past which gravity is so strong that not even light can escape; we cannot observe, or say much about, what happens beyond it. So it is worth keeping in mind both the singularity itself and the event horizon that comes with it.
(The Big Bang is the best-known and most talked-about singularity. It is similar to a black hole singularity in that it posits infinite density, but it has additional properties that would divert our attention too far. For us, the picture of a black hole as a singularity is more than enough.)
Now let's modify the picture of a black hole singularity above to obtain a similar picture for the AI singularity. It is a picture worth keeping in mind when encountering mentions of the different types of AI: ANI, AGI, and ASI; we have left AWI out for now. Where is our civilization at the present time? We have left the quiet region where the effects of AI were negligible. We are currently inside the equivalent of the ergosphere; the ergosphere (as used for a black hole) is a more technical term whose description you can find online, but we can think of it as the region where we can still do something about AI: we are actively engaged with it, we are controlling it, and we are developing various ANI systems. For us, the event horizon is nothing but AGI (human-level AI). Once AI reaches the AGI stage, it is out of human control and quickly spirals toward ASI, the point of the singularity. Most of the speculative talk about our AI future refers to the area we called the AI distortion. Nothing we say about it can actually be verified, just as in the case of a black hole.
It's worth mentioning that while black hole singularities are backed by an accepted physical theory, AI singularities are useful but purely speculative. Until recently, the visualization of black holes was based mostly on large computer simulations. This will be the third concept to keep in mind for our purposes, that of a computer simulation; as we shall see below, AI singularities are tightly bound to the idea of natural reality versus simulated reality. Before we leave these analogies behind, it is also worth appreciating the recent success of actually seeing black holes through a precisely synchronized network of telescopes, not just simulating them on computers. It may leave us with a comforting sense of mission: our AI future is not just speculative, and there are small and large successes to be enjoyed at a practical level ... now, and without simulations.
Because the AI singularities we discuss here are more speculative than black holes or the Big Bang, there is much more controversy surrounding them. But, perhaps following von Neumann, an AI singularity should be understood in the wider context of overall technological progress (which would include the AI singularity but also other technologies, like quantum computing, nanotechnology, gene editing, etc.); it is likely that all these technological advances will feed off each other and accelerate the singularity. The only connection we make with this set of technologies is with quantum computing, for two reasons. First, as we have mentioned many times, AI at this moment owes its success to brute computational force. Quantum computing is also about increasing that brute force, and combining the two will accelerate both. Second, there is the potential connection between quantum computing and consciousness, which we have already seen in the AI Versus Human Intelligence article and which will appear again in the Artificial Consciousness article. Here are some explanations as to why technologies that advance at exponential rates motivate our speculations about an AI (or technological) singularity:
In the background article How We Form Political Opinions we saw that it is beneficial to look at the human brain as an inference machine that forms a model of reality, as presented to our senses by the environment we live in, and actively looks for information in order to refine that model. In that process of inference, it computes probabilities for various beliefs, much like the calculation in Bayes' formula. In the end, each of us assigns a different probability to the same belief statement or opinion, and that accounts for the diversity of our positions.
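A minimal sketch of that Bayes'-formula update, showing how two people who weigh the same evidence from different starting points end up at different beliefs (the prior and likelihood numbers are purely illustrative, not empirical values):

```python
# Bayesian belief updating: P(belief | evidence) via Bayes' formula.
# The numbers below are illustrative assumptions, not measured data.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a belief after seeing one piece of evidence."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Two people start with different priors about the same statement...
belief_a, belief_b = 0.5, 0.2
# ...and both see the same evidence (80% likely if the statement is true,
# 30% likely if it is false).
belief_a = bayes_update(belief_a, 0.8, 0.3)
belief_b = bayes_update(belief_b, 0.8, 0.3)
print(belief_a, belief_b)  # same evidence, different posteriors
```

Both posteriors move up, but they do not converge after one observation, which is the diversity-of-positions point in miniature.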
But could there be a more definitive probability assigned to an event, one that factors in ALL the evidence there is? More precisely, could there be a powerful AI system that could look at ALL the evidence, both from the natural world and that produced by humanity, and assign the most precise probability to it? Could there be an AI system that produces the ultimate model of the Universe, including ourselves and all the natural laws? As we saw in the background article Graphs of Data, it is now conceivable that all the knowledge about us and all the knowledge that we have produced about the world around us could in fact be collected and stored into what we called God's Graph (anticipating that we will elaborate and justify that provocative terminology in the article Superintelligence and God).
It is not hard to speculate how such a scenario would unfold, based on the video above. AI systems would reach a point where their optimization becomes exponentially stronger and would then, in no time, reach the final optimization, having used all the knowledge produced by humans and learned everything there is to learn. We have called such intelligence Artificial Superintelligence (ASI) in the diagram 3+1 different types of AI. The idealized probability that this Superintelligence would assign to any statement we make will be called its ASI probability.
We would need to make some political decisions that will determine the software specifications for our AI systems, and verifiable proofs of correctness of those systems, in order to hope that ASI, when it emerges, is benevolent towards humanity. (This ASI will of course try to reach intelligence in other planetary systems, but the possibility of that search being successful, and the associated politics, touch on science fiction and so are outside our scope.) The road to benevolence would take us through religion and morality, and we defer this to the article Superintelligence and God.
The third type of AI system, Artificial Superintelligence (ASI), takes a bit of forward projection, and there is a sensationalist aspect to it that may turn some people off. But the possibility of such a system is very real; to some of us its appearance might even seem natural, perfectly logical, and inevitable. Imagine that a Controlled or Uncontrolled AI System is fed a large percentage of all the human knowledge available today (you may even push that percentage to 100, not a far-fetched assumption given that there are already efforts under way to accomplish exactly that goal). It is very likely that such a system would enter an exponential rate of optimization, the result of which would be the Superintelligence, where the optimization has reached a plateau simply because it has looked at all possible data. This Intelligence will know all there is to know; it will have figured out all the mathematics and physics that underlie our world.
This explosion of intelligence, a sort of AI Big Bang, might be beneficial to humanity, leading to human immortality, or it might lead to our destruction. The Superintelligence term will allow us to refer to such a possibility with some precision and without having to re-explain what it means. It is therefore necessary to start thinking about setting the initial political conditions for such an AI Big Bang, so that it works for our benefit. This does not seem to be a political issue that each country can solve on its own, so we'll assume that some degree of sanity will descend on the blue dot at some point. In the meantime, it is up to us to steer national-level decisions in a direction that ensures our survival as a nation, and hopefully as a species too.
The following video asks whether this Superintelligence will indeed happen, how long our civilization will remain in the AI ergosphere, and other related questions. The ten participants (including the moderator) are among the most respected voices on the future of AI, and it is a rare opportunity to see them all in one place at the same time:
Of course, we do not know when this AI Big Bang will take place. But there is an eerily general agreement that it will eventually happen, with or without us humans around. Among those of us working with AI, the estimates are continuously being revised towards shorter timeframes. And so there are big questions about our hope that we can somehow control the conditions that will lead to this AI Big Bang. It is possible that in the moments preceding it there will be many hyper-intelligent AI systems, not just one. It is also possible that they will appear in the very large data centers of the future, the seeds of which one can glimpse in the ones currently being developed by corporations like Google, Amazon, Microsoft, Apple, and Facebook, or in government-controlled facilities.
Intelligence, as we commonly know it, has both competitive and cooperative attributes. It also has a strong drive to survive. So these hyper-intelligent AI systems will build the defensive and offensive capabilities necessary to survive and win. Most likely, at Big Bang time, the exponential rate of optimization will leave only one winner. What will this winner be optimizing for? Will it optimize for our immortality or for our immediate destruction? There do not seem to be other choices for a Superintelligence.
AI and Quantum Computing
When practically implemented, quantum computing will raise the power of AI and speed up its move towards the singularity. As we saw in the previous article, the human mind may also use some form of quantum computation. From a foundational viewpoint, it may be that the Extended Church-Turing (ECT) thesis, which we discuss below, does not hold for quantum computation; the original Church-Turing thesis should be familiar by now, as we discussed it in the Computation note.
The next video makes a few points for us. First, notice the current use of the D-Wave computer by defense contractors for formal software verification. As we will see in the next article, one of the most pressing problems in AI is to start thinking about formal specifications of AI systems and formal verification of their implementations. Those are extraordinarily difficult problems, and the military uses are just a modest beginning.
Secondly, notice that China is moving ahead in a number of quantum technologies, such as quantum communications, including the use of very delicate quantum phenomena like entanglement to secure the communication channel. Using these phenomena without fully understanding them may appear odd, but it is not: if there are engineering solutions for harnessing quantum phenomena that we know exist but do not fully understand, why not use those solutions? We do something very similar with our AI algorithms: we see that they learn without understanding how they do it. For example, quantum cryptography takes advantage of the fact that measuring an object disturbs its quantum state. Since attempting to eavesdrop on a quantum communication channel is a form of measurement, it will disturb the channel in a way that can be detected.
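The measurement-disturbs-the-channel idea can be illustrated with a toy simulation in the style of the BB84 key-distribution protocol. This is a classical sketch under a simplifying assumption, not a physical model: a bit prepared in one basis and measured in the other yields a random result.

```python
import random

# Toy BB84-style sketch: an eavesdropper who must measure (and resend)
# qubits introduces detectable errors on the positions where the sender's
# and receiver's bases happen to match.

def measure(bit, prep_basis, meas_basis):
    """Measuring in the wrong basis yields a random bit."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def error_rate(n, eavesdrop):
    random.seed(42)  # fixed seed so the run is repeatable
    errors = matches = 0
    for _ in range(n):
        bit = random.randint(0, 1)
        basis = random.randint(0, 1)           # sender's random basis
        if eavesdrop:                          # Eve measures, then resends
            eve_basis = random.randint(0, 1)
            bit_on_wire = measure(bit, basis, eve_basis)
            wire_basis = eve_basis
        else:
            bit_on_wire, wire_basis = bit, basis
        recv_basis = random.randint(0, 1)
        received = measure(bit_on_wire, wire_basis, recv_basis)
        if recv_basis == basis:                # bases match: bits should agree
            matches += 1
            if received != bit:
                errors += 1
    return errors / matches

print("error rate, no eavesdropper:  ", error_rate(2000, False))  # 0.0
print("error rate, with eavesdropper:", error_rate(2000, True))   # about 0.25
```

Without an eavesdropper, matched-basis bits always agree; with one, roughly a quarter of them disagree, which is exactly the detectable disturbance the paragraph describes.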
Thirdly, notice that on quantum technology the Chinese government is spending at least 10 times as much as the U.S. government. The National Quantum Initiative Act was passed unanimously in the U.S. Senate and was signed into law by the president on December 21, 2018. It allocated $1.2 billion, small in comparison with the Chinese governmental effort. It is also interesting to notice that Chinese researchers are fully aware of the importance of research into human consciousness, and even of the U.S.-supported Penrose-Hameroff effort to explain consciousness as quantum-level computation in the neuronal microtubules. We will take this research up in an article dedicated to artificial consciousness.
Are there any algorithms designed to take advantage of quantum computation? Yes; Shor's algorithm for integer factorization is the best-known such algorithm, and it has garnered a lot of interest because it can break the standard encryption techniques currently behind all our online activities. Shor's algorithm shows how quantum computation can speed up integer factorization exponentially, and the difficulty of integer factorization on classical computers is the basic assumption in public-key cryptography. Algorithms of the Shor type may lead to a proof of quantum supremacy. This term is popularly misunderstood, although it has a precise technical meaning: quantum supremacy would be achieved when a quantum computer practically solves a specific problem for which there is provably no practical equivalent solution on a classical computer.
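The classical skeleton of Shor's algorithm can be sketched for tiny numbers. The key fact is that factoring N reduces to finding the period r of a^x mod N; the brute-force loop below is exactly the step the quantum part of Shor's algorithm performs efficiently (this is an illustrative sketch, hopeless for cryptographically sized N):

```python
from math import gcd

def find_period(a, n):
    """Smallest r > 0 with a^r = 1 (mod n), by classical brute force.
    The quantum part of Shor's algorithm replaces this loop."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Try to extract a nontrivial factor of n from the period of a mod n."""
    if gcd(a, n) != 1:
        return gcd(a, n)          # lucky guess: a already shares a factor
    r = find_period(a, n)
    if r % 2 == 1:
        return None               # odd period: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial case: retry with another a
    return gcd(y - 1, n)

print(shor_classical(15, 7))      # → 3, a factor of 15
```

For a = 7 and N = 15 the period is r = 4, and gcd(7² − 1, 15) = 3 recovers a factor; the exponential classical cost hides entirely in `find_period`.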
Shor's algorithm is difficult to implement for large numbers, but other problems are being pursued in the race to achieve quantum supremacy. A team of researchers at the University of California, Berkeley, has proposed an algorithm for sampling the output distribution of random quantum circuits (the meaning of these terms is explained in the following quoted article), which is the task behind Google's claim to have achieved quantum supremacy: Quantum Supremacy Using a Programmable Superconducting Processor. The team at Berkeley is led by Umesh Vazirani, who together with his student Ethan Bernstein formulated the Extended Church-Turing (ECT) thesis in 1993; we will discuss this extension below. The team has shown that the task has no practical solution on a classical computer. Google's claim has been challenged, but even if it holds up, there will be plenty of questions to follow from that success. The practical implications of achieving quantum supremacy may be obvious, but there are also important theoretical implications, for example in relation to the ECT.
In the Church-Turing thesis, which is a theoretical pillar of computation and which we discussed in the article on Foundational Questions, the Turing machines are thought of as being based on classical physics. In its original form, the Church-Turing thesis is not changed by quantum computation; but the Extended Church-Turing (ECT) thesis is. Recall that the Church-Turing thesis states the belief that anything that can be computed can be computed by a Turing machine. The ECT tightens this by tying it to the efficiency of the computation: anything that can be computed efficiently can be computed efficiently by a Turing machine. Efficiency means polynomial computational complexity, i.e., the time the computation takes is on the order of a polynomial in the size of its input. Shor's algorithm factorizes an n-digit integer in time polynomial in n, while no known classical algorithm for integer factorization has polynomial complexity. If quantum supremacy has in fact been achieved, we would have experimental evidence that the ECT is false. We have also seen in the Foundational Questions article, if you listened to the long presentation by Sir Roger Penrose, that there are doubts about even the original Church-Turing thesis coming from research into the human mind: it is conceivable that the human mind computes differently.
What about using quantum computation to speed up AI's neural networks? This research is very promising, but it is at its very beginning, as many concepts in neural networks have not yet found a definitive formulation in the context of quantum computation. A conceptual quantum neuron still awaits formulation, and so do the non-linear activation functions. We include some links to current research in the Further Reading page.
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.