AI and Human Intelligence


By Adrian Zidaritz

Original: 02/01/20
Revised: no

The human brain is still the highest form of intelligence we know, and it therefore offers a model and a reference point for AI. In the other direction, AI tools are increasingly used by neuroscientists and even neurologists, for example in functional MRI. But these two forms of intelligence are still very different, and while knowing one may help in understanding the other, we can and undoubtedly will make progress in AI without having to imitate the human brain in all its aspects. One of our main themes is that a class of such AI systems (which we will define later in the article as Artificial Wide Intelligence (AWI) systems) could reach the ultimate Artificial Super Intelligence (ASI) level while completely bypassing all the stages of general human intelligence.






Open the Pod bay doors, HAL!


The possibility of AI zooming past human-level intelligence is of concern, of course, although estimates of when that might happen seem a bit premature. Nothing symbolizes the concern with AI escaping human control better than the old scene from 2001: A Space Odyssey. Today HAL is in our living room, but we call him Alexa. For a good laugh, ask Alexa (or whatever personal assistant you are using: Siri or Cortana or Google) to do the same thing Dave asks HAL, namely: "Alexa, open the Pod bay doors!". Try it!






How special is the human brain?


In our previous background article, we looked at the benefits of simplifying and considering the human brain as a computational machine in general, and more specifically as a computational machine that uses Bayesian inference to form a model of reality. On one hand, if we subscribe to the idea that the brain is a computer, then by the principle of computational equivalence, which we'll examine later, the human brain is subject to the same limits as any other sufficiently complex (technically, irreducible) computation. On the other hand, there are many aspects of this very complex object, arguably the most complex object in the universe, that we do not understand at all. Reducing everything to ordinary computation, in the absence of this understanding, does not seem to be the only path. But before we look at more complex explanations for brain functionality, let's pause and marvel at its extraordinary flexibility and surprising behavior.
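To see the flavor of this Bayesian view in the smallest possible setting, here is a toy sketch in Python; the hypothesis, the probabilities, and the "meow" evidence are all invented for illustration:

```python
# A minimal sketch of Bayesian updating, with made-up numbers: a prior
# belief about a hypothesis ("the animal at the door is a cat") is
# revised as sensory evidence (a meow) arrives.

def bayes_update(prior, likelihood, evidence_prob):
    """Posterior = likelihood * prior / P(evidence)."""
    return likelihood * prior / evidence_prob

p_cat = 0.3                    # prior: cats show up 30% of the time
p_meow_given_cat = 0.9         # likelihood of a meow if it is a cat
p_meow_given_not_cat = 0.05    # a meow-like sound from anything else

# Total probability of hearing a meow (marginalizing over hypotheses).
p_meow = p_meow_given_cat * p_cat + p_meow_given_not_cat * (1 - p_cat)

posterior = bayes_update(p_cat, p_meow_given_cat, p_meow)
print(f"Belief it is a cat after hearing a meow: {posterior:.2f}")  # ~0.89
```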




The interaction between all these elements is already present in the diagram we used for the Free Energy Principle. Imagine now a boxy machine (the size of your dishwasher) that has a camera and an arm; anytime a cat wanders by, the camera will recognize that it is a cat and push out the arm with some tasty cheese on it (that's all: the cat will get the cheese and wander off). This allows us to think of machine intelligence in the same terms as in the diagram of the Free Energy Principle, which we saw in the previous article; the camera houses the sensory states and the arm houses the action states (together they are the Markov blanket of the system). The model, i.e. the knowledge housed inside the box, will continue to learn new cats and improve as it goes.
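Here is a toy sketch, in Python, of that boxy machine, just to attach code to the vocabulary; the class, its names, and the random stub standing in for a real vision model are all ours, not an actual system:

```python
# A toy cat-feeding machine in the Free Energy Principle vocabulary:
# sensory states (camera) + action states (arm) form the Markov blanket,
# and the internal model is updated from outcomes.

import random

class CatCheeseMachine:
    def __init__(self):
        self.model = {"cat_threshold": 0.5}   # internal states: the model/knowledge

    def sense(self, image):
        """Sensory states (the camera): score how cat-like the image is.
        A random stub stands in for a real image classifier."""
        return random.random()

    def act(self, cat_score):
        """Action states (the arm): push out cheese when a cat is detected."""
        if cat_score > self.model["cat_threshold"]:
            print("Extending arm with cheese...")

    def learn(self, cat_score, was_really_a_cat):
        """Update the internal model from outcomes, improving over time."""
        if was_really_a_cat and cat_score <= self.model["cat_threshold"]:
            self.model["cat_threshold"] *= 0.95   # become slightly more permissive

machine = CatCheeseMachine()
score = machine.sense(image="frame_from_camera")   # sensory state
machine.act(score)                                  # action state
machine.learn(score, was_really_a_cat=True)         # model update
```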

There are many theories of the mind, and we will have to make some choices. In particular, we will postpone dealing with questions regarding consciousness. Consciousness is a very interesting but much harder subject, and we leave it for the very end. Because our main interest is in current AI, and Artificial Consciousness is basically unreachable with current technology, we will focus on cognition and perception, these being the areas of human intelligence modeled by AI at the present time.

Given our thesis that cognitive faculties are the most important, we should note that in humans these faculties, housed in the cortex, took millions of years of evolution to develop, and the cortex is now the largest portion of the brain. The video below mentions the neocortex, technically only a part of the cortex, but its dominant part nevertheless; in everyday language, the two terms are used interchangeably. (We will formulate this idea more clearly in the next section: AI systems with only cognitive abilities, no emotions and no consciousness, could achieve levels of intelligence surpassing the human level, even without its more diverse and subtle aspects. So the focus on understanding the structure and functionality of the cortex, as a way to learn and borrow from its design for AI purposes, is well justified.)






Types of AI


There are three types of AI, the definitions of which have achieved wide acceptance. Artificial Narrow Intelligence (ANI) is AI designed for single narrow tasks; it is the AI we have today. For example, recognizing images of people, attaching captions to photos, translating text between English and French, playing games, making purchase recommendations ... these are all ANI systems.

Artificial General Intelligence (AGI) is AI capable of accomplishing any task a human could; it functions at the same level as humans do, but not higher. AGI does not exist today, and estimates of when we will have it vary greatly.

Artificial Super Intelligence (ASI) is AI that has surpassed human intelligence and is able to improve itself without human assistance. It is the form of intelligence that has generated the most hyperbole and concern, since it has the potential to reach levels of intelligence unimaginable to us. For some, whether and when ASI will appear are controversial speculations. Nevertheless, ASI is clearly bound to deeper questions, some with religious overtones; we will treat ASI in one of the later articles.

The fourth type, Artificial Wide Intelligence (AWI), shown in red below, is not standard terminology, but it happens to be the one we are most interested in. We will describe it further down and refer back to this picture.


The 3+1 Types of AI


Practically, the most difficult one to pin down at this moment is AGI. The evolution of a particular system from AGI to ASI may be out of our control, and most likely it will be. Will AGI incorporate emotions (or imagination/creativity/self-awareness)? Many argue that it MUST incorporate emotions and must possess some form of morality, i.e., some rules of "good" and "bad" behavior, a morality that would ensure an AI whose goals are benevolent towards humans. Others opine that imagination/creativity/self-awareness are emergent: a system of high enough complexity will at some point develop these characteristics on its own, as the result of some kind of non-biological evolution.

What about emotions: can emotions be emergent? To even give an AI system the chance of developing emotions, we would have to build some sort of artificial limbic system for it, one that would transform the sensory information through which it interacts with its environment. You can quickly see that these questions about emotions lead inevitably to embodied cognition, because to have emotions AI also needs sensory inputs and outputs to trigger the emotional responses; at least vision, hearing, and speech, though maybe not motor ability. Very quickly this leads us to some form of robot, and robots are not our main concern. But before we leave the subject entirely, let's pause for a bit and see that work is being done in this area as well; that, as we already touched on before, emotional intelligence can be approached the same way as cognitive intelligence; and moreover that robots are being endowed with the capacity to recognize human emotions too.
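To make that last capability a bit more concrete, here is a schematic sketch of emotion recognition by nearest prototype; the facial features and prototype values are entirely invented, and real systems use learned models rather than hand-set numbers:

```python
# A schematic sketch (not a production system) of classifying a human
# emotion from simple, hand-crafted facial features. The feature names
# and prototype values are invented for illustration.

import math

# Prototype feature vectors per emotion: (mouth_curve, eyebrow_raise, eye_open)
PROTOTYPES = {
    "happy":     ( 0.8,  0.2, 0.6),
    "sad":       (-0.6, -0.3, 0.4),
    "surprised": ( 0.1,  0.9, 0.9),
}

def classify_emotion(features):
    """Nearest-prototype classification in feature space."""
    return min(PROTOTYPES, key=lambda label: math.dist(features, PROTOTYPES[label]))

print(classify_emotion((0.7, 0.1, 0.5)))  # -> "happy"
```

The clip below shows how far real systems have come: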




We are interested mainly in AI systems without a full mechanical body; the body will be restricted to visual and auditory organs: an AI that can see and hear, but not touch, smell, or taste. Anything related to biological functions like pain, hunger, or thirst will be outside of our interest. The main reason is that the cognitive-only systems we are interested in already exist, so we can discuss both policies about them and the truthfulness of the data used to train them.

The three types of AI discussed so far are shown in the diagram in blue. But the AI systems we are most interested in on this website do not fit easily into this scheme. We call such systems Artificial Wide Intelligence (AWI). In the diagram above, the AWI systems are shown in red. At the core of an AWI system is a network of people and concepts. We will use the more precise term graph instead of network, and explain these graphs and the AWI systems around them fully in the next article, Graphs of Data. An AWI system can perform a wide array of unrelated tasks on its graph, some of which may not even be suitable for humans. AWI systems reside in very large data centers or in supercomputers, and they have a unique set of challenges. They are stronger than ANI but not even comparable to AGI. A toy example of such a graph is sketched below.
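As a teaser for the next article, here is a minimal sketch of such a graph and one toy task on it; the schema, the names, and the "recommendation" are all invented for illustration:

```python
# A minimal sketch of the kind of graph at the core of an AWI system:
# nodes are people and concepts, edges are typed relationships.

graph = {
    "nodes": {
        "alice": {"type": "person"},
        "bob":   {"type": "person"},
        "jazz":  {"type": "concept"},
    },
    "edges": [
        ("alice", "likes", "jazz"),
        ("bob",   "knows", "alice"),
    ],
}

def neighbors(graph, node, relation=None):
    """All nodes reachable from `node`, optionally filtered by edge type."""
    return [dst for (src, rel, dst) in graph["edges"]
            if src == node and (relation is None or rel == relation)]

# One of many unrelated tasks the same graph can support: a naive
# recommendation, suggesting to Bob what the people he knows like.
for friend in neighbors(graph, "bob", "knows"):
    print(f"bob might like: {neighbors(graph, friend, 'likes')}")
```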

This distinction has some advantages for us. It makes more precise the kind of AI we are interested in and allows us to focus on concerns related to AWI. Our thesis is that AI does not have to evolve in step with human intelligence, and that superhuman intelligence (ASI) may possibly be reached accidentally, completely bypassing the AGI stage. Until we cover AWI in the next article, here is a short clip showing the acute awareness of AI's growing importance:






Cognition versus Emotions and Consciousness


Full human intelligence is complicated and consists of many abilities, which a machine does not need to possess in order to be more effective than humans at many cognitive tasks. As we mentioned earlier, we only consider objective cognitive faculties, and argue that those are enough to lead to systems that surpass human intelligence not across its entire spectrum, but in its cognitive abilities. For example, when we measure the IQ of a person, we measure cognitive intelligence, with an IQ of around 160 meaning a vastly superior intelligence, and 200 being the agreed theoretical upper limit for IQ. So, a supercognitive AI system would be a system with an IQ larger than 200. We cut off the following video right before it goes into robotics, but you may want to listen to the entire presentation by following it on YouTube. Please make a note of Marvin Minsky's evaluation of emotional intelligence as being less complicated than cognitive intelligence (which is NOT the popular position, but which he articulates very well), because we will return to this idea later.
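(As an aside on those IQ numbers: assuming the usual convention that IQ scores follow a normal distribution with mean 100 and standard deviation 15, a few lines of Python show just how extreme the upper end is.)

```python
# How rare the IQ scores mentioned above are, under the standard
# convention of a normal distribution with mean 100 and sd 15.

from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

for score in (160, 200):
    tail = 1 - iq.cdf(score)        # fraction of people at or above this score
    print(f"IQ {score}: about 1 person in {1 / tail:,.0f}")

# IQ 160 is 4 standard deviations above the mean (about 1 in 31,000);
# IQ 200 is 6.7 standard deviations out (roughly 1 in tens of billions),
# which is why it is treated as a practical ceiling for humans.
```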




Although emotions may be less complicated than cognition, consciousness is not; in fact, it appears to be the most complicated aspect of the human brain. It does not appear to be a form of standard computational intelligence (of the sort our computers use); it may be a form of computational intelligence, but one that employs different methods of computation. Anil Seth ties together some of the concepts we have mentioned, namely the brain as an engine of active inference.






Is the human brain a quantum computer?


We do not yet know the full nature of the computations taking place in the human brain. These computations certainly seem much more complicated than the computations taking place in our standard computers, which are based on binary-state transistors. There is one other form of computation being intensely researched today, namely quantum computation. It is possible that the human brain uses such a computational model; we simply do not know the answer. But let's assume that it does. We do not yet have completely functioning quantum computers, and we may be many years away from that. If you look at the lab rooms where quantum computing takes place, you will see that we are still at the very beginning: lots of equipment and pipes, because quantum computation in the lab needs very low temperatures.
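For contrast with binary transistors, here is a minimal simulation of a single qubit, the unit of quantum computation; this is just linear algebra on a laptop, not a quantum device:

```python
# A single simulated qubit: it holds a superposition of 0 and 1 until
# measured; measurement collapses it probabilistically.

import numpy as np

# Start in the definite state |0>.
state = np.array([1.0, 0.0])

# A Hadamard gate puts the qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = H @ state

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(state) ** 2
print(probs)                                   # [0.5 0.5]

# Ten simulated measurements: each collapses the qubit to 0 or 1 at random.
print(np.random.choice([0, 1], size=10, p=probs))
```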

Could it be that the human brain, warm and wet, has found a way to bypass that temperature requirement? Maybe. In any case, it could very well be that even if we develop quantum computers, they will still not rival the power and sophistication of the human brain for a very long time. Does this mitigate the risk from AI? No; it is quite possible that powerful AI does not need quantum computing, making use instead of its quantitative data edge over humans. And it could also be that, in its evolution, it will figure out how to build and use quantum computing devices before humans do. Roger Penrose needs no introduction; he is one of the most prominent scientists of our time, and he proposes that the human brain is a quantum computer. Here is a reference to his work in a more familiar setting, discussed by Dr. Stuart Hameroff, an anesthesiologist who collaborates with Penrose in that direction of research.






Neuroscience and AI come together


We have already seen that the human brain offers an example of intelligence and, more than that, is the primary reference for AI researchers. DeepMind has turned that tapping of human-brain research into a corporate mission. DeepMind neuroscientists work side by side with AI researchers, and the results of that cooperation speak for themselves. DeepMind research is among the most cited research in AI, and its development of AlphaGo, the program that beat the reigning Go champion, is the most spectacular success of AI so far. The technique used in that program, known as Deep Reinforcement Learning, is the main focus of our Main AI Concepts article.
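To give a flavor of reinforcement learning in miniature, here is a sketch of tabular Q-learning, the simple ancestor of the deep variant used in AlphaGo (which replaces the table with deep neural networks); the five-cell toy environment is invented for illustration:

```python
# Tabular Q-learning on a toy task: an agent on a line of 5 cells
# earns a reward by walking to the rightmost cell.

import random

N_STATES = 5                      # cells 0..4 on a line; cell 4 is the goal
ACTIONS = [-1, +1]                # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(300):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Core update: move Q toward reward plus discounted best future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy: the best action in each non-goal cell (all +1,
# i.e. "always move right", after enough episodes).
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```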




Going in the other direction, is AI helpful for neuroscientists? Yes, for example in functional MRI (fMRI), the use of MRI in a new way: correlating human activity with the parts of the brain where that activity manifests itself the most. By putting together various fMRI studies, a pretty revealing sort of brain fingerprint can be built, and this fingerprint can predict some surprising behavior. A pioneer of fMRI analysis is Karl Friston, whose work we have already mentioned when we introduced active inference, in the article How We Form Political Opinions. AI models can be built by analyzing the fMRIs of patients to predict what the patient is thinking or dreaming at the time of the scan. This is mind reading at its most practical, and the results of this mind reading can be summed up in a matrix of numbers which is your brain fingerprint, and which identifies you uniquely and in much more revealing ways than your standard fingerprint. The numbers in the matrix correlate with likes, dislikes, and behavior. Medical diagnoses can be made based on these numbers. And if you sleep in the MRI scanner, your dreams can be decoded, again using these AI techniques. Would we be able someday to deduce from your brain fingerprint that you are a Democrat or a Republican or an Independent? You may not want to waste time arguing about presidential politics with the MRI technician before the test starts; he may soon get to know your positions anyway!
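To make the "matrix of numbers" concrete, here is a sketch of one common form it takes, a functional-connectivity matrix of pairwise correlations between regional activity time series; random noise stands in for real fMRI signals:

```python
# A sketch of a brain fingerprint as a functional-connectivity matrix:
# the pairwise correlations of activity time series between regions.

import numpy as np

n_regions, n_timepoints = 4, 200
# Each row is the activity time series of one brain region
# (random noise here, standing in for real fMRI signals).
signals = np.random.randn(n_regions, n_timepoints)

# The fingerprint: an n_regions x n_regions correlation matrix.
fingerprint = np.corrcoef(signals)
print(fingerprint.round(2))

# Such matrices are what downstream AI models consume: flattened into a
# feature vector, they can be fed to a classifier for diagnosis or,
# more worryingly, for identification.
```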




fMRI requires the patient to spend substantial time in the scanner. It is not the only way to produce images of the brain that AI can then be used to enhance. Computed Tomography is used very successfully too, in particular SPECT (Single Photon Emission Computed Tomography) scans. SPECT scans measure the activity in various parts of the brain through blood flow, which is strongly correlated with that activity. Deep learning techniques are being used to analyze SPECT scans, eliminate noise, enhance quality, and speed up image reconstruction. These enhanced images give doctors (in this case psychiatrists) a much better picture of a patient's situation than the standard psychiatric interview.
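As a sketch of the denoising idea, here is a tiny autoencoder in PyTorch trained to map noisy images back to clean ones; the random tensors stand in for real SPECT scans, and actual pipelines are far more elaborate:

```python
# A minimal denoising sketch: a tiny convolutional autoencoder learns
# to map a noisy image back to its clean version.

import torch
import torch.nn as nn

model = nn.Sequential(                         # tiny convolutional autoencoder
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(16, 1, 32, 32)              # stand-ins for clean scans
noisy = clean + 0.2 * torch.randn_like(clean)  # add acquisition-like noise

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)        # learn to reconstruct clean images
    loss.backward()
    opt.step()

denoised = model(noisy)                        # apply to new noisy scans
```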




The matrix of numbers above (the brain biometric, if you wish) is not the only biometric which stores an almost complete set of characteristics precisely identifying an individual, and which could be used for medical diagnosis, for psychological evaluation, or for targeting (marketing or malicious). Another biometric can also be extracted from the human voice. We will assume, at the various points where we need it, that all these biometrics, as well as the entire AI deduction following from them, are part of the digital twin in the graph of interest to us.
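As a hint of how a voice biometric might begin, here is a crude spectral signature computed from a synthetic waveform; real systems use much richer features (MFCCs, learned embeddings):

```python
# Turning a snippet of "audio" into a compact numeric signature: a
# crude spectral profile over a few coarse frequency bands.

import numpy as np

sample_rate = 16000
t = np.arange(sample_rate) / sample_rate   # one second of synthetic audio
waveform = np.sin(2 * np.pi * 140 * t)     # a 140 Hz hum standing in for a voice

spectrum = np.abs(np.fft.rfft(waveform))
# Average spectral energy in 8 coarse frequency bands: the "signature".
bands = np.array_split(spectrum, 8)
signature = np.array([band.mean() for band in bands])
print(signature.round(1))
```

Here is how far the voice biometric has come: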






Hybrid Intelligence


A conceptual distinction has emerged recently between two types of thinking: one used to solve problems with logic and processing power, and another that involves creativity, some way to attack problems using analogies, context, generalizations, alternatives, etc. The two types are now formally known as convergent thinking and divergent thinking, and they are mostly used to distinguish between how computers (i.e., AI) think and how humans think. The natural question that arises immediately from this distinction is how to find ways for the two types to complement each other. But before we follow up on this hybrid (augmented) intelligence, we pause and appreciate this concept of divergent thinking (creativity) and the fact, which will appear a few times later, that creativity can be present in machines too.




Here we will look at Neuralink, the project to link the human cortex with the digital world of AI. The point that Elon Musk makes is that we are already linked to our digital extension (phone/computer/laptop, etc.), but the input/output of this link is very slow, extremely slow especially when we only use our thumbs to type into our phones. So he envisages (and finances!) an effort to boost the bandwidth between our cortex and this AI extension of ours, called the Neuralink. But the conversation touches many other aspects of interest to us, and it is probably better to listen to it in its entirety, because it has a beautiful flow to it. And the sincerity of both Rogan and Musk is very charming and believable. A most thought-provoking and condensed statement is that "we are the biological boot-loader for AI". (A boot-loader is a small piece of software on your computer/laptop/phone that kicks in after you turn your machine on and loads the operating system from disk into memory.)
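(A back-of-envelope calculation makes the bandwidth point vivid; the 40 words per minute and the roughly 10 bits of information per English word are rough but commonly cited figures.)

```python
# Why Musk calls the thumb interface slow: a rough estimate of its
# bandwidth, assuming ~40 words per minute of typing and ~10 bits of
# information per English word (both rough, commonly cited figures).

words_per_minute = 40
bits_per_word = 10

thumb_bandwidth = words_per_minute * bits_per_word / 60   # bits per second
print(f"Thumb typing: ~{thumb_bandwidth:.0f} bits/s")     # ~7 bits/s

# Even a slow Bluetooth link moves on the order of 1,000,000 bits/s;
# that gap is what a higher-bandwidth brain interface aims to close.
```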




The human brain is the most complex structure in the entire universe; there may be other brains out there, but we have not yet observed them. There is a significant worldwide push, backed by considerable amounts of money, to make progress in understanding this complex structure. Billions of dollars are funding Big Science projects focused on the human brain. In 2013, the EU launched the Human Brain Project (HBP). In the same year, the Obama Administration launched the BRAIN Initiative. In 2014, Japan began work on Brain/MINDS (Brain Mapping by Integrated Neurotechnologies for Disease Studies). In March of 2016, China launched its China Brain Project. There have been successes and failures, and that mix will undoubtedly continue.

If you followed our main argument about the current success of AI being due to learning by sheer brute force applied to massive amounts of data, you probably wondered how it is possible that children seem to learn so easily from much smaller sets of data. This question is a big one, and it seems to point to capabilities of the human brain that we do not grasp yet and therefore cannot yet incorporate into AI; that does not mean, however, that efforts are not being made in that direction of research. Secondly, efforts are being made to develop computing architectures able to exhibit elements of divergent thinking, by incorporating uncertainty and ambiguity in their design. For example, a new computing architecture, called Neuromorphic Computing, implements a mixture of analog and digital circuitry to mimic the functionality of biological neural networks. These new artificial neural networks are called Spiking Neural Networks, and they try to capture a notion of time, which the biological ones have; we had a glimpse of them in the Neural Network 3D Simulation.
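To make that notion of time concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the basic unit behind spiking networks; the parameters are invented for illustration:

```python
# A leaky integrate-and-fire neuron. Unlike the units in standard deep
# learning, its state evolves in time: input accumulates, leaks away,
# and a spike fires when a threshold is crossed.

leak, threshold = 0.9, 1.0        # decay per time step, firing threshold
potential = 0.0
inputs = [0.3, 0.4, 0.5, 0.0, 0.2, 0.9, 0.1]   # arbitrary input current per step

for step, current in enumerate(inputs):
    potential = leak * potential + current      # integrate with leak
    if potential >= threshold:
        print(f"t={step}: spike!")
        potential = 0.0                          # reset after firing
    else:
        print(f"t={step}: potential={potential:.2f}")
```

Here is a presentation by Brian Krzanich, former Intel CEO, of Intel's neuromorphic research chip (it also includes a presentation of Intel's 49-qubit quantum processor):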






... Still, could there be something more to it?


Who are we? We find that we live on an insignificant planet of a humdrum star lost in a galaxy tucked away in some forgotten corner of a universe in which there are far more galaxies than people.
- Carl Sagan