( It was windy and dusty when she was born, about four and a half billion years ago; a difficult but otherwise unremarkable birth, with no signs of the extraordinary destiny awaiting her; no foals were born to mules at that time. Despite the difficulty of her birth, she grew up to be beautiful and kind. In time, she nourished life, through the most astonishing process there ever was. It was due to this unlikely transformation that the offspring showed a superior intelligence, which ordinary things did not appear to possess. But the offspring had a birthmark: its time with Mother was limited. So it grew up with much suffering, and at some point of unbearable pain, it began to question and slowly understand the organizing principles of the world around it. With unrestrained curiosity it then proceeded to mold a new form of intelligence from inanimate matter, the consequences of which are still a mystery.
During periods of light, Mother would dream of using that new form of intelligence to remove the birthmark and allow for the immortality of her offspring. But at darkness, her fears would take over, the fears that this new intelligence would find life uninteresting and dispensable; this intelligence could simulate life with ordinary matter and have fun with it; the simulation would not be as fussy or as jealous as the real thing. )
Artificial Intelligence (AI) is perhaps the most important technology humans have ever invented. Many think it may also be the last one. As AI becomes more widespread and our lives grow increasingly dependent on it, the questions about its future direction are gaining power and urgency, and so too are the dreams and the fears. On this website we attempt to provide a comprehensive overview of these questions about AI. Answering them will require individual awareness and participation, as many of these questions will undoubtedly become political and come to a vote.
To ground the discussion of these questions and possible answers in more practical terms, we will focus on a class of existing AI systems which already exhibit extraordinary power and whose functions should be of concern today, not at some distant point in the future. Sometimes we delve into thorny issues of politics, religion, and morality as they relate to the future of AI. Although we do not seek controversy, these issues are naturally controversial; even so, we will stay true to our goal and favor being comprehensive over being cautious.
One direction in AI has been to mimic various aspects of human intelligence. Because our knowledge about human intelligence is limited, progress in formalizing any aspect of it is very consequential and is to be treasured; we will use every opportunity to highlight that progress. But most of the current progress in AI has been due less to such formalizations and more to sheer brute force in computing. In other words, the success of this current direction of AI is mostly a quantitative one, based on the availability of massive data sets and massive computing power to process them.
This success was also spurred by the complementary invention of deep learning algorithms, which are explicitly designed to exploit this quantitative aspect of data. But we do not yet have a formalization (i.e., a mathematical theory) of the "intelligence" of these algorithms; we do not yet understand how they learn, we just see that, by brute force, they do learn from their data. Should this assessment temper somewhat the hyperboles of the first paragraph above? In some ways yes, in other ways no, and we will hopefully shed some light on this yes-no waffle in the following articles.
There is another important aspect of this success of the quantitative direction of AI, and it has to do with quantum computing. When quantum computing becomes a viable computing platform and the brute-force algorithms of AI are adapted to run on it, the marriage between AI and quantum computing will greatly amplify that force, with enormous consequences. Among those consequences would be the breakdown of current cryptography, which underpins the entire Internet. China has recognized the importance of this union and has openly developed an ambitious plan in which these two technologies play the central role.
Unfortunately, in the U.S. there is more caution and a calculation that the high-tech giants can achieve that union on their own. This is a risky bet, and there is so far no large-scale coordinated effort by the U.S. government, something like a Manhattan Project, to compete in that arena. Much more utopian, but nevertheless a worthier goal for both nations, would be the establishment of an international center where such development would proceed through cooperation, ensuring that no single nation achieves that disruptive union in isolation. The fusion of AI and quantum computing will certainly bring even more substance to the two hyperboles used in the opening paragraph above. We will elaborate on this seemingly utopian international center in one of our topic articles.
Many of the questions around AI, whether on the artificial or on the human side of intelligence, turn out after refinement to be of a foundational nature, tied to Mathematics, Physics, and Biology. The larger significance of some of these questions invites forays into Philosophy and Religion. There is not much certainty about these foundational aspects of AI, and this uncertainty adds even more weight to these questions.
Two things are certain, though: first, that the politics of AI will soon take center stage, and second, that our attitude towards truth in the data we produce will matter. The careless (and sometimes careful!) production and consumption of false data are an unwelcome development, because AI systems will learn from, amplify, and make consequential (potentially disastrous) decisions based on this data. These two things, politics and truth, are also more practical, and our articles are an invitation to the reader to make up his/her mind about these more practical aspects of AI, because these issues will come before voters.
The controversies and the surprising rate at which AI has made progress in the past few years have triggered strong reactions in people. Among these reactions, two are most notable, and they go in opposite directions. The first is that AI looks so powerful that it could zoom past human intelligence, and when that happens it may be too late to ask questions; some estimate that general AI, AI fully on par with human intelligence, could appear as early as 2045.
The second reaction is found among people less impressed by the current level of intelligence in AI, who see the huge positive potential of AI and believe that we should strive to see how far we can take it without much concern about its potential negatives. Others have this second reaction for a different reason, believing that human intelligence has a large qualitative advantage over AI, and that this advantage will be maintained well into the distant future. Both reactions are fully justified; we'll see why in later articles.
Is it possible that we are over-hyping either the negative or the positive potential outcomes of AI? Yes, we do have examples from the past where expectations about various technological advances did not live up to the timetable we expected them to follow. But this time feels different: this time we are producing something intimately related to us, which will change practically everything around us, and maybe even within us. And we are producing it without fully understanding it. Even with all these caveats, it is likely that developing AI will be a bumpy road and that there are more successes and more AI winters ahead; but it is also very likely that AI springs and AI summers will follow, and that AI will march on towards stages where our dreams and our fears will be fortified by unforeseen developments. The Gartner Hype Cycle for AI, the shape of which you see on the right/above and which describes the five phases of any technology's life cycle, puts us currently on the downslope from the peak of expectations.
The particular class of AI systems we will focus on are based on very large datasets. The dependency of AI on data is so deep that we will not even distinguish between large data sets and the AI systems that process them; if an AI system that exploits a large data set does not exist now, one will exist soon.
Most of these datasets have complex structures, with many relationships between individual data items. They are best organized and understood as graphs; we will fully explain this term in a dedicated article, but for now it suffices to think of a graph as a collection of nodes with relationships (arrows) between them. They are either social graphs (national-level and world-level graphs where the nodes represent people) or knowledge graphs, where the nodes are concepts representing accumulated human knowledge. Facebook is such a social graph, the main relationship between the nodes in the Facebook graph being friendship. LinkedIn is also a social graph, the main relationship being a professional one. The Chinese Social Credit System is another example of a social graph. Google's Knowledge Graph, Amazon's Product Graph, and IBM's Watson are examples of knowledge graphs.
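For readers who like to see things concretely, here is a minimal sketch of a graph in code. The names and relationships are invented for illustration; the point is only that a graph is nothing more than nodes plus labeled arrows, and that social links and knowledge links fit the same structure.

```python
# A minimal sketch of a graph: nodes connected by labeled arrows.
# All names and relationships below are invented for illustration.

# Each arrow is a triple: (source node, relationship, target node).
edges = [
    ("Alice", "friend_of", "Bob"),               # social-graph style
    ("Bob", "colleague_of", "Carol"),
    ("Alice", "interested_in", "AI"),            # person-to-concept arrow
    ("AI", "subfield_of", "Computer Science"),   # knowledge-graph style
]

def neighbors(node, edges):
    """Return all (relationship, target) arrows leaving a node."""
    return [(rel, dst) for src, rel, dst in edges if src == node]

print(neighbors("Alice", edges))
# [('friend_of', 'Bob'), ('interested_in', 'AI')]
```

Everything we later say about "arrows pointing to/from" a node comes down to lookups like this one, just performed over billions of edges.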
Google's Knowledge Graph is perhaps the one graph that lets you glimpse the ambition of accumulating all human knowledge into one graph; we will assume that all human knowledge will be visible and accessible on the World Wide Web (WWW) at some point. You are already seeing the Google Knowledge Graph in use when you search with Google: the knowledge panel appears on the right-hand side of your results page, and it contains structured information, not just links. Google extracts that kind of semantic information from many sources, including Wikipedia.
A more ambitious project is for web pages to be produced from the beginning, on all websites, in a language in which the semantic relationships between concepts are explicitly written. This would be a semantic WWW, rather than the current syntactic, link-based WWW, and progress is being made. We will idealize this (currently developing) semantic WWW and its use in AI into a concept which we will carry throughout and call the Universal Knowledge Graph. Think of it as a future iteration of Google's Knowledge Graph, based on a fully semantic WWW and encompassing all human knowledge.
It is not hard to imagine the merging of all the social graphs (in which the main nodes are the powerful digital twins of the citizens of one nation) into one graph, and the subsequent merging of that graph with the Universal Knowledge Graph. Your digital twin (the online information about you stored in these graphs, merged into one node) is, or soon will be, a far more exact representation of yourself, knowing far more about you than you know about yourself. We can sum this up as "You know a person by the company he/she keeps", in other words by all the arrows in these graphs pointing to/from other people you know or concepts of interest to you: friends, relatives, medical records, DNA results, topics of interest, places you travel to, etc.
It is not clear that such merging will be allowed at first, or at all, as it is obvious that national security and privacy issues will come into play in such decisions. It may well be that such a merging will only take place on demand, for certain nodes only. Governments could (and probably will) request and obtain information about certain citizens from the graphs under the control of high-tech companies. In some instances, such requests will undoubtedly be justified, but the issues around these requests will generate heated political debate. What kind of judicial process would be followed for such requests?
The (currently unrealized) merging of graphs has some interesting technical problems to solve, revolving mostly around identity resolution (how can we be certain that information originating in different places about an individual truly refers to that individual?), but the thorniest aspects of this merging will be political, not technical. (For all we know, such merging may already be happening in China, with both positive and negative consequences.) We'll look at these graphs in a dedicated background article.
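To make the identity-resolution problem concrete, here is a toy sketch, with field names, records, and the matching threshold all invented for illustration. Real systems use far more sophisticated probabilistic matching, but the core question is the same: given two records from different graphs, do they describe the same person?

```python
# Toy identity resolution: do two records from different graphs
# refer to the same individual? (Fields, records, and the 0.8
# threshold are invented for illustration.)

def similarity(record_a, record_b):
    """Fraction of fields present in both records that have equal values."""
    shared = set(record_a) & set(record_b)
    if not shared:
        return 0.0
    matches = sum(1 for field in shared if record_a[field] == record_b[field])
    return matches / len(shared)

# A node from a hypothetical social graph and one from a medical graph:
social = {"name": "J. Smith", "city": "Boston", "employer": "Acme"}
medical = {"name": "J. Smith", "city": "Boston", "birth_year": 1980}

score = similarity(social, medical)
same_person = score >= 0.8  # merge the two nodes only above a confidence threshold
print(score, same_person)   # 1.0 True
```

Even this toy version shows why the problem is hard: two different "J. Smith"s in Boston would merge incorrectly, while a typo in one field could keep a true match apart.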
Recent developments in AI have been so successful that it is now conceivable that AI systems possessing only objective cognitive faculties (in other words, without any of the "deeper" properties of human intelligence) can solve some of the most difficult problems we face today, leading for example to the eradication of many diseases. It is also conceivable that they could be used in a negative direction, either by governments or by large private enterprises, to control our lives to a degree which may be unacceptable.
And therefore false dilemmas about AI abound, starting with the basic "Is AI good or bad for mankind?". Some question whether Google and Facebook are good or bad for us. These are false dilemmas in the same vein as "Is nuclear energy good or bad for mankind?". We know that nuclear weapons could end us all, but at the same time nuclear energy has been very beneficial to mankind. In the end, we'll have to accept that technological advances will happen anyway and that all these false dilemmas will eventually have to be turned into constructive questions, namely: what policies do we need to implement so that these advances benefit us?
History teaches us that whenever humans have developed a powerful technology, that technology has invariably been weaponized. The Russian President has already stated that World War III will be fought with AI weapons, and there is no reason to doubt him. Treaties banning autonomous AI weapons may be signed, but these autonomous weapons may not be the most dangerous ones, and it would be naive anyway to assume that research and development would not continue in secrecy.
AI weapons will be far more dangerous than nuclear, biological, or chemical weapons. We will see why later, but for now it suffices to ponder a small scenario: the most powerful AI weapons do not need to gain physical access to the nuclear button, although they would certainly try to do so; instead, they can gain access to the minds of the people who control that button, by manipulating the graphs whose nodes are the digital twins of those people.
The 10 Main Points (the Manifesto, if you wish)
( Development of AI draws from scientific and engineering advances, and it would be impossible to justify some statements about AI without mentioning some technicalities. Nor would it be advisable, because the subject has extraordinary beauty, and a glimpse of this beauty might spur the reader to pursue the subject further, and maybe even contemplate adding AI techniques to a current career; AI will force out many current jobs, intrude on many others, and handsomely reward those who participate in its success.
At the same time, the aim of the website is to make its material easily accessible; it uses a popular tone, the kind you would expect from a newspaper article. The technical parts, especially charts of data and mathematical notes, do not scale well on mobile devices unless you zoom in; if you want to see these details without the distraction of zooming, it would be advisable to view them on a desktop device.
You may skip certain paragraphs if they seem too technical and rejoin the main story line downstream; the main ideas should still be accessible, even without the complete technical justification. When doubt arises while reading the articles, and doubt will undoubtedly arise, it may help to return here and see if that doubt can be eased by tying it to one of these 10 main points. At the same time, these 10 points make no pretense of being an academic attempt to treat the subject precisely and deeply; they are nothing more than an attempt to integrate existing information into a comprehensive and digestible whole. That is why, to ease the digestion, you will find a sprinkling of music videos and some attempts at humor and poetry; these 10 points are more like a color palette for the impressionistic brush strokes used to paint the articles. )
Since we are concerned with enormous changes in the very fabric of our society, small, cautious ideas will not be sufficient. Bold, provocative, seemingly utopian ideas will be needed. This is especially true in the political realm. Since these ideas will be unusual, we need to stay informed about trends in AI so we can be more effective in the voting booth, as many of these ideas will make their way into policy proposals. It is likely that policy initiatives concerning AI will become the dominant part of the ballot pamphlet.
We will continuously emphasize the primary importance of data, both its quantity and its quality (truthfulness). Current AI is fundamentally dependent on data. Therefore, truth in data is essential if these AI systems are to draw sensible conclusions; data-vetting systems will have to be constantly developed and deployed before that data is given to AI systems. Some of the most important data is about current events; this data is debated and interpreted by the media. We absorb that data through our senses, and therefore we need to trust our senses; what we see and what we hear should be what is actually happening. Truth, however, will often be stretched, sometimes with the use of AI systems, and discerning truth, in the final analysis, rests mostly with our independent critical thinking.
AI is implemented mostly in software, a special kind of software. The AI systems we will be mostly concerned with are large systems learning from very large data sets. That data may be distributed over many thousands of computers, and therefore software engineering issues in Big Data require just as much attention and effort as the more theoretical knowledge of AI algorithms. Because data is the primary driver of AI relevance, Big Data technologies are inextricably linked to AI. In large AI projects, time spent on Big Data issues takes, on average, more than 80% of the total project time. We simply do not have enough professionals in the U.S. with a combination of Big Data skills and theoretical knowledge of AI algorithms. Therefore, immigration policies are of top concern, and we will mention them repeatedly.
We are not concerned here with robots, but with large, national-level AI systems based in the most powerful computing centers, especially national graphs, which we describe in a background article. We are focused on cognitive intelligence, not on general intelligence, and especially not on emotions or consciousness (we will look at the possibility of building emotions and artificial consciousness in a special article at the very end; we will also look at the possibility that emotions may be less complicated than cognition and that consciousness may be an emergent phenomenon, requiring no special effort to obtain either). This focus keeps us on something real and removes some degree of speculation from our discussions. Of course, the AI systems we are concerned with will understand human emotions and will target those emotions, but they will not have to exhibit emotions or consciousness themselves.
The rise of AI raises issues such as privacy, job displacement, and a mutually beneficial structuring of the relationship between the AI industry and government. It is a false impression that AI systems will disrupt labor markets in a way similar to previous technological revolutions, replacing mainly blue-collar jobs. Exactly the opposite is true: they will replace mostly jobs with a high intellectual content, jobs based on strategizing, planning, and diagnosing disease. We will have AI managers, AI doctors, AI writers, AI composers, AI actors, AI lawyers, and so on. It is predicted that about 60% of current jobs will be lost in the next 10 years.
It may seem that the appropriate educational response would be to focus on STEM (Science, Technology, Engineering, and Math) subjects, but we will repeatedly make the case that the humanities will be needed even more, and with them, the strengthening of human character. Even so, will these employment challenges posed by AI come about without painful consequences and social unrest? What do we do if we do not have to work? Is a life of leisure meaningful? Does democracy change if intelligent decisions are increasingly made by machines?
The most important international political fact of our times is the emergence of a new bipolar structure, with U.S. and China being the two poles. A war between the two is unthinkable, and means of peaceful, fair, and respectful economic competition need to be established and continuously nourished. This bipolarity is particularly true in AI deployment. It is estimated that by 2025 the global AI market will reach $170 billion, a 42X increase from the 2016 level, and this increase will be most noticeable in the U.S. and China.
The continued separation between these two poles and the rest of the world is of concern. In this bipolar world, the numbers are not trending favorably for the U.S., because China's population generates far more data than the U.S. population, and data is what AI is about; one way these numbers may reach and maintain equilibrium in the future is through an enlargement of the Western Alliance. It may be necessary for the U.S. to resume its work towards achieving peace in the Middle East and include both Israel and its current adversaries in the Western Alliance; the current skirmish can be viewed as just a minor historical blip. It is probable (even desirable) that Russia, India, and Japan will pursue independent AI agendas and provide some checks on the two poles.
There is a need for a different kind of political leadership in the U.S. By this we do not mean a specifically Democratic or Republican leadership; we mean individuals with enough technical understanding and intellectual curiosity about AI, and with a more sober and less ideological view of American exceptionalism. It may well be that we need to see more politicians rise not from among lawyers, but from among engineers, scientists, and business leaders.
Since we are heading into a bipolar U.S.-China world, and President Xi Jinping will likely lead China for a very long time, we need stronger counterparts to Xi. Xi has an engineering background and has pushed awareness of AI into the Politburo, which is already composed mostly of people with engineering backgrounds. Apart from being currently the most competent leader among the major powers, Xi Jinping has "The Master Algorithm" and "Augmented: Life in the Smart Lane" on his bookshelf and has shown a sustained personal interest in, and understanding of, AI. (It is conceivable that in the future most of the executive functions of the President in both countries will be performed by AI counsels, and we should think about the implications of that possibility too.)
Research in AI needs to be kept open, and most importantly, the best AI software needs to be open-sourced and available to all. At the same time, a reciprocal and more substantial participation by Chinese researchers in the development of this open source would be helpful. The higher quality present in both open research and open software will be crucial in launching AI projects with verifiably benevolent specifications and behavior.
Current legislation proposed in the U.S. to prevent China's access to our research is not helpful; the U.S. has greatly benefited from open research, and moreover, security through obscurity has never worked. At the same time, stronger agreements are needed to guard the capital invested in AI implementations by private enterprises. Economic espionage, given the enormous challenges posed by AI, needs to be contained and eventually terminated, and an ethical guarding of the intellectual property in these private AI projects needs to be assured for fair competition to take place. In a world of AI, trust will be a most important concept.
It may surprise you to learn that the theoretical foundations of AI are shaky. The most successful AI algorithms, centered around deep neural networks, do not yet have a mathematical theory explaining how they learn, how they acquire knowledge. We just see that they do and we marvel at the effectiveness of a tool we do not completely understand. AI systems lack transparency, even to their creators. Many times we discover that in the learning process, nodes in the neural network begin to correspond to concepts and features we have not explicitly asked for.
Most of the time, AI systems will optimize the features we ask them to while pushing other, hidden features to extreme values. Those extreme values could be completely unacceptable. Just consider that an AI system tasked with ensuring the continued availability of clean fresh water for the entire Earth population until the end of the century may decide to do so by reducing the Earth's population, although we are obviously not going to specify that. And the optimum value of that population variable is 0.
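A toy numeric sketch makes this failure mode concrete (the objective, the numbers, and the variable ranges are all invented for illustration): if we ask an optimizer to maximize clean water per person and forget to constrain population, the optimizer happily drives population to the bottom of its allowed range.

```python
# Toy sketch of unconstrained optimization gone wrong.
# Objective: maximize clean water available per person.
# 'population' is a hidden variable we never thought to constrain.
# (All numbers are invented for illustration.)

def water_per_person(total_water, population):
    return total_water / population

total_water = 1_000_000.0  # fixed supply, arbitrary units

# Search over candidate population values; nothing stops the optimizer
# from considering tiny populations, so it picks the smallest one.
candidates = range(1, 8_000_000_001, 1_000_000)
best = max(candidates, key=lambda pop: water_per_person(total_water, pop))
print(best)  # 1 -- the smallest population searched wins
```

Here the search floor is 1 only because dividing by zero is impossible; the true unconstrained optimum of the objective sits at the boundary, population zero, which is exactly the pathological solution described above.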
But the biggest issue of all is that AI systems are software programs, and as such, they invariably have bugs. Finding those bugs with Quality Assurance (QA) teams will quickly become inadequate: QA does not find all the bugs. Bug-free software can only be guaranteed with formal methods, which, as of now, require a different kind of expertise and are very expensive. We currently use formal correctness proofs only for sensitive software, such as space exploration and medical devices, in other words in places where any failure has unacceptable consequences. For example, we have to prove mathematically that a radiation medical device implements its specification correctly and always administers the correct radiation dosages.
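A minimal sketch of what "implementation respects specification" means, using the radiation-device example (function names, the dose limit, and the clamping rule are all invented for illustration). A formal proof would establish the property for all possible inputs mathematically; ordinary testing, shown here, can only spot-check a few.

```python
# Minimal sketch: a specification written as a checkable predicate.
# A formal method proves spec_holds for ALL inputs; testing, as below,
# only samples a few. (Names and numbers invented for illustration.)

MAX_DOSE = 2.0  # specification: never exceed this dose (arbitrary units)

def spec_holds(requested, administered):
    """The dose actually given must never exceed the safe maximum,
    nor exceed what was requested."""
    return administered <= MAX_DOSE and administered <= requested

def administer(requested):
    """Implementation: clamp the requested dose to the safe maximum."""
    return min(requested, MAX_DOSE)

# Spot checks, including a request far above the safe limit:
for requested in [0.5, 2.0, 9.9]:
    assert spec_holds(requested, administer(requested))
print("spot checks passed")
```

The gap the article describes is precisely the gap between this loop over three inputs and a machine-checked proof covering every input the device could ever receive.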
But this raises the obvious question: how are we to correctly specify the functionality of an AI system so that it does not extinguish us all, and be 100% sure that the software implementation of that AI system respects that specification? It becomes clear that theoretical advances in both the foundations of AI systems and the formal mathematical proofs of their correctness are necessary. We may be decades away from that, unless we somehow use AI itself to learn enough of its own algorithms and enough formal software development that it can provide those proofs itself.
We already mentioned part of this point in the opening paragraphs, but it is worth stating it formally now, as one of the 10 main points. There will be times when we have to mention mathematical concepts; there is a sweet spot past which, if we simplify too much, the material becomes meaningless. Fortunately, to understand the core ideas behind AI, a high-school-level knowledge of calculus and linear algebra is sufficient. There are very good online sites to help the reader refresh that knowledge, and we review the most needed concepts later on anyway.
On a final note about these 10 points: we quote and include video interviews, mostly YouTube and Vimeo postings, with people who have interesting and relevant views on the topics of interest to us. Please do not skip these videos; they are an essential part of the story. If a point needed to be made, and that point had already been made in a video, we chose to use the video. This way, you may find other points of interest in the video, and the experience will be richer. The clips introduce you to original thinkers in the field of AI, and they may encourage you to look up their work for a deeper understanding of the subject.
Moreover, it turns out that some of the best coverage of AI issues comes not from the AI community but from visionaries who have the technical knowledge, have had in their professional lives the opportunity to deal with big questions or lead large projects, and who have also found time to ponder the new questions ahead. This comes naturally: those of us who work in AI are mainly trying to productize AI and are concerned with using and deploying it most effectively. So, many times we plow ahead and ask the questions later. But in line with the 10 points above, we also need the wisdom of the people who are asking us to moderate our enthusiasm and think about the future implications of AI before it may be too late.
Organization of the Website
The website is organized as a set of articles: background articles followed by topic articles. The background articles should be read in the order in which they are presented, while the topic articles can be read in any order. We could have organized the website as a blog, with new entries added chronologically as they are produced, but we have good reasons not to adopt that organization. First, the data about the state of AI is changing at a fast pace, and instead of adding blog entries, we intend to continuously refine the existing articles and present the reasons and the evidence used for each refinement. We alert you in the What's New section when changes are made to an article or when new articles are added.
The second reason for adopting this organization is that discussions on the topics mentioned on this website will be held within a Facebook group; I am working on setting it up, but it will be a while. This group will provide a richer environment, where readers will be able to post information and questions, or like, share, and receive comments from others. It will be about dreams and fears related to AI in more general terms, not focused on our website. And I will be just one of the moderators, and not necessarily the most active. If you are interested, please write me at the address on the Contact page.
Inevitably, since we will be talking politics, there will be some bias. It would be beneficial to turn things around a bit and see if we can understand the process by which we form our political opinions using principles from AI; we do that in our first background article. With that in mind, the reader can look at this website as producing a statistical model of our political reality, containing a number of opinion statements whose validity is probabilistic. The probabilities assigned to these opinion statements will obviously differ from reader to reader, based on the evidence each reader has collected in the past. Our goal is to enhance the reader's understanding of the evidence used to produce those probabilities, whatever they may end up being. The intention is to inform, not to persuade.