General Concerns about AI


By Adrian Zidaritz

Original: 02/09/20
Revised: no

We have already mentioned on the home page that AI sometimes appears over-hyped and that the concerns surrounding it may therefore seem overblown. The problem with making such general arguments about AI is that they lump together very distinct types of concerns. We will now separate these concerns into two disjoint sets, which we will call Type 1 concerns and Type 2 concerns. Raising awareness about all of them is what we hope to achieve here, especially in the current political climate in the U.S., which is not conducive to finding constructive solutions.

Type 1 concerns are immediate and not speculative. Job losses due to AI, rising income inequality due to AI, immigration and AI skills, consumer identity and privacy, competition with China in AI, the relationship between government and the AI industry, AI weapons: these are clearly not overblown, and they cannot be overblown, because we experience them now, not in a distant future. These issues need our immediate attention, they have strong political implications, and hopefully they will be dealt with through sensible legislation.

Type 2 concerns are more futuristic and involve some degree of speculation. They are nevertheless just as important, and arguably even more important for our civilization, than the short-term concerns of Type 1. When you hear that AI concerns may be overblown, it is usually these Type 2 issues that are being talked about. The two main Type 2 issues are the control of AI by humans and the morality of AI relative to human morality. We are speculating, for good reasons, that AI may grow to a level comparable with human intelligence, and that once that parity is achieved an intelligence explosion will occur, with AI zooming right past human intelligence. This possibility leads to a justified worry: will we still be able to control AI, and what kind of morality do we have to instill in it so that it continues to benefit us even if we lose control over it?

There is however one concern which is not so apparent in today's media coverage, but which should rise to the top of the list: the fundamental ways in which AI will alter the world economies. (You will be forgiven if you decide to jokingly refer to it as the one concern of Type 1.5.) The economic mechanisms used today will most likely be squeezed into new forms that are as yet unimaginable; the two dominant "-isms" of today, capitalism and socialism, will most likely cease to exist and be replaced by a more flexible, AI-based, technocratic, algorithmic control of supply and demand. Today's debate among economists about whether, at this point in time, we should be boosting supply or demand will not belong to humans for much longer. The reason is simple: the decision should be based on data (even while the decision is still made by humans, hopefully!). AI, with its ability to process massive amounts of data, will be able to answer the binary supply/demand question at any time. If you recall the concept of formal decidability from the Foundational Questions article, the supply/demand question is computationally decidable. Moreover, it will be decided by a finite data center, not by a limitless theoretical Turing machine.
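To make the decidability point a bit more concrete, here is a deliberately toy sketch in Python. Every name and threshold in it is invented for illustration; no economist would endorse a three-variable model. The point is only structural: a decidable question is one answered by a total procedure that always halts with a definite verdict, which is exactly what a finite data center can run.

```python
from dataclasses import dataclass

@dataclass
class EconomicSnapshot:
    # Hypothetical aggregate indicators; a real system would ingest
    # millions of signals rather than three scalars.
    inventory_to_sales: float    # inventories relative to sales
    capacity_utilization: float  # fraction of productive capacity in use
    unemployment_rate: float     # fraction of the labor force unemployed

def boost_supply_or_demand(s: EconomicSnapshot) -> str:
    """A total, always-halting decision procedure: every input yields
    a definite answer, which is what decidability means in practice."""
    # Idle capacity and unsold inventory suggest demand is the constraint.
    slack = (s.inventory_to_sales > 1.4
             or s.capacity_utilization < 0.75
             or s.unemployment_rate > 0.06)
    return "boost demand" if slack else "boost supply"

print(boost_supply_or_demand(EconomicSnapshot(1.6, 0.70, 0.07)))
# -> boost demand
```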

In the AI Singularities and Quantum Computing article, we saw the paradigm shift that will happen when AI runs on quantum computers (it is safe to assume that it will happen, but we still do not know how). There is a third technology which will impact our use of AI, blockchain, which we will discuss in the article Identity and Trust, because blockchain is the way to establish trust on the Internet. One class of applications running on blockchain is the so-called smart contracts: software programs stored on blocks of the chain, which execute only when their conditions have been met by previous blocks or by their processing of external data. Smart contracts could contain the software necessary to fully implement the Federal Reserve, the U.S. central bank; Modern Monetary Theory, for example, would be eminently implementable by AI on blockchain.
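To illustrate the mechanics of a smart contract (this is a cartoon, not any real blockchain API; the class names and the monetary-policy condition are invented for the example), the essential structure is a stored condition/action pair that fires only when the condition is satisfied by data already on the chain:

```python
from typing import Callable, List

class Block:
    """A drastically simplified block: just a bag of observed facts."""
    def __init__(self, facts: dict):
        self.facts = facts

class SmartContract:
    """A stored condition/action pair; the action executes only when
    the condition is met by the chain's previous blocks."""
    def __init__(self, condition: Callable[[List[Block]], bool],
                 action: Callable[[], None]):
        self.condition = condition
        self.action = action
        self.executed = False

    def try_execute(self, chain: List[Block]) -> None:
        if not self.executed and self.condition(chain):
            self.action()
            self.executed = True

# A cartoon monetary-policy rule: expand the money supply once the
# inflation readings posted to the chain by an external data feed
# have stayed below 2%.
chain = [Block({"inflation": 0.017}), Block({"inflation": 0.018})]
contract = SmartContract(
    condition=lambda c: all(b.facts.get("inflation", 1.0) < 0.02 for b in c),
    action=lambda: print("expand money supply"),
)
contract.try_execute(chain)  # prints: expand money supply
```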

This central concern about economies has obvious connections with both Type 1 and Type 2 concerns. Job losses and increased income inequality due to AI are clearly connected to it. But so is morality, and that connection may be the deepest of all. How we steer our economies through the AI maze will depend on AI's morality, which we should be careful to formally specify. At any point in time, how far should AI push the wealth-redistribution needle up or down? That question has moral dependencies: how much should AI care about human pain and suffering when setting that needle? What should the Universal Basic Income (UBI) level be at any given point in time? (Hopefully you will have realized by now that with AI alongside us, some form of UBI will be absolutely necessary if we are to avoid pitchforks in the streets.) With current world resources, should it provide healthcare to all? Should the increase in the world population be stopped? Perhaps the best way to frame this central concern about the effect AI will have on world economies is by watching the following video clip showing the GDP of nations through history. Pay particular attention to where China, the U.S., and India are projected to be, and keep in mind that AI is all about data and that population levels in these nations matter, because data is generated by people. After you watch the video, you will see why we chose in this first topic article to focus on the Type 1 concerns about national security and immigration.




It's safe to say that most of us would like to see a good outcome to the development of AI. It's also safe to say that a good outcome would be an AI which remains a tool under our control and which enhances our individual well-being and the well-being of those we care about (for some of us this includes everyone on the blue dot). Now, words like good and well-being and care do not have very precise meanings, and this lack of precision becomes even clearer when we move across cultures and national boundaries. Given this hurdle, can we hope to agree on a worldwide agenda? It is likely that only a few of us will work on such an agenda; most of us will instead work on national agendas for AI, at least in the foreseeable future.



Critical Thinking in the Age of AI


It seems obvious that the least we can do when faced with the enormity of these challenges from AI is to exercise critical thinking. It is true that our current President is a uniquely polarizing phenomenon in U.S. politics, but his rise has also brought about some unexpected common reactions. Our critical thinking has been stirred up. We were awakened out of complacency. It matters less if your thinking is Democratic or Republican or Independent or Blue-with-Green-Stripes Supremacist. What matters more in our search for a good AI outcome is that we are now stirred to think, one way or another.

The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist.
- Hannah Arendt, The Origins of Totalitarianism

Hannah Arendt, one of the most influential political theorists of all time, proposes the fundamental idea that critical thinking is the best defense against political manipulation. Since according to Arendt everything around us is political, AI issues will be no exception. We will return to her ideas in the article AI and the Deliberate Faking of Data, because that is where we can use them to bind individual critical thinking together with truth in data. Most of the videos about her extraordinary work are in German, but here is a short trailer in English about Hannah Arendt and the danger of thoughtlessness:




We need a bit of a conceptual framework to continue; otherwise the discussion will be harder to follow. What are some of the things we can do, besides exercising critical thinking, to increase the odds of a good AI outcome? We will focus on a quartet of concepts which will provide a framework for our discussions; we will call it the X-Quartet. We will carry the X-Quartet in our minds and refer to it multiple times in all of the following articles. Don't think of this quartet of concepts as something particularly deep; it is just a vehicle for us to anchor our discussions.


$$\text{X-Quartet} = (\text{critical thinking}, \\ \text{character fortification}, \\ \text{truth in data}, \\ \text{formally specified morality})$$

In the X-Quartet, character fortification will mean human character fortification. The X of the X-Quartet is meant as a placeholder for whatever you, the reader, conclude that this quartet represents for you. It brings in a bit of humility, and the hope that the X will constantly invite you to question the validity of, and the assumptions behind, the Quartet. It could mean "guide for a better AI outcome"; in other words, with critical thinking, a fortified character, a careful curation of data, and a formally specified morality, we increase the odds of a good AI outcome. The first three terms are clear, and formally specified morality will be explained in the article on Superintelligence and God. But we will broach the concern about morality now, as one of our concerns about AI.



AI Morality Versus Human Morality


The terms morality and ethics are usually used interchangeably. But since we eventually aim for some form of formal specification of AI, we will adopt the distinction made in academic and legal settings: morality refers to an individual's judgments of right and wrong, while ethics refers to the rules of conduct specified by organized communities (businesses, government, etc.). So when we talk about AI, should we refer to AI morality or AI ethics? We will treat morality as descriptive (a description of what an individual believes) and ethics as prescriptive (how an organized community should conduct its activities), and we will prefer to look for the axioms of morality to teach AI, reasoning that AI's ethics will then emerge from the many other points of data from which it learns. (There is a deeper reason why we prioritize morality over ethics. There is a problem with setting goals for AI that are too narrow: AI will optimize for those narrow goals too faithfully and too literally. We will see later, in the article Superintelligence and God, that we should give AI some slack and let it determine from data the intentions behind our goals, rather than demand a faithful, and often dangerous, conformity to narrow goals. With that slack in mind, we will attempt to formally describe what we mean by right and wrong, rather than specify the rules by which AI should conduct its business.)
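A toy example makes the narrow-goal problem vivid (the scenario and all numbers are invented): suppose we tell an optimizer to minimize the average reported wait time in a clinic, and nothing else. It will discover that admitting no one scores perfectly.

```python
# Literal optimization of a narrow goal (a deliberately contrived example).
def average_wait(waits):
    # An empty queue has, quite literally, an average wait of zero.
    return sum(waits) / len(waits) if waits else 0.0

policies = {
    "admit everyone":        [30, 45, 60, 20],  # waits in minutes
    "admit only easy cases": [5, 10],
    "admit no one":          [],                # degenerate but "optimal"
}

best = min(policies, key=lambda p: average_wait(policies[p]))
print(best)
# -> admit no one: the narrow goal is optimized faithfully and literally,
#    while the intention behind it (treat patients quickly) is violated.
```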



The following proposition seems to me in a high degree probable -- namely, that any animal whatever, endowed with well-marked social instincts, would inevitably acquire a moral sense or conscience, as soon as its intellectual powers had become as well developed, or nearly as well developed, as in man.
- Charles Darwin, in "The Descent of Man, and Selection in Relation to Sex"

Darwin attributes the moral sense that humans possess to the combination of two things: well-marked social instincts and intellectual powers that are more highly developed than in other animals. If we were to apply this to AGI, the AI on par with human intelligence, then AGI lacks only the social instincts needed to develop a moral sense. It is difficult to imagine how AI could develop social instincts, so it is equally difficult to envisage how AGI could acquire a moral sense simply as the result of an evolutionary process. Since we cannot rely on such a process, we are left with the task of having to (programmatically) specify moral behavior when we design AI systems.




We will say more about the morality of AI in the Superintelligence and God article. Suffice it for now to say that we do not yet know how to specify a morally correct AI, or even what that moral correctness means. Nevertheless, we will take the plunge and think about it in the above-mentioned article. We saw in the previous article that one way to begin thinking about the future of AI is to bring the focus back to humans and beef up human values and human resilience, in anticipation of deep future uncertainty. So it quickly becomes apparent that when it comes to morality, the best we can do at present is to ensure that the humans who have decision power over the creation and use of these AI systems have a high moral sense. One practical way is to promote moral leadership in the three branches of our government, in the hope that this moral leadership will establish the larger context for the technical specifications of emerging benevolent AI systems. The following discussion broaches the subject for us, and brings in a sense of urgency:




Apparently, truth and morality are not a concern of the current Administration, but that lack of concern has in itself raised the issue of truth and morality to higher levels in our national discourse. We grew accustomed in the U.S. to the idea that we are building the most ethical and moral society practically possible on the blue dot, and to the idea that truth, ethics, and morality are as American as apple pie. We found out that this is not necessarily so, and that discovery, and the attention we have to give it, is another reason why our X-Quartet is on a healthy trajectory.

We already understand that our phones have become an extension of ourselves, and these phones are using AI at an increasing pace. With these devices we connect and exchange data with others less and less directly, and increasingly through our digital twins in social graphs, and potentially national graphs. As we saw in the Graphs of Data background article, these graphs are also using AI at an increasing pace. So nothing seems to be out of AI's reach: our jobs, our health, and in general our entire well-being are all impacted. Faced with an amount of information that exceeds our human capacity to absorb it, questions of morality (and the sorting of the good from the bad within that information) need new answers. Perhaps an easy way to visualize this impact on human morality is through the following simple picture, to which we will refer a few times:


AI Stretches Human (H) Moral Boundaries


We see this stretching of values in the polarization of politics and even of family relationships, to which AI is contributing more and more, and in the simultaneous growth of wealth and poverty. In all areas involving human morality, AI will continue to stretch us. Assuming that we want that stretch to pull more strongly towards a good outcome, we will have to teach AI what "good" means. And this will force us to make our understanding of morality very precise, so precise in fact that we can write it down in a formal language that we use to program AI. The formal specification of an agreed-upon morality seems to be the core prerequisite for a beneficial outcome of AI, and you can hopefully see a progression towards that goal in the flow of the articles on the website. There is another thread woven through the entire website: that we should teach AI to place more value on the commonality between humans and their organizations, and less value on their differences. This applies to the political world, which we take up in this article, as well as to the religious world, which we will tackle in the Superintelligence and God article.



National Security Depends on our AI


We have seen the concern with the fundamental ways AI will alter the world economies. It would be admirable if the nations of the world could find common ground on how to steer AI for the economic benefit of all. But this cooperation is unlikely to start soon, and in the meantime each nation will be concerned with its own national security in an AI-driven economy. In general, 82% of the U.S. population is at least somewhat concerned with AI issues, and the sentiment leans more towards the dangers of AI than its potential benefits. If we could pinpoint a moment when a critical shift in awareness occurred, similar to the Sputnik moment of the past, it would be the creation by Google's DeepMind of AlphaGo, the AI program that beat the best players of the ancient Chinese game of Go, which we covered at length in the article Main AI Concepts. That moment also raised awareness in China itself, and it may have been the catalyst for the current sustained Chinese push into AI technologies.




National security is a Type 1 concern, and national security depends on economic security. The focus of the current Administration has been on economic competitiveness, and in itself this focus offers great benefits. But AI is becoming a disproportionate part of that economic competitiveness, and alarmingly, one has the sense that the Administration does not grasp this fact at the appropriate level. To his credit, President Trump identified the economic insecurity that has gripped parts of the country and successfully translated that intuition into political power. The economic results of the Administration have been reassuring, although the parts of the country that needed an economic boost the most have not actually seen the fruits of the economic good times. In an ironic twist, the one state that has seen the most of the economic boom does not approve of the President's conduct and has received the least support from the President; California has risen during this Presidency to become, by itself, the 5th largest economy in the world, surpassing the United Kingdom and France, countries with much larger populations. It is not insignificant for our discussion that California also happens to be at the forefront of AI development, and of the policies surrounding it, in the U.S.

A good part of the political energy in the U.S. has been spent on the Mueller investigation. It was highly unlikely from the very beginning that there was an organized collusion between the 2016 electoral campaign of then-candidate Donald Trump and the Russians to help elect him to the presidency of the United States; some less serious and uncoordinated collusion by some individuals did take place, obviously, but a coordinated collusion would not have made much sense. (Obstruction of justice, the other point of the Mueller investigation, is a different matter, but as it is not related to this paragraph, we will leave it untouched.) At the same time, we have lost track of the more important question: what do we do to prevent attacks by outside forces, using very sophisticated AI techniques like those of Cambridge Analytica, on our computing infrastructure and especially on social media AI systems such as Facebook? Such attacks have been well documented by both our intelligence agencies and Facebook itself. Does their prevention need coordinated government insight, legislation, support, and oversight, or should the high-tech companies be left to fend for themselves? We will argue that the tension and mistrust between Silicon Valley and the current Administration cannot continue, in either direction. And turning this mistrust into cooperation is far more important than whether there was some attempted collusion with Russia by individuals other than the President.

On the other hand, the military disengagement from various parts of the world, which the Trump Administration has pursued, is an important positive. There is no reason to spend dollars, which are urgently needed to support a competitive AI, without any tangible benefits. The real or merely perceived attempt to develop a friendship with Russia is also a positive. The recognition that Chinese trade needs to be dealt with is essential, especially when that trade involves high-technology items. The immigration policies, on the other hand, are nothing short of a disaster, and we will dedicate many paragraphs to that point in the section below.




Maybe the conclusion to draw, within the context of our national security, is that no matter how these factors weigh with us individually, it is in our collective interest to see that the Presidency of Donald Trump is a successful one. Part of that success is to oppose the President's appetite for authoritarianism and disdain for our allies, because that appetite and disdain will definitely not result in success. More important, and more visionary, part of that success will be a push for increased awareness of the enormous challenges that will be brought about by the rise of AI, and for a substantially larger financial investment in AI projects of national interest.



AI in the U.S. Cannot Function without Immigration


Immigration is also a Type 1 concern, i.e., urgent and powerful. The issue of building a wall with Mexico as a way to control illegal immigration has been diverting much attention that could be spent elsewhere. Illegal immigration has been perceived as a source of low-skill labor and a threat to the part of the U.S. population that would otherwise be performing those jobs. This debate goes on without any consideration that soon these low-skill jobs may not exist at all for humans, whether American citizens or legal/illegal immigrants; those jobs will be done by AI-driven automated systems. The development of AI systems for agriculture and real-estate construction, for example, has been well under way for some time. The real problem is job displacement by AI, not by illegal immigrants. Illegal immigration has gone unresolved for many years, and it will most likely not be resolved before it becomes irrelevant.

(Here is something a bit more daring for the reader to consider. We have seen the projections for the GDP of China, the U.S., and India in the video above. GDP was clearly related to population levels, and because of AI, it will become even more related to them. Based on these correlations, one could make the strong argument that we should be more afraid of a situation in which people no longer want to come here than of one in which they do. Now let's look at a futuristic scenario. In 2030, you wish to immigrate to the U.S. You need no papers to bring with you, other than some identification card, a passport for example. You show up at the border or you land at an airport in the U.S., and you walk into a glass scanner similar to the ones used at security checks. The machine asks you a few carefully selected questions, and in 5 minutes you exit. The exit has two doors: one says "Welcome to the U.S.", the other says "Sorry and Good Luck". One of these doors leads to the baggage claim and taxi area; the other loops back to a gate for a flight back home. Only one of them is open, of course. Is this weird? Not if you realize how much can already be known about you through existing AI techniques, based on your voice, your face, your irises, your body language, etc.; we are not even alluding to the machine doing an fMRI scan of your brain, which, as we have seen in the section about fMRI in the AI Versus Human Intelligence article, would pretty much know everything there is to know about who you really are.)

The big problem is that the immigration fight has created the impression that the U.S. doors are closing to all immigrants, not just illegal immigrants. Those of us working in AI have seen that perception take root even among people who arrived here on legal visas. If highly skilled people from around the world, especially those with AI-related skills, choose to stay away from the U.S., it would represent a national emergency, because our colleges and universities produce only about 10% of our overall need.




Here are some examples of misguided policies of the current Administration, policies that sacrifice the need for the U.S. to stay ahead in the technology race to the political goal of pleasing those who believe that immigrants are taking jobs away from American workers. In the vast majority of cases, jobs that would be filled by immigrants simply go unfilled.

In May of 2018, the Administration (Department of Homeland Security) announced that it intended to eliminate the International Entrepreneur Rule (IER), because the department believed that the rule represented an overly broad interpretation of parole authority, lacked sufficient protections for U.S. workers and investors, and was not the appropriate vehicle for attracting and retaining international entrepreneurs. The rule had established a regulatory process and criteria for certain foreign entrepreneurs to be paroled into the country and seek U.S. investment to develop and grow businesses. It was meant to allow immigrant entrepreneurs to live in the U.S. for two to five years if their startups met certain benchmarks of success. A Washington Post article reported that the IER, scheduled to take effect in July 2017, would have benefited approximately three thousand entrepreneurs.

A number of changes to the H-1B visa were the result of the so-called "Buy American and Hire American" order, which directed the agencies involved with the H-1B visa program to suggest reforms that would implement it. The Trump administration revoked a guideline, issued in 2000, which designated the position of computer programmer as a protected occupation under the H-1B program. This reduces the opportunities for talented foreign software engineers to join the U.S. workforce. While the administration framed the policy change as safeguarding entry-level STEM jobs for American students, the current situation is that there are many programming jobs that cannot be filled by the domestic supply. According to Code.org, there are more than 500,000 open computer science jobs, but our colleges and universities graduate only about 43,000 students with computer science degrees per year.
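As a quick sanity check on these two figures (using only the numbers just quoted from Code.org), the domestic pipeline covers well under a tenth of the open positions, consistent with the roughly 10% figure we mentioned earlier:

```python
open_jobs = 500_000      # open computer science jobs (Code.org figure)
grads_per_year = 43_000  # U.S. CS degrees granted per year (Code.org figure)

print(f"{grads_per_year / open_jobs:.1%}")  # -> 8.6%
```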

Finally, the administration has made several small changes to the core H-1B application process which make the process slower and a visa more difficult to obtain. The administration terminated the Interview Waiver Program, increasing wait times at U.S. consulates, which are now tasked with interviewing all H-1B visa candidates. It has also suspended a program which allowed applicants to pay to fast-track their visa-granting process from the usual 3-6 months down to 15 days. The Administration has increased the number of cases referred for "administrative processing", a label which leads to delays in the application process. And of course, the result of all the restrictions, delays, and uncertainty the Administration has added to the H-1B process is that H-1B applications have been steadily declining.

New enrollment of international students at U.S. universities declined by more than 10% between the 2015-16 and 2018-19 academic years, according to this report, and fewer foreign students have been given F-1 student visas, which enable them to work in the U.S. for a short period immediately after graduation. And it was reported by The Financial Times in 2018 that the Trump administration considered banning all Chinese students from the United States. "Stephen Miller, a White House aide who has been pivotal in developing the administration's hardliner immigration policies, pushed the president and other officials to make it impossible for Chinese citizens to study in the U.S.," according to the newspaper. All of these measures point to a dangerously naive interpretation of what U.S. national security means.

Nearly two-thirds of the researchers in science and technology in the U.S. are immigrants. Some of them are also university professors in their countries of origin. These researchers, as well as other AI specialists and programmers, are developing the technological backbone that will allow the U.S. to reach its national security goals, generate trillions of dollars of economic growth, and improve and save lives by revamping our healthcare system. These are precisely the people who will help us win the AI race and add substance to the objectives outlined in the President's Executive Order on Maintaining American Leadership in Artificial Intelligence.

Immigrants are responsible for 39 percent of the Nobel Prizes won by Americans in chemistry, medicine, and physics. By acting on unfounded immigration insecurities, the Trump administration is losing sight of the principles and practices that have made America the world's most prosperous nation. In the name of Buying American and Hiring American, this administration is threatening the intellectual heart of our society, and the consequences of these policies will reverberate throughout our economy. We need to expand the visa programs for all professionals working in STEM fields, and especially for AI specialists, if we want to prepare the U.S. for the new AI economy and ensure our continued prosperity in the coming decades.




For immigration to be looked at as a national security concern, given the dire need for skilled workers in the AI field, we will need leaders who understand that falling behind in the AI race is a very dangerous option. We need political leaders with some curiosity and knowledge about AI, and especially about the impact of a huge loss of jobs: not just blue-collar jobs but an even larger number of white-collar jobs. The problems will be complex, and they will involve issues like a Universal Basic Income and the re-evaluation of the social worth of various professions. One may think that such political leaders are hard to come by, but this is not so; we have already had a leader who showed interest in AI and spoke insightfully about it. Notice in the following video President Obama's mention of the emerging concept of augmented intelligence: the idea that ultimately we are not facing an either-or proposition, but that we will need both intelligences together, with AI being an extension of human intelligence.




On the international level, the world is becoming bipolar, both economically and with respect to AI development, with the U.S. and China opening a gap between the two of them and the rest of the world. This polarity brings in a series of new questions, and hopefully also the realization that a Cold War-like search for dominance will not be the right approach in the presence of AI systems that function close to the human level, and that the two countries will have to design cooperative policies in the field of AI, along with political entities supporting those policies. We are dedicating a full article to the emergence of this new bipolar world, AI In a Bipolar World.

It is a big mistake to look at Chinese technological advancement as coming exclusively from copycat methods. It is true that much of Chinese technology started by copying the West. But that was then. Right now there is much innovation and an entrepreneurial spirit in China, which will indisputably have consequences in the AI race. Just as China's progress seems to reach new heights, there is currently legislation in the U.S. aimed at hindering China's access to AI research. That is not a good idea, because the U.S. has benefited enormously from open research, coming from within and from without. Policies regarding investments in, and the development of, AI by private companies are needed to protect them from espionage, yes, but research into AI is better left open.

Stephen Hawking makes a number of interesting points, very much related to our topic. First, notice that the points are being made at the GMIC conference held in China; as we point out repeatedly, some form of cooperation with China will have to emerge. At 14:22 in the video, Hawking says that history has been mostly a study of stupidity, and that we are now entering an era of intelligence. Second, watch the questions at the end of the interview, coming from some of the Chinese leaders in science and technology. It is very interesting for us that at 23:05 Hawking is asked by Shoucheng Zhang (whose Stanford University presentation on the relationships between AI, Quantum Computing, and Blockchain we will see later, in the Identity and Trust article) what he would write on the back of an envelope; Hawking answered that he would write Gödel's incompleteness theorems, with which you are hopefully somewhat familiar after reading the Foundational Questions article.




It's time to summarize before we tackle specific topics in the following articles. Let's hope that awareness of AI issues in the U.S. will continue to grow to the point where our government is awakened to take a bolder view of AI. If we had all our wishes met, what would these policies and regulations achieve? We would strengthen our national security; re-train our workforce towards new AI-driven jobs; help the best AI scientists and engineers from all over the world immigrate to the U.S.; fund AI projects that maintain the U.S. technological edge; strengthen the collaboration between industry and government; promote fair competition with China while keeping open access to research; protect capital investments in AI from espionage; and strengthen the digital identity of our people across all services, private or governmental, while giving them control over the degree of security/privacy/convenience with which they access these services. It's a mouthful, and the rest of the website tries to increase its nutritional value.