Government versus High-Tech:
Confront and Cooperate


By Adrian Zidaritz

Original: 02/09/20
Revised: no

Everything in this article is predicated on the following extraordinary picture. Between 2009 and 2019, the ranking of the world's 5 most valuable companies was reordered so that all 5 are now U.S. high-tech giants. No one could have foreseen this in 2009. It should come as no surprise, though, that these 5 are also the companies making the largest investments in AI. Below we will discuss the tenuous relationship between these 5 companies and the U.S. government with respect to consumer privacy, the need for regulatory oversight, and the potential breakup of some of these companies under new antitrust laws.




For what it's worth, we will argue against the breakup, especially in light of the other interesting part of the picture: the appearance of 2 of the Chinese high-tech giants, Alibaba and Tencent (also among the heaviest investors in AI in China), among the top 10 companies. Not only is there no talk in China of breaking up its most successful companies, but recent news has been that the Chinese government will take at least a 1% stake in each of the Chinese high-tech giants and that government representatives will sit on their boards. Moreover, China's planned governmental expenditures on AI dwarf those of the U.S. government. The basic argument in the U.S. regarding the AI race has been that our dominance will be assured by our high-tech giants, so no large-scale government funding is necessary. Does it make sense then, given the intense competition with China in AI, to break them up and focus exclusively on antitrust and other internal U.S. matters? Or should the alternative, cooperation in search of appropriate regulation, be nurtured by both sides?

To be fair, the tenuous relationship has experienced strains from the other direction too. The controversy over the military Project Maven (which drew protests and resignations from Google employees) and the NSA surveillance program (whistle-blown by Edward Snowden) have fomented mistrust of the government's intentions among the employees of the high-tech companies. Following are 2 videos we selected that show this bidirectional unease. We should also mention that both Google's Maven contract and the NSA surveillance program have since been terminated, so hopefully some realignment of trust and cooperation will take place.



Less discussed has been Google's willingness to bend to Chinese conditions for doing business in China. Larry Ellison presents a very clear picture of where our priorities should lie. Google went out on a limb by refusing to cooperate with the U.S. military while at the same time agreeing to operate under censorship in the Chinese market. As we have mentioned repeatedly, the U.S. government does not have the AI budgets that the tech giants have, and our only chance of competing with large government-controlled investments in AI is for our high tech to cooperate with the DoD, not with Chinese censorship: (YouTube) Google's push into China is "shocking": Oracle's Larry Ellison.



Consumer Privacy Protection


Most people with an online presence in the U.S. are noticing that something beyond their control is happening with the data they enter on various sites. They see with unease that their digital twin seems to know more than they would like it to. AI systems constantly process the data stored in these digital twins to derive additional insight into behavior. In many instances, if you are very active online and make recordable choices, this digital twin already knows more about you than most of the people close to you, such as friends and relatives.

For example, in a 2015 article, Computer-based personality judgments are more accurate than those made by humans, researchers established numerical measures of how much of a person's personality is revealed by their Facebook Likes. The average Facebook user has produced around 250 Likes. Compare this number with the thresholds the article calculated: by analyzing just 10 Likes, the model could predict the subject's personality more accurately than a work colleague; with 70 Likes it surpassed a friend, with 150 Likes a family member, and with 300 Likes a spouse.
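The mechanics behind such predictions can be sketched in a few lines. The study itself fit regularized (LASSO) linear models over millions of real Like vectors; the toy below, with invented Likes and trait scores, is only a schematic stand-in for the idea: learn a per-Like weight from users whose personality scores are known, then score a new user from the Likes they share.

```python
# Toy sketch of Like-based personality scoring (illustrative only; the
# 2015 study used LASSO regression on millions of real Like vectors).

# Training data: which Likes each user clicked, plus a known
# "openness" score (0-1) obtained from a personality questionnaire.
train = [
    ({"jazz", "poetry", "hiking"}, 0.9),
    ({"jazz", "poetry"}, 0.8),
    ({"nascar", "hiking"}, 0.4),
    ({"nascar"}, 0.2),
]

all_likes = sorted(set().union(*(likes for likes, _ in train)))

# Learn one weight per Like: the mean trait score of users who clicked
# it, centered on the global mean (a crude stand-in for regression).
global_mean = sum(score for _, score in train) / len(train)
weights = {}
for like in all_likes:
    scores = [score for likes, score in train if like in likes]
    weights[like] = sum(scores) / len(scores) - global_mean

def predict(likes):
    """Score a new user: global mean plus the average weight of their Likes."""
    hits = [weights[l] for l in likes if l in weights]
    return global_mean if not hits else global_mean + sum(hits) / len(hits)

print(round(predict({"jazz", "poetry"}), 2))  # leans toward high openness
print(round(predict({"nascar"}), 2))          # leans toward low openness
```

The more distinctive Likes a user has recorded, the more weights contribute to the sum, which is the intuition behind the 10/70/150/300 thresholds above.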




The concern about consumer privacy protection is a Type 1 concern. After we watched the video This Will Change How You See Everything in our first background article, we realized that the concept of privacy has changed and that perhaps we are not aware how much about us is known to the outside world. We can switch our phone to airplane mode and it still tracks us and stores information about us, ready to send it to the apps running in the background when we reconnect. The only way to completely protect our privacy is to turn the phone off entirely, which hardly seems like a good solution. In all software matters there has always been a fundamental trade-off between convenience and security. It is not an easy trade-off to resolve, and resolving it will take politics and will. China has forged ahead without much concern for privacy, and its citizens have accepted this, doing most of their transactions on their mobile phones. Judged from the outside, the system seems to have great benefits. But this will obviously not work in the West.

Europeans have been at the forefront of creating legislation that protects the privacy of their citizens while still letting them enjoy the conveniences brought by new technology. AI systems currently have a deep problem: we do not know why they make the decisions they make. The quest for better interpretability of their results has been stronger in Europe, where the European Union's General Data Protection Regulation has been passed. It is probably the most forward-thinking regulation regarding AI: it gives users the right to learn how the algorithms that handle their data come to their conclusions. The interpretability of AI results has made progress in the U.S. as well, but more as a high-tech initiative than as governmental oversight. Notable among such efforts is IBM's Watson OpenScale.

The profiling of users without their explicit consent is also being examined by governmental agencies in Europe. In May 2018, the European Union began enforcing a set of regulatory rules aimed at protecting consumers from this unwanted profiling. These rules cover personal data collected by Internet companies, essentially requiring the companies to describe in plain language how they use the data. The underlying idea is that by collecting personal data and applying AI techniques to it, important private characteristics of users can be deduced.

The answer to all these concerns about privacy does not lie in closing our Facebook accounts or turning our phones off. Nor does it lie in fearing Facebook and Google; their products have greatly enriched our lives. The answer lies in establishing, through legislation and policy, the correct mechanisms of cooperation between government and private enterprise that allow YOU to control what is collected about you and with whom it is shared. To a large extent, those control mechanisms were already in place, but they were neither easily understood by nor well explained to the general public, and they were not part of a serious political debate. In a much deeper sense, the AI models being built about us and our likes and dislikes are not easily interpretable even by the people who write the software behind them, let alone by the people who use them. Obviously, something needed to change, and indeed changes are coming. In California, as of January 1, 2020, you have the right to know what is being collected about you and the right to say "no", that is, the right to object to certain types of information being collected about you and shared with others. It is the law:
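The kind of control the California law envisions can be sketched as a hypothetical service (the class and method names below are invented for illustration; the actual statute imposes obligations on businesses, not a specific API): a "right to know" query that returns everything held about a user, and an opt-out flag that blocks any further sharing.

```python
# Hypothetical sketch of a service honoring California-style consumer
# rights: "right to know" what is collected, and "right to say no"
# (opt out of sale/sharing). All names here are invented.

class ConsumerDataStore:
    def __init__(self):
        self._records = {}       # user_id -> {category: value}
        self._opted_out = set()  # user_ids who said "no" to sharing

    def collect(self, user_id, category, value):
        self._records.setdefault(user_id, {})[category] = value

    def right_to_know(self, user_id):
        """Return everything held about the user, in plain form."""
        return dict(self._records.get(user_id, {}))

    def opt_out(self, user_id):
        """Record the user's objection to their data being shared or sold."""
        self._opted_out.add(user_id)

    def share_with_partner(self, user_id):
        """Data leaves the service only if the user has not opted out."""
        if user_id in self._opted_out:
            return None
        return self.right_to_know(user_id)

store = ConsumerDataStore()
store.collect("u1", "browsing", ["news", "sports"])
store.opt_out("u1")
print(store.right_to_know("u1"))       # the user can always see their data
print(store.share_with_partner("u1"))  # None: sharing is blocked after opt-out
```

The point of the sketch is the asymmetry: transparency toward the user is unconditional, while sharing with third parties is conditional on the user's consent.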






Governmental Regulatory Oversight


What regulatory approaches are available to the government in the case of AI? Because of its wide reach into all human affairs, AI presents completely new challenges for government, and the needed regulatory approaches are already running up against a shortage of lawyers qualified to implement them. Law schools will have to produce graduates who specialize in AI issues. Regulatory responses to disruptive technologies have been mounted before, so some general experience must have accumulated. And indeed it has; here is a presentation of the 3 major responses to a disruptive technology, particularly one with many unknown aspects.




As we have already alluded to above, there has been a sense of invulnerability in the U.S. regarding AI, a general feeling that the AI giants (Google, Amazon, Facebook, Microsoft, Apple) are so far ahead in the worldwide AI race that no governmental effort is necessary to protect the data and the intelligence they collectively accumulate about the U.S. population. In the opposite direction, we have seen a lot of noise around groups inside these high-tech companies refusing to collaborate with the government or to have their work perceived as having military defense applications. This may or may not be the case, but given the efforts that China and Russia are making to develop AI-based weapons (and by weapons we do not mean just equipment; the disruption and manipulation of our social networks could easily be turned into weapons more lethal than election interference), these questions should be brought up and debated politically. Let's start by listening to a very sensible argument that neither extreme is good: neither denying the government any access to national systems run by AI companies, nor a complete loss of privacy to the government.




In the section above we discussed the push-pull conflict between convenience and privacy/security. What about the conflict between convenience and national security? The U.S. government has lagged in its approach to national security as it relates to the convenience of social media. The 2016 election interference was based in part on the convenience of logging in to Facebook and creating as many fake accounts as one wished; there was no barrier to creating a fake account. Those accounts then posted opinions on U.S. political life, touching on very sensitive issues such as fear of Islamic terrorism, fear of immigration, and fear of socialism. Their mission was to sow confusion and mistrust. As we will see in a dedicated article, AI: Most Powerful Weapon Ever, the Russian security services have a long history of success in targeting emotions through propaganda, without doubt running the most efficient such operation in the world. We are beginning to wake up, but not in proportion to the magnitude of the problem:




This article aims at gauging awareness of AI issues and making a small contribution to raising that awareness. It is difficult to appreciate from media coverage the magnitude of the changes ahead. What happened with the Facebook attacks should alert us that the openness of our society, wonderfully healthy as it is, exposes a very deep vulnerability that can be exploited in very inexpensive ways. At the same time, while awareness of AI issues is not where it should be among the general public, it is quite high in the business world. Almost all businesses are considering AI or already incorporating it into their plans and products. This awareness will drive the use of AI ever higher, and the need for the government to step in and provide regulatory oversight should be clear.

The discussions about the need for governmental regulation have also been buttressed by some eye-opening failures in AI development. All software systems, not just AI systems, occasionally suffer catastrophic breakdowns. But AI systems that break down, or act in unintended ways, open up a whole new category of failures with unforeseen consequences. Some of these failures are severe, like the accidents involving self-driving vehicles; others are hyped up on the basis of unreasonable fears, as in the case of the Facebook bots that invented a language of their own and had to be shut down.

The need for government and high tech to work together has received more attention since the Obama Administration's National Artificial Intelligence Research and Development Strategic Plan, published in October 2016: 48 pages of what should have been a promising beginning. And Congress is slowly starting to catch up; below you will see a clip of the first congressional hearing on AI, and a clip of the most recent hearing, held in June 2019 on the topic of AI and deepfakes. The first clip is trimmed after the opening statements of the two Senators, before the testimony of the panel of experts; the second is trimmed of the Senators' opening statements and includes the opening statements of the panel of experts, but not the open question-and-answer session that followed. The reader might of course choose to watch these proceedings in their entirety, because they are full of information relevant to our subject.



It is reassuring to see that concern over AI and its role in U.S.-China relations has made its way into Congressional committees. This was already visible at the first congressional hearing on AI, held in 2017. There is something Senator Cruz says, namely that "intelligence, artificial or otherwise, is not something we deal with often in this Congress", which is good-natured humor, but there is something more to it, given the dismal approval ratings that Congress has long had with the American people. While the substance of the hearing was of course provided by the 4 technical witnesses, let's not underestimate the fact that at least the 3 Senators and accompanying political figures took the time and interest to hold this first hearing. It was the beginning of an exchange of ideas between technical people and politicians, and it set a good precedent. Here is an intense exchange representative of the concern with monopoly and regulation, showing that government and high tech can find common ground and that effective regulation is something both sides would welcome.






AI and Governmental Surveillance in the US


The data that accumulates in the high-tech graphs is what lies behind the wealth of these companies, and it sometimes feels as if we are being monitored while producing this data, without our consent. The video below even treats this as a new type of surveillance: surveillance capitalism. It is a concept well worth pondering as we think about regulatory oversight. But "surveillance capitalism" sounds a bit too nefarious; the tech companies are only steering your consumption of the products their customers, the advertisers, are selling, so the word surveillance is too strong a characterization of the new high-tech business model.




This example also shows the degree to which your Facebook digital twin knows you. (We should also add that your Facebook profile has many other sources of information: your postings, the postings you read, the groups you belong to, etc.) Facebook is a service you access via a secure login, but your browsing history is available for AI processing without any such protection, from any device you use to browse the Internet: your desktop, your laptop, your smartphone, your TV, etc. Data-broker firms can build a profile of you even richer than the profile Facebook builds from your Likes and your other actions on the site. This data is sold to advertisers, and there are no real regulations governing its usage. We have seen above that the political will to produce these regulations is beginning to form, on both the government side and the high-tech side.

There is, however, a bigger danger: these digital twins (your profiles built by various entities) have already been exploited by malicious foreign entities and could continue to be exploited. These are the seeds of a different kind of war, a most lethal AI-based war. We will analyze the potential use of psychological manipulation of the nodes in these AWI graphs as an offensive weapon in the following article, AI: Most Powerful Weapon Ever; in this article we focus on government versus AI within the U.S.

One question on many people's minds is whether the U.S. government, through its intelligence agencies, is spying on the American people and using AI techniques to build complex profiles of our citizens. Edward Snowden has been at the forefront of the argument that the answer is yes. You may consider Snowden either a hero or a villain, but our purpose here is to judge this question independently of that opinion. Let's start by listening to Snowden.




Snowden offers us a unique opportunity to exercise our critical thinking, because he is right about some things and wrong about many others. Let's begin our analysis by recognizing that an NSA surveillance program did in fact exist, and that its seeds lay in the response to the 9/11 terrorist attacks. The goal of the program was to prevent such attacks in the future, and no one can argue with that goal. We will discuss at length in the article AI: Most Powerful Weapon Ever the big problem with responses that rely only on the signatures of previous attacks. So-called zero-day attacks are new attacks whose signatures are unknown, so the methods for detecting them must differ from signature-based methods, and increasingly AI is being used for these new methods.
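The contrast between the two detection styles can be sketched concisely (the signatures and traffic counts below are invented; real systems use far richer features and learned models than a simple z-score): signature matching fails on anything never seen before, while a baseline-deviation test needs no fingerprint at all.

```python
import statistics

# Signature-based detection: match events against known-bad
# fingerprints. Useless against a zero-day, whose fingerprint
# is by definition not yet in the list.
KNOWN_BAD = {"payload_xyz", "exploit_abc"}

def signature_detect(event):
    return event in KNOWN_BAD

# Anomaly-based detection: flag behavior that deviates sharply from a
# learned baseline, no fingerprint required. Here the baseline is the
# historical count of outbound requests per hour.
baseline = [102, 98, 110, 95, 105, 99, 101, 97]  # normal hourly counts
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

def anomaly_detect(count, threshold=3.0):
    """Flag a count whose z-score against the baseline exceeds the threshold."""
    return abs(count - mu) / sigma > threshold

print(signature_detect("never_seen_before"))  # False: the zero-day slips through
print(anomaly_detect(480))                    # True: the behavior is abnormal
```

This is why the shift toward AI-based methods matters: the anomaly detector flags the attack without ever having seen it before, at the cost of possible false alarms that a signature list never produces.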

Given this need to detect attacks before the damage is done, it would seem quite understandable, from the perspective of the NSA's mission of defending this country against terror attacks, that profiles of our citizens would represent a pretty good solution. By continuously monitoring those profiles, suspicious activity can be detected and acted upon. Is it possible that, had such a surveillance program been in place before the 9/11 attacks, the 19 hijackers preparing them, plus the entire infrastructure supporting them, would have been detected? Yes; in fact it would have been close to impossible NOT to detect at least some of them.

The massive NSA data center in Utah was built with huge storage capabilities, estimated to be enough to store all the phone conversations (and other forms of communication) in the U.S. for decades. Let's mention that the surveillance program has since been terminated; you can google for articles covering the termination. Let's also mention that the monitoring was passive, meaning that data about our communications was stored in the center but nothing was done with it. The idea was that only if suspicion arose about someone would a warrant, signed by a judge under proper judicial procedures, be issued for that stored data to be used and a profile of that individual built on demand. Ordinary citizens really had nothing to fear, but the issue is not dead, and we will have to debate it properly.
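The access model just described, data at rest that becomes usable only under a judge-signed warrant naming a specific person, can be sketched as a gate in code. This is a deliberately simplified, hypothetical design; real procedures involve auditing, minimization, and court oversight far beyond it.

```python
# Hypothetical sketch of warrant-gated access to stored communications:
# data sits passively in the store, and a query succeeds only when it
# carries a judge-signed warrant for that specific subject.

class Warrant:
    def __init__(self, subject, signed_by_judge):
        self.subject = subject
        self.signed_by_judge = signed_by_judge

class PassiveStore:
    def __init__(self):
        self._records = {}  # subject -> list of stored communications

    def ingest(self, subject, record):
        """Passive collection: data is stored, nothing else happens."""
        self._records.setdefault(subject, []).append(record)

    def build_profile(self, subject, warrant):
        """Access is refused without a valid warrant naming this subject."""
        if warrant is None or not warrant.signed_by_judge:
            raise PermissionError("no judicially approved warrant")
        if warrant.subject != subject:
            raise PermissionError("warrant names a different subject")
        return list(self._records.get(subject, []))

store = PassiveStore()
store.ingest("alice", "call 2019-05-01")
try:
    store.build_profile("alice", None)  # an extra-judicial query
except PermissionError as e:
    print("refused:", e)
print(store.build_profile("alice", Warrant("alice", True)))
```

The weakness discussed in the next paragraph is exactly that, in practice, nothing technical forces every query through `build_profile`; an insider with raw access to the store bypasses the gate entirely.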

Edward Snowden was the most public of the NSA whistleblowers, but hardly the only one. One big problem with this kind of solution is that the stored data can be searched by people OUTSIDE of any judicial proceeding and without any authority to do so. A sitting President of the United States (Richard Nixon) faced impeachment and had to resign over this kind of extra-judicial spying on his political opponents, so matters will have to change. President Trump's relationship with our intelligence services is the most dysfunctional of any Administration's, which does not seem to be the basis for a healthy debate. But at the same time, the President is right to harbor a suspicion that these services sometimes act outside their legal responsibilities; some "deep state" suspicion is well founded, given what we now know about the NSA surveillance program. It would be more constructive, though, if the President cooperated with Congress to institute a bipartisan legal framework for the review and revamping of the methods used by our intelligence services. Obviously, U.S. democracy cannot survive if the NSA works unchecked. Should Executive Order 12333, originally signed by President Reagan and amended by Presidents Bush and Obama, be reworked? Is it OK to allow governmental agencies other than the NSA, the CIA, and the FBI access to the data stored about us? All such questions need to be debated properly. (I had to apply the ideological Turing test to myself in order to include a Sean Hannity podcast, because there aren't many things on which I agree with him; but this time, his points are the best I could find.)






AI and Governmental Surveillance in China


The situation is much different in China, our main competitor in the coming years, economically and especially in AI. As an AWI system, the Chinese Social Credit System will optimize for a set of goals. What exactly are these goals? They are both economic and social, and they revolve around the ideas of trust, honesty, and integrity. The system aims to raise the level of trust between citizens and the government, between businesses, and between individuals. One has to understand that China has long suffered from a lack of these attributes in almost all aspects of its society. The same goals were achieved in the U.S. by establishing strong democratic institutions, but China does not have those institutions, or a tradition of democracy, so it has to approach these matters in a top-down manner, which inevitably appears heavy-handed. Fraud and corruption are much more widespread in China than in the U.S., and the urgency of doing something about them is very different. Of course, nothing prevents the government from using the system to dominate political life in China and to squash dissent in the name of maintaining social order. And indeed, we now know that the government does exactly that:




But at the same time, one cannot deny that, even with that heavy-handed Communist approach, China has risen at an astonishing pace and now puts food on the table of 99% of its citizens. By all accounts, China has been a success story, and its Social Credit System may end up being a success as well, once the bugs, the excesses and misfirings, and its Orwellian features are dealt with. That success cannot be measured by Western standards. Moreover, our most immediate concern should be the interface between that system and the West. Do we have to like this system in order to understand it? As a centralized, powerful AI, could the Chinese Social Credit System offer its citizens a larger degree of protection from the outside world? In that respect, the U.S. population has been left defenseless against outside forces, as witnessed by the daily hacking attacks on the Social Security system, banks, financial credit agencies, and even governmental intelligence agencies.