AI: most powerful weapon ever


By Adrian Zidaritz

Original: 02/09/20
Revised: 10/13/20, Revised: 06/15/21




It started with rocks and sticks, and it has never stopped since. History teaches us that whenever humans develop a new technology, that technology is invariably weaponized. And so it is that AI is being weaponized right now, in an open international race. Russian President Vladimir Putin has stated his belief that World War III will be fought with AI weapons, and there is no reason to doubt him; he is obviously not the only one preparing for that possibility. The consequences of this arms race in AI are hard to fathom.




AI is the ultimate technology to be used for weapons, in many respects superior to nuclear, biological, or chemical weapons. We can assume that a sufficiently powerful AI system will have learned how to disable an opponent's nuclear, biological, or chemical trigger mechanisms. It would not even need a physical connection (wired or wireless) to those mechanisms; it could simply influence the minds of the humans in charge of them. Calls to contain the galloping race towards AI weapons have been, and will continue to be, insufficient. Even calls to ban completely autonomous AI weapons, i.e., weapons that have no humans in the decision loop, have not produced the desired results at national levels, despite enjoying wide support in the AI community. The AI weapons race is fully on, and the fate of the pale blue dot hangs in the balance:






The U.S. Defense Industry Is Way behind Silicon Valley in AI


What exactly does this mean? The fact is that the U.S. government is no match for the massive spending by the tech giants on AI research and development. Does this mean that the U.S. government should give up and rely entirely on the good will, the business acumen, the survival instinct, and the cooperation of the Silicon Valley giants when it comes to national AI defense? This is clearly not the situation in China, where the government can and does dictate cooperation with the tech companies when it comes to national defense. Do we need a massive Manhattan-style government project to maintain an edge in the applications of AI and quantum computing to national security? We will see at the very end of the article that some smaller initiatives do exist, and they are to be applauded, but nothing that compares with the scale of the efforts in China. Let's listen to Boeing's VP of Autonomous Systems (which of course include autonomous weapons).




The disparity in AI funding by the government creates new economics for weapons development in the defense industry, and those economics, paradoxically, are accelerating the need for AI. It is cheaper to produce and operate autonomous weapons than to continue producing and operating weapons that expose their human handlers to harm. It is also politically much more appealing for states to use weapons that do not involve humans, because casualty rates would be reduced, lowering ordinary citizens' opposition to their use. What's not to like about fighting wars, if they are being fought by machines instead of people!?


Not only is there a gap in funding between the tech giants and the government, there is also reluctance within the tech community to participate in military-related projects. We will see below that Google, as a result of internal backlash by its employees against its participation in the controversial military project Maven, had to publish a set of principles in June 2018 that essentially committed the company to staying out of AI projects with military uses. Microsoft has taken a far more accommodating stance in its work with the government, and fortunately even Google's position has recently mellowed into a more reasonable stand. But this reluctance will surely surface again, not just at Google but throughout the high-tech industry, and the ideological conflict with the current Administration does not help.

Part of this reluctance also stems from Edward Snowden's publication of confidential NSA documents. Those documents revealed a previously unknown level of cooperation between the intelligence services of the five anglophone nations (U.S., Canada, UK, Australia, New Zealand), which raised concerns about government-sanctioned privacy violations. Regardless of whether one considers Snowden a hero or a villain, the service (or the damage) has been done, and we will have to deal truthfully with its consequences. The high moral ground that the tech companies project when confronted with the Snowden leaks does not appear to match their own cavalier treatment of privacy issues. Under the weight of external threats, and hopefully with enough political interest and participation by ordinary citizens, the government and the tech giants will eventually be encouraged to cooperate and to establish a framework that strikes the best balance between the security needs of the government and the privacy needs of law-abiding citizens. It would be in our national interest for that cooperation to start sooner rather than later and to be openly debated in congressional hearings. Such a framework could also offer a path to cooperation with other nations that use liberal democratic forms of government.

Realistically though, as long as we have countries with competing ideologies and competing agendas, we will have AI weapons. We could only ban these weapons through a different political structure, some kind of world institution, but right now that is unfortunately a utopia. Moreover, we will make the point repeatedly that these autonomous versions of soldiers and weapons are not the most worrisome AI weapons; weapons that attack our national graphs of data would bring far worse destruction. But before we dwell on those more sophisticated systems, let's look at the more mundane autonomous AI weapons of today:






AI on the Battlefield


We continue with this lower level of AI weapons, the ones used on the conventional battlefield: drones, robotic soldiers, and autonomous versions of tanks, airplanes, etc. These lower-level weapons have already been developed and deployed. For example, we learned from the media during the war in Afghanistan that the U.S. possessed a new, secret, and deadly technology for hunting down terrorists, and that those weapons were becoming better with each passing day on the battlefield. No one really knew much about that technology, but it appeared that the weapons were learning in the field! Guess what that novel secret technology was?




Since then, the use of AI in the military has become pervasive. Crewless tanks have appeared, crewless fighter jets are being tested, and even crewless naval ships. In April 2016, the U.S. Navy launched Sea Hunter, a vessel whose purpose is to search for enemy submarines; it can roam the oceans for many months in a single mission, and of course it has no people onboard. The vessel completed sea trials in 2017; in particular, it traveled between San Diego and Pearl Harbor on its own. Swarms of such vessels could be deployed worldwide and equipped not just to report the location of enemy submarines, but to attack them if so ordered by human operators at the base. This is part of a new military doctrine, facilitated by the rise of AI, in which large and expensive ships carrying many crew members see their duties spread over swarms of inexpensive ships like the Sea Hunter.




The U.S. is not alone in developing these kinds of technologies or preparing for them. Every industrialized nation is experimenting with AI weapons; the potential military consequences of AI developments cannot be overlooked by any government. Both China and Russia have openly announced plans to operate large unmanned submarines in the world's oceans, especially within disputed areas such as the South China Sea. Just like the Sea Hunter above, these vessels could travel long distances and go undetected for long periods of time. As an example, China announced that a prototype of such an unmanned submarine completed a record 141-day, 3,619 km voyage in December 2018.

Although the development of AI weapons in the West has assumed that life-and-death decisions will always require human approval, in Russia and China there is less anxiety about using completely autonomous systems. Both countries are actually quite open about the military uses of AI; here is some work done at the Russian Foundation for Advanced Research Projects:




Some recommendations for a U.S. national security policy towards AI are contained in the report Artificial Intelligence and National Security by Greg Allen and Taniel Chan. The authors analyzed four prior cases of transformative military technologies, namely nuclear, aerospace, cyber, and biotech, and then applied the lessons of those cases to make conjectures and recommendations for AI.



Human versus AI in the Decision Loop


Battlefield dynamics will become much more complex with the growing use of AI drone swarms, in which multiple unmanned vehicles exercise autonomous control. When it comes to AI drones fighting AI drones, Western policymakers are generally willing to let unmanned systems make their own decisions. But when it comes to killing enemy humans, U.S. Defense Department policy requires that a human remain "in the loop". That policy will become ever harder to maintain, however, particularly if an enemy's own automated AI systems are making such judgments at much faster than human speed.

Moreover, it should not come as a surprise that in some cases the decision made by humans will be far inferior to the one an AI system would make. The video below, leaked by Bradley Manning, shows humans misidentifying photographers' cameras as AK-47 rifles, and Reuters journalists as enemy fighters. This paragraph is not about Wikileaks or Julian Assange or Bradley Manning, or about whether their actions were justified or treasonous. The point is that an AI system could have been used as an assistant to the human decision, as the technology to identify objects like AK-47 rifles already exists. In fact, as we repeatedly assert in our articles, that hybrid form of intelligence (human + AI) is the one most likely to benefit us in the long run. This conclusion is just as valid on the battlefield, where neither humans nor AI systems have, when working without support from the other, a monopoly on good or bad decisions. (It was not an easy decision to insert that video here; it is upsetting from all points of view. You can see, though, how easy it was in that environment to make that error. The clip is graphic and age-restricted, so you will have to watch it on YouTube, and only after consenting to it.)






Are humans always in the decision loop?


Autonomous weapons designed to kill without any human in the decision loop should clearly be banned. If you work in the AI field, you may join the effort to ban autonomous weapons at autonomousweapons.org; over 4,500 AI researchers have already signed the letter. If you are still in doubt as to why this ban is needed, watch the following video, but beware that it is graphic.




We saw that the U.S. Defense Department requires, anytime loss of human life is possible, that a human be part of the decision-making process. But is this the case with other nations? Could there be AI systems designed with exactly the opposite restriction, namely that no human should be involved? What if such a system could wipe us all out? Well, such a system already exists and is currently active in Russia. After the U.S. announced its withdrawal from the INF Treaty in February 2019, Russia re-activated its Dead Hand system, an AI system that gathers data from various sensors and launches a coordinated attack on the U.S. without any human in the loop. What are those sensors? Seismic data pointing to a nuclear explosion on Russian soil, the disabling of certain communication channels used by humans to control Dead Hand, increases in radioactivity levels, overpressure sensors, etc. There is no going back after Dead Hand takes over, no way for a human to override it. Now, it does not take long to figure out that such an AI system is very difficult to test; the possibility of misinterpreting the data coming in from all these sensors is far from zero. Such destructive AI weapons will become more common as countries design better and quicker AI systems to neutralize the old ones. This AI race could very easily spiral out of control. Listen to the explanation of why Dead Hand expressly bans any human from the decision loop, and you will see why it may make sense from a Russian point of view:






AI-Based Cyber Attacks


We will now move from AI on the battlefield towards bigger targets for AI warfare, namely cyber attacks. Cyber attacks have been around ever since we started doing business in cyberspace. Despite their image, many hackers are talented, curious, and well-educated people. They exchange information on the dark web, and continuously learn new ideas and new techniques. Breaking into various systems is for many of them an irresistible intellectual challenge; breaking into defense or intelligence agencies, or into big banks, are among the biggest of prizes. Moreover, on the dark web today, anyone can buy a tailor-made virus guaranteed not to be detected by the 10 or 20 major antivirus tools. AI will change all this bad-boy cuteness, but for now let's enjoy it:




Malware and identity theft kits are relatively easy to find and inexpensive to buy on dark web exchanges. AI-enabled attack kits are on the way, and we can expect them to be readily available, though at higher prices, in the next few years. But institutions are increasingly using AI to defend against such attacks. AI has been the most exciting technology used in cybersecurity, for the following reason. Before the advances of AI, most security systems focused on recognizing attacks by matching them against signatures of attacks identified in the past, stored in a database. Think of a signature as a pattern associated with an attack on a computer or a computer network. It can be a series of bytes in a file, a sequence of bytes in the traffic going into a port on a server, an unauthorized attempt to gain access to a resource within a computer or on the network, etc. In any case, it is a pattern stored in a database, checked before access is granted to the party requesting it. These signatures were the core of cybersecurity systems until recently.
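To make the idea concrete, here is a minimal sketch of signature-based detection, assuming the signatures are simple byte patterns looked up in incoming traffic; the patterns and names below are hypothetical examples, not real malware indicators.

```python
# Minimal sketch of signature-based detection: known byte patterns
# (the "signatures") are looked up in an incoming payload.
# All signatures here are hypothetical, for illustration only.

KNOWN_SIGNATURES = {
    b"\x4d\x5a\x90\x00\x03": "suspicious-dropper-family-A",   # hypothetical
    b"GET /admin.php?cmd=": "web-shell-probe",                # hypothetical
}

def match_signatures(payload: bytes) -> list:
    """Return the names of all known signatures found in the payload."""
    return [name for pattern, name in KNOWN_SIGNATURES.items() if pattern in payload]

if __name__ == "__main__":
    packet = b"...GET /admin.php?cmd=whoami HTTP/1.1..."
    hits = match_signatures(packet)
    print("blocked" if hits else "allowed", hits)
```

The obvious limitation, which the next paragraph addresses, is that a pattern can only be matched if someone has already seen the attack and written the signature.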

But the so-called zero-day attacks, the most dangerous attacks, which are mentioned a few times on our website, happen before security analysts have a chance to study them and develop a signature for them. And that is where AI techniques are being used with increasing success. We can think of the signatures as labeled data, i.e., humans have studied the attacks and labeled them with a signature. The AI used against zero-day attacks, by contrast, is based mainly on unlabeled data, and unsupervised learning algorithms are used to detect unusual patterns in that data. Because defenses are becoming more sophisticated and AI-based attacks require more expertise, attackers will need larger budgets, and the competition will harden. So the attacks of the future will be carried out mostly by state-level actors or highly organized crime groups, not by intellectually curious individuals. The cuteness of the video above will vanish in the age of AI, when we will be left to face state actors and organized crime.
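Here is a minimal sketch of that unsupervised approach, assuming network traffic has already been summarized into a few numeric features; the feature set, numbers, and model choice are illustrative assumptions, not a production design.

```python
# Sketch of unsupervised anomaly detection for zero-day style attacks.
# Assumes each session is summarized as numeric features
# (e.g., bytes sent, request rate, distinct ports) -- illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# "Normal" historical traffic: 1000 sessions described by 3 features.
normal_traffic = rng.normal(loc=[500, 20, 3], scale=[50, 5, 1], size=(1000, 3))

# No labels or signatures are needed; the model learns what "usual" looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# A new session that looks nothing like past behavior.
new_session = np.array([[50_000, 400, 60]])
print(model.predict(new_session))  # -1 means "anomalous": worth an analyst's attention
```

The point is not this particular model, but the shift from matching stored patterns to flagging behavior that deviates from everything seen before.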

In May 2018, the New York Times reported that researchers in the U.S. and China had successfully commanded artificial intelligence (AI) systems developed by Amazon, Apple, and Google to do things such as dial phones and open websites without the knowledge of the AI systems' users. It is a short step to more nefarious commands, such as unlocking doors and transferring money. And while Alexa, Siri, and Google Assistant may be the most widely used AI programs in operation, they are hardly the only ones. That is worrisome because better-funded attackers already cause plenty of damage without AI. Consider the headlines about WannaCry and NotPetya, as well as the cyberattacks against Equifax, Uber, Aetna, and Deloitte, just over the past two years. The U.S. blamed a hacking entity known as the Lazarus Group, which works on behalf of the North Korean government, for unleashing the so-called WannaCry cyber attack. It crippled hospitals, banks, and other companies across the globe.

This addition of AI power to cyber attacks is a huge problem for the U.S. We have a severe shortage of data scientists and a severe shortage of cybersecurity experts, so the shortage of people with both skills, which would be needed to develop effective defensive systems, is of compound severity. To make matters worse, the U.S. population is a prime target for such attacks because of the perceived wealth of U.S. citizens compared with most of the rest of the world. And that is not all: the U.S. is lagging behind in its adoption of e-government, and especially of more secure ways of accessing data. Our current graphs allow access based on passwords, which is a completely insufficient method of verifying access. If you obtain someone's Facebook password, or their LinkedIn password, you have access to a wealth of information. The situation with the U.S. social security number is even more scandalous: if your SSN is known, it opens all the gates to the governmental graphs. The damage that can be done, and is being done, is severe, especially for older citizens. These identity-related issues are too important to skim over, so we analyze them in more detail in the article Identity and Trust.

Among the most damaging cyber attacks are those on the industrial infrastructure of an entire nation. These attacks can incapacitate power grids, disrupt air traffic control, switch trains onto unwanted tracks causing crashes or derailments, sabotage nuclear or petrochemical plants, shut down factories, etc. In fact, any piece of infrastructure that has at least one connection to the Internet is vulnerable to such attacks. The best-known cyber attack targeting critical infrastructure was against Iran's nuclear facilities in 2010, and it shows the potential of such weapons:






AI Cyber Attacks on the national graphs


Just like any software service in cyberspace, social graphs are vulnerable to cyber attacks. Some of the attacks on the social graphs are still carried out by less sophisticated means. Here is a description of such an attack, affecting 2% of Facebook users, which in this case was probably mounted with very small funds.




But as we shall see below, sophisticated attacks on the social graphs have more powerful consequences at the national level than, say, an attack on banks or government sites. Let's look at two examples. A social media AI system, system A, has at its core a graph of people who discuss and share various data of common interest. It can be attacked by another AI system, system B, as follows. First, B adds new accounts to A, increasing the size of A's graph. It then posts data between the nodes of the subgraph it has created; that data represents an agenda that B wants to push. Once this is done, there are many ways to establish connections between the nodes of the old graph and the nodes of the added subgraph. The target system A will end up optimizing for goals that are now diverted and tilted towards B's agenda. To make things a bit more concrete, take A to be Facebook and B to be Russia's intelligence services. Russian intelligence services see AI as an indispensable tool for manipulating information in cyberspace, with so-called troll farms suspected of gearing up for another automated social media campaign to sow disinformation within the U.S. social graphs during the 2020 election cycle.
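A toy sketch of this injection, using a random graph as a stand-in for A's real social graph, may help fix the mechanics; the node names, sizes, and bridging strategy are invented for the illustration.

```python
# Toy illustration of the graph-injection attack described above:
# B adds a cluster of fake accounts to A's social graph, seeds it with
# agenda-driven content, then bridges it to well-connected real users.
import networkx as nx
import random

random.seed(0)

# System A's existing social graph (a random scale-free stand-in).
A = nx.barabasi_albert_graph(1000, 3)

# Record the most-connected real accounts before the injection.
real_hubs = sorted(A.degree, key=lambda kv: kv[1], reverse=True)[:20]

# Step 1: B creates a tightly connected subgraph of fake accounts,
# whose posts all push B's agenda.
fake_nodes = [f"fake_{i}" for i in range(50)]
A.add_nodes_from(fake_nodes, agenda="B")
A.add_edges_from((u, v) for u in fake_nodes for v in fake_nodes if u < v)

# Step 2: B bridges the fake cluster to influential real users, so that
# A's ranking and recommendation optimization starts surfacing B's content.
for fake in fake_nodes:
    hub, _ = random.choice(real_hubs)
    A.add_edge(fake, hub)

print(A.number_of_nodes(), A.number_of_edges())
```

Nothing in A's code was broken; only its graph, the data it optimizes over, was quietly reshaped.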

As long as the correctness of our social media software is verified by QA teams and not by formal correctness proofs, there will be bugs, and there will be people who exploit those bugs. So here is an exercise for you, the reader. Assume that the following very simple AI system is designed by someone sitting somewhere in eastern Pakistan. They have found a way to hack into Twitter, and they are waiting for the opportunity to post a threatening tweet appearing to come from the U.S. President. Now recall from recent news that tech-savvy private operators are able to detect when Air Force One is in the air (something which hopefully will be dealt with appropriately in the very near future); they were able to detect the plane flying over the United Kingdom on its way to a surprise visit to the troops in Iraq. The AI system we are talking about simply monitors the level of tension between the U.S. and Russia by scanning media reports and whatever other sources of information it may want. Say that the level goes above a certain threshold, perhaps because of an incident at the border with Ukraine or the Baltic states. The system waits for the conjunction of the two events, that the level of tension is above the threshold and that Air Force One is in the air. At that moment it posts the following tweet:




Probably the first thing anyone would do is check whether the two events in the tweet are true. Are we in the middle of a tense period with Russia, and is Air Force One in the air? You would be able to get confirmation from media sources, which would immediately buzz with coverage, or from your friends and relatives. What would you do in that situation, and, in more general terms, what other consequences of that tweet are possible? The whole exercise is meant to show how much destruction such an inexpensive attack on our social graphs could cause; a minimal sketch of the trigger logic follows below, after which we move on to the next level of attacks.
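Here is that sketch of the trigger logic in the thought experiment, just a threshold and a conjunction of two public signals; both data-source functions and the threshold value are hypothetical placeholders, not references to any real feed or tool.

```python
# Sketch of the trigger logic in the thought experiment above:
# wait for the conjunction of two publicly observable conditions, then act.
import time

TENSION_THRESHOLD = 0.8  # arbitrary illustrative value

def estimate_tension_level() -> float:
    """Hypothetical placeholder: a 0..1 score derived from scanning news feeds."""
    return 0.0  # stub

def air_force_one_in_flight() -> bool:
    """Hypothetical placeholder: inferred from public flight-tracking chatter."""
    return False  # stub

def act() -> None:
    """Placeholder for the attacker's action (in the scenario, the fabricated tweet)."""
    print("both trigger conditions met")

def monitor(poll_seconds: int = 60) -> None:
    # Wait for the conjunction of the two events, then act once.
    while True:
        if estimate_tension_level() > TENSION_THRESHOLD and air_force_one_in_flight():
            act()
            return
        time.sleep(poll_seconds)
```

The unsettling part of the exercise is precisely how little machinery is needed.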

National graphs are physically housed in large data centers. Physical security at these centers is taken very seriously and we have already seen examples of this type of security in the background article Graphs of Data. Let's refresh that viewpoint with one more case in point, this time using Google's data centers as an example. (We will take this type of physical security for granted for the remainder of the article.)


An AI system is a software system that optimizes for a set of given goals; the success of this optimization depends on the size and the quality of the data being fed into it. Because data is of such importance to an AI system, it is also its Achilles' heel. Software can be attacked in many ways, but an AI system can now be attacked in a new and much more powerful way: by modifying the data that it learns from. It is very difficult to defend against this new kind of attack, because that data usually comes from many different sources, without being curated or controlled by any single authority. The truthfulness of this data is therefore of central importance if our AI systems are to develop knowledge that is beneficial to us humans. So developing an AI system has to take into consideration the security of the data sources on which the AI system operates.
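A toy demonstration of this kind of data poisoning, using synthetic data and a deliberately simple model, shows how tampering with the training data alone degrades a system whose code is untouched; the dataset, the flip rate, and the model are all illustrative assumptions.

```python
# Toy demonstration of data poisoning: flipping a fraction of training
# labels (the kind of tampering a data-source attack enables) degrades
# the model even though the learning code itself is unchanged.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips 30% of the training labels at the data source.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Real-world poisoning is subtler than random label flipping, but the lesson is the same: protecting the data pipeline is as important as protecting the code.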



Russia and our national graphs


(Tomorrow, 06/16/21, Presidents Biden and Putin meet in Geneva. In a widely quoted statement, President Biden said that his objective is a 'predictable and stable relationship with Russia'. There are many reasons for mentioning this objective in our context. In what follows, the original version contained some strong words of criticism regarding the Russian interference in the 2016 U.S. elections. That criticism is still valid and we leave it unchanged. But we should carefully emphasize that the criticism is by no means directed at the Russian people, but rather at the intelligence services within the Russian government that orchestrated the 2016 interference.

With the hope that the meeting tomorrow will start a new period of mutual respect and cooperation and will lead to a peaceful resolution in Ukraine, we insert a clip which is dear to Russian audiences and with it we remember that the Soviet Union, which at the time included Ukraine as its most important republic after Russia, lost about 27 million of its people in World War II, civilian and military.)




In the article Graphs of Data we introduced the national graphs, the idealized merging of all the social graphs with governmental graphs, financial credit graphs, and medical graphs. We can look at China's Social Credit System as the equivalent national graph in China. It should be clear that these graphs will be primary targets of cyberattacks, both from within and from outside national boundaries. The components of the U.S. National Graph (Facebook being the most notorious case) have already been under attack from the outside. We now know that the most interesting information attached to those graphs is obtained with AI techniques. The most potent way to attack a nation is by cyber psychological warfare, in other words by attacking its national graph. Where does Cambridge Analytica fit in this?




After Senator Dianne Feinstein gave us a full review of the relationship between Professor Aleksandr Kogan, Cambridge Analytica, and the Russian Internet Research Agency in St. Petersburg, it is worth pondering the full extent of the Russian interference in the 2016 U.S. elections and its more technical aspects. As we will see next, this action had links with many more parties, such as the RT and Sputnik media outlets and the Russian government itself.

A technique called psychographic profiling and segmentation was applied to the data Cambridge Analytica obtained from Facebook users, first to profile U.S. voters based on different personality traits, values, attitudes, and interests. (These profiles contained very personality-specific items, such as personal neuroses, with the full intent of targeting these individuals' fears and hopes with microtargeted information.) The profiles were then segmented into targetable groups, for the purpose of serving these microtargeted pieces of information through online ads, especially in the battleground states. There is good reason for understanding all these aspects of the interference, as it is highly unlikely that this was a one-off experiment that will not be repeated in the future.
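As a generic illustration of the "profile, then segment" step, here is a minimal clustering sketch on synthetic personality-trait scores; it is not a reconstruction of Cambridge Analytica's actual models, and the traits, numbers, and cluster count are assumptions made for the example.

```python
# Generic illustration of profiling followed by segmentation: cluster
# users by personality-trait scores so each cluster can be addressed
# with its own messaging. Synthetic data, illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Each row: one user's scores on five traits (e.g., a "Big Five"-style profile), 0..1.
trait_scores = rng.random((5000, 5))

# Segment the population into 8 targetable groups.
segments = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(trait_scores)

# Each segment would then receive ads tuned to its dominant traits.
for k in range(8):
    members = trait_scores[segments == k]
    print(f"segment {k}: {len(members)} users, mean traits {members.mean(axis=0).round(2)}")
```

The danger described in this section comes not from the clustering itself, which is ordinary marketing technology, but from pointing it at illicitly obtained profiles to exploit fears and neuroses at scale.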




In 2008, an organization called the "Kick Them All Out Project and Fire Them All Campaign" stated its goal: "The project to restore sanity and accountability to the U.S. government by eliminating the two most corrupting influences, the corporate lobbyists and the career politicians that have turned the government over to them." The fact that Congress is viewed negatively by a majority of Americans is no secret, but such feelings will certainly be targeted by outside interests, who are extraordinarily versatile in psychological warfare, the former KGB being among its most notorious practitioners:




Now imagine what implementing this seemingly old-fashioned ideological/psychological warfare with AI techniques would entail! We have already seen above that the U.S. security agencies clearly identified such incidents during the 2016 elections, the main social platform under attack being Facebook. It is also clear that the U.S. agencies have been dissecting the elements of those kinds of psychological attacks, trying to catch up. The video above about psychological warfare may look dated, but its main ideas are not. As we have stressed many times before, AI will play the central role in this psychological warfare. Here is Dr. Fiona Hill, the former Senior Director for Russian and European Affairs at the U.S. National Security Council, testifying in the November 2019 House impeachment hearings and explaining the more general Russian intent of delegitimizing not any particular candidate, but whoever the winner of the 2016 election ended up being.






National Security Commission on AI


The John S. McCain National Defense Authorization Act for Fiscal Year 2019 became public law on August 8, 2018. The bill established the National Security Commission on Artificial Intelligence (NSCAI) and requested that an initial report be submitted by the commission within 6 months. The initial report has been published, and the full interim report is now available on the NSCAI site. The interim report identifies five areas recommended for increased focus by the U.S. government with respect to AI: R&D investments in AI, national security applications of AI, training and recruiting AI talent, protecting and building upon U.S. technical advantages in AI, and promoting global AI cooperation.

According to the report, potential threats in the AI domain include: the erosion of U.S. military advantage, risks to strategic stability, the diffusion of AI capabilities, disinformation and the undermining of the nation's democratic system, erosion of individual privacy and civil liberties, accelerated cyber attacks, new techniques that could open up vulnerabilities, and accidents. How does this report relate to the Executive Order on Maintaining American Leadership in Artificial Intelligence of February 11, 2019? The executive order applies to a larger set of organizations, not just the National Security Commission on AI. For example, the White House's Office of Science and Technology Policy has released a set of principles that the various agencies must meet when drafting AI regulations. Here is the draft of those principles, on which the public is invited to comment.

In November, the Department of Defense created the Joint Artificial Intelligence Center (JAIC) to speed up the use of AI in military operations. The agency plans to provide $1.7 billion in funding to establish the AI center over the next five years. The new JAIC website will contain a wealth of information about the future directions of military uses of AI, and referring the reader to it for further information is a fitting end to our discussion.



Conclusion: Back to Character Fortification


The establishment of the NSCAI and the JAIC are steps in the right direction, but a more comprehensive and unified approach to the challenges of AI, inclusive of its effect on the job market and the economic security of people, also seems to be needed. The image of opposition to developing AI for military applications does not fully capture the multitude of opinions in the AI community about future cooperation.



We have seen in the previous sections how psychological warfare can be unleashed on us, and how research can be brought to bear on our limbic system, especially in our most vulnerable times. So do we have a way to fight back that does not depend solely on governmental action? History shows us that we do, and the idea of human character fortification is needed again; the same old answer has yet to lose its punch. Whether faced with a job loss or with external threats of war, humans repeatedly find out that fortifying not their castles or national borders, but their own character, is the best way to defend themselves. Here is a concise motivation for why this inner fortification is important, coming from none other than the senior officer who oversaw the operation leading to bin Laden's death: