To what extent will the employment of Artificial Intelligence in Remotely Piloted Aircraft Systems help us overcome the existing vulnerabilities in modern warfare?

“Predicting the future isn't magic - it's artificial intelligence.”

— Dave Waters

As we stand at the threshold of a new arms race, it is hard to believe that the father of Artificial Intelligence (AI), John McCarthy, envisioned its employment in remotely piloted aircraft systems (RPAS) when he coined the term at a conference at Dartmouth College in 1956.[i] Just as the space race unfolded between the former Soviet Union and the United States of America during the Cold War, today, whether we like it or not, a new ‘Drone Race’ has captured the world’s attention.[ii] Unlike the Cold War, however, this race is no longer between East and West but has emerged globally. Arguably, this global phenomenon is rooted in the effectiveness of RPAS in solving security problems from a distance, projecting air power within a cyber environment, and ensuring minimum collateral damage.[iii] In essence, warfare has become increasingly remote in the twenty-first century and has expanded across all three levels of command. Consequently, many states and even non-state actors such as the Islamic State of Iraq and the Levant (ISIL) have scrambled to acquire advanced RPAS.[iv] The formation of the “Unmanned Aircraft of the Mujahedeen” in 2017, for example, heightened the sense that immediate responses were needed, not only among the states leading this race, such as the United States, Israel, China, Russia, and Turkey, but also within NATO. Although the shift to AI remains somewhat secretive for most states, NATO has already defined this area as a priority for its relevance to defence and security.[v]

Literature Issues

Considering that AI and RPAS are two of the major emerging technologies of our decade, we must define these terms accurately in order to understand both fields.[vi] Unfortunately, since both technologies are applied in many scientific domains, such as remote sensing, medical robotics, agriculture, engineering, and the military, they lack established, unambiguous definitions and demarcations. Though for other scientific fields this is a minor issue, from a military point of view it is not. No matter how advanced a piece of military equipment is, it must in all cases operate under a legal framework in accordance with the principles of just war theory. The lack of definitions for AI and RPAS fuels bias concerning their legitimacy: their implications are not apparent, and the legal framework defining their concept of operation has yet to be established. As far as AI is concerned, the American roboticist and robot ethicist Ronald Craig Arkin argues that the most challenging step in understanding autonomous systems lies in identifying what is and what is not an autonomous system.[vii] For unmanned aircraft specifically, the existing academic literature often offers differing, contested terms depending on the aircraft’s utility, weight, endurance, and so on. However, the purpose of this article is to emphasize the utility of AI in RPAS and provide evidence for why AI is a necessary tool in unmanned aviation and in modern warfare. Hence, despite the wide use of the term ‘drone’, I will use the term ‘remotely piloted aircraft system’ (RPAS) throughout this article for two reasons. Firstly, by definition, the word ‘remotely’ implies that these aerial platforms rely on human inputs, and no matter how advanced they are, the pilot is always in command, albeit on the ground. Secondly, the effectiveness of AI in RPAS lies in employing AI not only in the airborne platforms themselves but also in all the subordinate systems that help leaders make decisions: the sensors, the exploitation software analysts use on the ground, the communication system in the cyber sphere, logistics, maintenance, and interoperability with other manned or unmanned vehicles.

CONOPS for Autonomous Unmanned Aircraft Systems 

In a recent interview, Major D. Papathanasiou, a former RPA pilot and the first Deputy Commander of the Intelligence Support Squadron at NATO Alliance Ground Surveillance Forces (NAGSF), stated that the driving force in setting an effective concept of operations (CONOPS) is a fusion of two critical elements. The first is knowledge of all the technical characteristics of the aerial platform, based on the manufacturer’s specifications and under all weather conditions. The second is the study of the platform’s performance through the records and statistics of each flight over an adequate period of time, which allows leaders to make vital assessments of its effectiveness and limitations in flight. Years of operating RPAS have helped not only to identify the skills required to operate unmanned platforms effectively but also to record the vast number of vulnerabilities these platforms exhibit on the battlefield. The leading RPAS-manufacturing states, such as the United States, Israel, Russia, China, and lately Turkey, have already employed advanced RPAS in campaigns over Iraq, Afghanistan, Libya, and Nagorno-Karabakh for intelligence, surveillance, reconnaissance (ISR), and targeting purposes.[viii] As a result, it is more feasible for these states to identify the existing vulnerabilities of RPAS that have performed on the battlefield, such as the MQ-1, MQ-9, RQ-4, and TB2, and to employ AI solutions to overcome them.
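
To make the second element concrete, the minimal sketch below mines a hypothetical archive of flight records for one such statistic, the abort rate under each weather condition. The record fields and figures are invented for illustration and do not reflect any real NAGSF data.

```python
# A minimal sketch of CONOPS-building from flight records, assuming a
# hypothetical log format: group sorties by weather and compute abort rates.
from collections import defaultdict

flight_log = [  # invented records, not real sortie data
    {"sortie": 101, "weather": "clear", "hours": 18.5, "aborted": False},
    {"sortie": 102, "weather": "icing", "hours": 2.1, "aborted": True},
    {"sortie": 103, "weather": "clear", "hours": 20.0, "aborted": False},
    {"sortie": 104, "weather": "high_wind", "hours": 6.3, "aborted": True},
]

def abort_rate_by_weather(records):
    """Fraction of sorties aborted, grouped by weather condition."""
    totals, aborts = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["weather"]] += 1
        aborts[rec["weather"]] += rec["aborted"]
    return {w: aborts[w] / totals[w] for w in totals}

print(abort_rate_by_weather(flight_log))
# {'clear': 0.0, 'icing': 1.0, 'high_wind': 1.0}
```

Statistics like these, accumulated over an adequate period, are the raw material from which leaders can assess a platform’s effectiveness and limitations.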

According to Major Thomas Meeks, a former MQ-1 Predator pilot, it makes sense to separate technical skills from judgment skills in order to understand the autonomy of RPAS.[ix] On the one hand, autonomous flight already exists, with algorithms taking over pilots’ technical skills such as taxiing, taking off, navigation, and landing. For example, the RQ-4 Global Hawk is an advanced, fully autonomous platform that runs a series of functions automatically. The Office of the Federal Register, responsible for providing the public with regulations, orders, documents, and legal notices issued by the US government, describes the RQ-4 Block 30 as capable of conducting fully autonomous operations once programmed by the crew on the ground beforehand.[x] On the other hand, when it comes to judgment skills, such as deciding whether a target is legitimate, the replacement of a human decision maker by a ‘machine learning algorithm’ raises ethical concerns. The degree of prejudice is so high that we often hear that AI will transform RPAS into lethal autonomous weapon systems (LAWS). Though the term carries a sense of irony, this is an overly bold and sweeping assertion, since no established scale of autonomy for RPAS yet exists. The need to understand the fundamental principles of autonomy and lethality concerning RPAS is therefore growing, since little empirical research supports this fusion. It is essential for everyone engaged in this field to understand what autonomy is; only then will it become easier to understand what lethal autonomy is. Just as there are levels of autonomy in other emerging technologies, such as medical robotics, it is important to define equivalent stages of autonomy in the unmanned aviation arena. This is the only way for us, as a society, to define the level of autonomy we are willing to accept and to grant or withhold legitimacy accordingly. Regarding the targeting process in particular, in a recent interview organized by the Young Professionals in Foreign Policy at NATO, Michael Callender, the Head of the Airspace Capabilities Section, made an important point: all NATO member states follow the same targeting process based on the Law of Armed Conflict, which respects human rights.[xi] In essence, all the targets assigned to RPAS belong to ‘target lists’ supervised by legal advisors and policy makers; none of them are targets of opportunity as seen in action films. As a result, there is no practical difference between a strike waged by an operator and one waged by an autonomous unmanned aircraft, as long as human hands control the ‘target list’. However, ethical issues still arise whenever collateral damage occurs.
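
The principle that ‘human hands control the target list’ can be expressed in a few lines of code. The sketch below is a hypothetical illustration, not a real NATO interface: whether an operator or an autonomous system proposes a strike, engagement is only authorized for a positively identified target that human legal and policy review has already placed on the vetted list.

```python
# A minimal sketch of human-controlled target-list gating. The list contents,
# identifiers, and fields are invented for illustration.
APPROVED_TARGET_LIST = {"T-0017", "T-0023"}  # modified only by human legal/policy review

def authorize_engagement(target_id: str, positive_id: bool) -> bool:
    """Authorize a strike only for a positively identified, list-approved target."""
    return positive_id and target_id in APPROVED_TARGET_LIST

# A proposed target of opportunity is always refused, regardless of who proposes it:
assert authorize_engagement("T-0099", positive_id=True) is False
assert authorize_engagement("T-0017", positive_id=True) is True
```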

Employing AI in ISR and Targeting Missions

Cast into a new era with new threats such as terrorism and cyber-attacks, commanders have an insatiable demand for intelligence gathering, and political leaders for immediate responses. Unfortunately, the way current ISR products are collected from RPAS is a highly time-consuming process that involves a vast number of participants and delays decision-making.[xii] In the new Drone Race, it is abundantly clear that the competition is not only among nations but internal as well. Different manufacturers, such as Lockheed Martin, Northrop Grumman, General Atomics, and Boeing, produce different types of RPAS, each with different characteristics, even within the same force or alliance such as NATO. For example, different types of RPAS operate at the tactical, operational, and strategic levels across the Alliance. While orchestrating such a formation on the battlefield is a challenging task for any decision maker, since different RPAS offer different capabilities, AI systems could propose solutions within seconds or hours.
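
One way to see how an AI system could propose such an orchestration within seconds is to frame it as a classical assignment problem. The sketch below uses SciPy’s Hungarian-algorithm solver to match platforms to mission tasks at minimum total cost; the platforms, tasks, and cost figures are invented for illustration, whereas a real planner would also weigh endurance, sensors, weather, and airspace constraints.

```python
# A toy sketch of heterogeneous-RPAS orchestration as an assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

platforms = ["RQ-4 (strategic ISR)", "MQ-9 (armed, medium-altitude)", "TB2 (tactical)"]
tasks = ["wide-area surveillance", "armed overwatch", "battle-damage assessment"]

# cost[i, j]: notional unsuitability of platform i for task j (lower is better)
cost = np.array([
    [1, 9, 6],
    [7, 1, 4],
    [8, 5, 2],
])

rows, cols = linear_sum_assignment(cost)  # Hungarian-algorithm solver
for i, j in zip(rows, cols):
    print(f"{platforms[i]} -> {tasks[j]}")
```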

Beyond the flight itself, RPAS rely on their own independent collection, processing, exploitation, and dissemination systems for the analysis conducted on the ground. The majority of analytical tools that exist today follow a manual pattern that costs time and human resources. The employment of AI in suitable systems on the ground could bypass many of the standard operating procedures (SOPs) staff are obliged to follow during a mission. For example, there would be no need for operators to work in shifts, for imagery analysts or surveillance operators to fuse hundreds of raw images with multiple sources of intelligence, or even for Mission Directors to supervise the mission. At every one of these steps, the training status and experience of the current staff play a crucial role. In a multinational environment such as NATO, things are more complicated, since personnel from different countries have different professional backgrounds and levels of experience. The shift to AI, or more specifically to ‘machine learning’, is instead expected to bring excellent training and economic value to all levels of command within the Alliance: the system may acquire the advanced capability to train both the humans engaged in the process and the aircraft itself in conducting warfare.[xiii] As companies develop RPAS with revolutionary capabilities, we may also witness swarms of tiny or huge RPAS flying in Combined Air Operations (COMAO). However, the level of autonomy required for a swarm to fly safely and avoid collisions is definitely very high.
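
As a hypothetical illustration of what such machine-assisted exploitation might look like, the sketch below has a model pre-screen incoming frames and route only low-confidence detections to a human analyst, instead of analysts reviewing every raw image in shifts. The threshold, frame names, and scores are stand-ins for the output of a trained classifier.

```python
# A minimal sketch of confidence-based triage in an exploitation workflow.
from typing import Callable

REVIEW_THRESHOLD = 0.85  # below this confidence, a human analyst must review

def triage(frames: list[str], score: Callable[[str], float]):
    """Split frames into automatically reportable and analyst-review queues."""
    auto_reported, analyst_queue = [], []
    for frame in frames:
        confidence = score(frame)
        (auto_reported if confidence >= REVIEW_THRESHOLD else analyst_queue).append(frame)
    return auto_reported, analyst_queue

# Hypothetical confidence scores standing in for real classifier output:
fake_scores = {"frame_001": 0.97, "frame_002": 0.42, "frame_003": 0.91}
auto, queue = triage(list(fake_scores), fake_scores.get)
print(auto)   # ['frame_001', 'frame_003'] reported automatically
print(queue)  # ['frame_002'] escalated to a human analyst
```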

In the realm of AI, ‘artificial consciousness’ is the desirable level of autonomy required to pass the Turing test, or in other words, intelligent behaviour indistinguishable from that of a human. By following this approach, we can avoid the singularity, the theoretical point at which AI-driven technological growth spirals out of control with irreversible implications.[xiv] From a human resources perspective, ‘artificial consciousness’ could overcome existing vulnerabilities that significantly affect the outcome of a mission, including fatigue, fear, stress management, finance, logistical support, weather conditions, and human error. From an analytical point of view, the unmanned platform could interact directly with ‘exploitation software’ and identify targets on the ground by their shape, dimensions, and shadows, by day or night, as well as retrieve intelligence reports from other agencies that may record the order of battle. In all these processes, the more the human factor is engaged, the more likely errors are to appear throughout the chain of command. For ISR missions this may matter less; but when targeting is the primary task, human error can raise ethical concerns regarding collateral damage and the legitimacy of the target. In either case, a significant error can cause political turmoil.

Arguably, employing ever more sophisticated RPAS that meet the highest standards of distinction and proportionality may bring a heady mixture of excitement to strategists at the strategic level. Unfortunately, the picture usually looks different at the tactical level. Indeed, RPA such as the MQ-9 and MQ-1, even in their brief lifespans, have exhibited outstanding performance on the battlefield, and their role in providing close air support and air interdiction to troops fighting on the ground is significant. However, accurately identifying the target and respecting the fundamental right of adversaries to surrender, if they wish to, raise ethical concerns and remain open questions. Unfortunately, the world is replete with examples of strikes that have cost the lives of non-combatants. In 2002, for example, one of the Predator’s first strikes killed an Afghan scrap metal dealer who merely looked like Osama bin Laden.[xv] In response to articles published in The Washington Post a few days later, the Pentagon claimed that the target was not innocent and that the attack therefore had some justification.[xvi] Nevertheless, even if the person was guilty, this does not mean that operators should act as judges in the courts. Moreover, the Pentagon never mentioned whether the action was a pre-emptive strike against an imminent threat. If that was the case, actions like this might be legitimate, since international law gives states the authority to act to prevent attacks.[xvii]

For suspects not included in ‘target lists’, the ‘artificial consciousness’ of an autonomous aircraft able to operate and monitor its targets for days could itself cast doubt on the legitimacy of targets on the ground, since such aircraft would have to operate under a legal framework. By monitoring their targets for days, autonomous RPA would become increasingly ubiquitous and able to build up whole patterns of life for their targets. Then, with access to subordinate systems on the ground or via the internet, they could propose to decision makers whether further intelligence is required for identification. Additionally, dedicated software should be capable of progressively developing interoperability with the whole autonomous system that supports a mission, including satellites, databases, and weaponry.[xviii] At the final stage, the system should deliver the final authorization to the decision maker while sustaining the accuracy of the latest available intelligence. When political leaders decide to execute pre-emptive strikes against imminent threats, they risk being accused of commencing a preventive war; the line between the two actions is sometimes blurred. Under extreme circumstances, however, political leaders decide to employ offensive tactics that often result in civilian casualties or other collateral damage in residential compounds. Such actions, often characterized as a ‘necessary evil’, draw criticism from the broader public. An autonomous system with ‘artificial consciousness’ could accordingly raise concerns about the use of lethal force against specific individuals, or even reconsider attack plans, suggest alternative solutions, and present the potential implications of every action.

To those who claim that the implications of such a strike would be catastrophic, it should be remembered that modern RPAS cause minimal collateral damage, because their current capability only allows small-scale strikes with light payloads such as Hellfire missiles. However, this might change soon. It may not be realistic to imagine militant groups such as Hezbollah or terrorist organizations such as ISIL carrying out unmanned missions with nuclear weapons loaded underneath the fuselage; but a scenario in which adversaries drop chemical weapons and carry out deadly attacks should keep member states on constant alert.

The Employment of AI against Cyber Threats

A decade ago, John Naughton, professor of the public understanding of technology at the Open University and senior researcher at the University of Cambridge, described the question of how to counter a cyber-attack as the five-hundred-million-pound question.[xix] Ten years later, cyber-attacks are no longer a rare phenomenon originating with individuals. In many cases they have turned into cybercrimes committed by terrorists, hacktivists, or even states.[xx] Although an attack with cyber roots may begin as a mere misdemeanour, its implications nowadays can become severe threats to national or international security. At that level, most cyber actions are executed not by individuals but by groups of skilful, well-trained professionals working to a well-planned strategy. In general, just like RPAS and AI, not much is known about cyber threats, since the boundaries of geography, legislation, and ethics are still debated.[xxi]

The case study of the American RQ-170, which was brought down by Iranian ‘cyber forces’ in 2011, is a unique incident to which all stakeholders in the unmanned aviation industry should pay attention, because history often tends to be cyclical rather than linear. Undoubtedly, a decade ago the technology behind the sophisticated RQ-170 belonged to the world’s elite, in contrast to today’s RPAS. Although Iran’s cyber forces had no recorded history of violent actions, this cyber-attack was a victory: Iran managed to acquire American technology by ‘fighting’ anonymously in the cyber domain, jamming the aircraft’s and satellite’s signals. At least, this is what the Iranian government claimed when it successfully landed the RQ-170 near the city of Kashmar.[xxii] Iranian forces thereby possibly managed to decode classified technology and expose the advanced cyber capabilities of the West. By employing AI in RPAS, an aircraft’s ‘artificial consciousness’ would not need navigation support from GPS satellites, and any adversary effort to jam its link would not flourish: with AI, an autonomous unmanned aircraft would be able to fly on its own, possibly using terrain recognition or even visual navigation under visual meteorological conditions (VMC). In a worst-case scenario, had that RQ-170 been armed, the hackers would have had one more option to weigh: whether to fire its ammunition at its normal operators in retaliation, or even to strike civilian installations such as hospitals, residential areas, schools, or government agencies. Even in this case, where the RPA carried no ammunition underneath its fuselage, dubious regimes or terrorist groups with similar cyber capabilities could acquire an RPA and guide it on suicide missions into residential areas; even unarmed, an RPA in the wrong hands remains a threat. Undoubtedly, the public reaction in either case would have spurred massive opposition to the existence of RPAS. In response to such incidents, the ‘artificial consciousness’ of an autonomous unmanned aircraft should be able to vet its navigator’s intentions before any strike occurs. Moreover, based on the legitimacy of targets, an AI system should never be able to authorize a strike against a civilian installation when the risk of civilian casualties is almost certain. It should thus be impossible for adversaries to exploit the full capability of an RPA even if its satellite communications are jammed. The former Deputy Secretary of Defense of the United States, William Lynn III, claims that “in the twenty-first century, bits and bytes will be as threatening as bullets and bombs.”[xxiii] Therefore, according to the NATO Allied Joint Doctrine for Cyberspace Operations, it is essential for strategists and decision makers to understand that a state’s vulnerability to cyber operations is related to its dependence upon them.[xxiv] Despite the notion that cyber has no physical dimension, we may witness cases where a cyber-attack causes physical damage on a scale equal to an air strike.
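
As a rough illustration of GPS-independent navigation by terrain recognition, the sketch below implements a TERCOM-style position fix: a measured elevation profile is slid along a stored terrain map, and the best-matching offset recovers the aircraft’s position without any satellite signal. The terrain data is synthetic, and a real system would match in two dimensions with far richer sensor models.

```python
# A toy sketch of terrain-contour matching for GPS-denied navigation.
import numpy as np

rng = np.random.default_rng(0)
terrain_map = rng.normal(size=500).cumsum()  # stored elevation map (1-D strip)
true_position = 217                          # where the aircraft actually is
# Simulated radar-altimeter profile: the true terrain slice plus sensor noise.
measured = terrain_map[true_position:true_position + 40] + rng.normal(0, 0.1, 40)

def terrain_fix(profile, terrain):
    """Return the map offset whose terrain slice best matches the measured profile."""
    errors = [np.sum((terrain[i:i + len(profile)] - profile) ** 2)
              for i in range(len(terrain) - len(profile))]
    return int(np.argmin(errors))

print(terrain_fix(measured, terrain_map))  # ~217, recovered without any GPS input
```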

 

In sum, following the invention of gunpowder and the successful splitting of the atom, we are standing at the threshold of the third revolution in weapons development.[xxv] The component of this revolution is the fusion of AI and RPAS. The advent of AI has paved the way for minimising many of the vulnerabilities RPAS demonstrate on the battlefield, though it has not eliminated them; there is still room for greater efficiency. AI is a promising technology that requires a better understanding of all avenues of its application to modern warfare. AI experts and strategists must come together and define a set of fundamental principles drawn from empirical research rather than from lengthy debates, which normally lead nowhere. Otherwise, as in the two previous revolutions, the lessons will be easy to identify only in hindsight.

Notes

[i] John Kelly, Artificial Intelligence: A Modern Myth (Hemel Hempstead: Ellis Horwood, 1995), 11.

[ii] Billy Crone, Drones, Artificial Intelligence, & the Coming Human Annihilation (Nevada: Get A Life Ministries, 2018), 118.

[iii] Houston R. Cantwell, “Controversial Contrails: The Costs of Remotely Piloted Foreign Policy,” Joint Force Quarterly no. 68 (2013).

[iv] Nicholas Grossman, Drones and Terrorism: Asymmetric Warfare and the Threat to Global Security (New York: Bloomsbury Publishing, 2018), 119.

[v] NATO, “NATO releases first-ever strategy for Artificial Intelligence,” October 22, 2021, accessed January 17, 2022, https://www.nato.int/cps/en/natohq/news_187934.htm.

[vi] Tarun Chhabra, Rush Doshi, Ryan Hass, and Emilie Kimball, Global China: Assessing China's Growing Role in the World (Washington, D.C.: The Brookings Institution, 2021), 194.

[vii] Ronald C. Arkin, “Military Autonomous & Robotic Systems Considerations for the way forward from a UK military perspective,” April 16, 2013, 52.

[viii] Houston R. Cantwell, “Operators of Air Force Unmanned Aircraft Systems,” Air and Space Power Journal 23, no. 2 (2009): 72.

[ix] Richard A. Best, “Intelligence, Surveillance and Reconnaissance (ISR) Acquisition: Issues for Congress,” CRS Report for Congress, June 15, 2010, 10.

[x] Defense Security Cooperation Agency, Federal Register, Volume 78, Issue 3, January 4, 2013, 701.

[xi] “Michael Callender Interview: NATO: Autonomous weapons systems,” interview by Elisa Cherry, September 14, 2021, YouTube, https://www.youtube.com/watch?v=ivwrynCeKFI.

[xii] Mike Ryan, Baghdad or Bust: The Inside Story of Gulf War 2 (Barnsley: Pen and Sword Books, 2003), 28.

[xiii] Clayton Schuety and Lucas Will, “An Air Force Way of Swarm: Using Wargaming and Artificial Intelligence to Train Drones,” War on the Rocks, September 21, 2018, https://warontherocks.com/2018/09/an-air-force-way-of-swarm-using-wargam....

[xiv] Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Penguin Books, 2005), 11–14.

[xv] Doug Struck, “Casualties of U.S. Miscalculations,” The Washington Post, February 11, 2002.

[xvi] Ibid.

[xvii] George P. Fletcher, A Crime of Self-Defense: Bernhard Goetz and the Law on Trial (Chicago: University of Chicago Press, 1990), 20.

[xviii] Ariel Avitan, “The Differences Between UAV, UAS, and Autonomous Drones,” Percepto, accessed January 3, 2019, https://percepto.co/what-are-the-differences-between-uav-uas-and-autonom....

[xix] John Naughton, “How do we counter cyber attack? That's the £500m question,” The Guardian, October 31, 2010.

[xx] Marco Roscini, Cyber Operations and the Use of Force in International Law (New York: Oxford University Press, 2014), 2.

[xxi] Tiago Cruz and Paulo Simoes, “18th European Conference on Cyber Warfare and Security,” ACPI, July 4, 2019, 94.

[xxii] Grossman, Drones and Terrorism, 106.

[xxiii] Roscini, Cyber Operations and the Use of Force, 1.

[xxiv] AJP-3.20, “NATO Allied Joint Doctrine for Cyberspace Operations,” NSO, Edition A Version 1, January 2020, https://assets.publishing.service.gov.uk/government/uploads/system/uploa..., 4.

[xxv] William Bunn, “The Challenge of Lethal Autonomous Weapons Systems (LAWS),” ODUMUNC 2021 Issue Brief for the General Assembly First Committee: Disarmament and International Security, https://www.odu.edu/content/dam/odu/offices/mun/docs/1st-lethal-autonomo....

Image: https://www.nato.int/cps/en/natohq/news_120429.htm

This publication was co-sponsored by the North Atlantic Treaty Organization.     

Rafail Georgiadis

Rafail Georgiadis is a fusion analyst in the NATO Alliance Ground Surveillance Force (NAGSF), Sigonella, Italy.

https://www.linkedin.com/in/raphael-georgiadis-370319176/?originalSubdomain=it