The Atlantic Vision for an AI Future

“The Mechanical Turk” was an early purported form of AI, invented in 1770. It became famous for playing chess on equal terms with the known champions of its time, and it even ended up playing against celebrities like Napoleon and Benjamin Franklin. It was exhibited around the world for almost a century until its accidental destruction in a fire in 1854. Why is this curious historical artefact relevant to an article about AI? Well, the Mechanical Turk was a hoax; the secret was that a chess player operated the machine from inside. Fittingly, Mturk.com (short for Amazon Mechanical Turk) is the name of the “chess player” behind AI. Unlike the Mechanical Turk, which remained only an exotic attraction, AI is finding its way ever deeper into our lives through companies, governments, and services.[1] Thus, it is important to dispel any myths about it and proceed with clarity about its role, limitations, and ethics before it suffers a fiery demise like its predecessor.

AI is still far from being the pop-culture conception of an all-calculating supercomputer or benevolent guide; it remains only an imitation of human intelligence. Because AI is used for tasks too complex to program wholesale, it must be trained through experience. That is where Mturk.com comes in, providing temporary workers for the AI to learn from. Programmers tag the objects and actions the task requires; workers from Mturk perform the tasks repetitively, sometimes for up to a year; and the patterns their work produces are fed back into the code, yielding an AI that can now perform the same task while trading precision for efficiency.[2]
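The workflow above can be made concrete with a minimal, hypothetical sketch of supervised learning from human-labelled data. The feature values and labels here are invented for illustration; the nearest-neighbour rule stands in for whatever model a real system would train, and shows the trade-off the text describes: the program is fast and simple, but only as precise as the labels the workers gave it.

```python
# Minimal sketch: an AI "learns" a task entirely from examples
# that human workers labelled by hand.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(training_set, features):
    # 1-nearest-neighbour: return the label of the closest
    # human-labelled example. No human is needed at prediction
    # time, but every answer traces back to a worker's tag.
    _, label = min(training_set, key=lambda ex: distance(ex[0], features))
    return label

# Hypothetical crowd-worker output: each image was reduced to two
# crude features (ear length, snout length) and tagged by a human.
labeled = [
    ((1.0, 0.2), "cat"),
    ((0.9, 0.3), "cat"),
    ((0.4, 0.9), "dog"),
    ((0.3, 1.0), "dog"),
]

print(predict(labeled, (0.95, 0.25)))  # prints "cat"
```

Once enough labelled examples exist, the workers are no longer needed for this task; the model simply generalizes from their past judgments, mistakes included.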

Machine learning is a term we hear thrown around often; for outsiders to the field, the above example is what it means in practice. Machine learning has been key to the development of two pieces of software that form a core part of the modern internet: copyright/anti-piracy software and anti-fraud software. In the last two decades the United States has passed quite a few toothless laws to combat piracy, but the means to enforce them were, to be charitable, lacking. Thanks to the AI developed and used by private companies such as Red Points, an affluent Barcelona-based firm specializing in copyright and anti-piracy software, there is now a real possibility of enforcing these laws. Combine this seemingly just service (after all, who doesn’t despise a thief?) with the aggressive acquisition of copyrights across many varying intellectual properties by companies such as 43 North Broadway, LLC, a young company whose stated goal is to acquire as many licenses as possible and which currently owns some 100,000 of them, and we have a situation where, soon enough, there will be a monopoly on content. This monopoly would be enforced instantly and at no human cost, since its enforcement is done by an AI, and it would leave no alternative but to pay endless, arbitrary royalties decided by its ruler, whoever that may be. Such a move would resemble an enclosure of the virtual world (not to be confused with the term digital enclosure, which denotes something else), similar to the enclosure that signalled the death of feudalism, with an erosion of the commons via privatization.

Now, whether you find the above picture dystopian or acceptable is not the point of this article. For good or for bad, the true power of AI is clear: it is a tool for rationalization. It turns inefficient, complex, byzantine patchworks into simple, hyper-efficient hierarchies. This potential should be pursued ceaselessly, rather than the utopian vision of AI managing our lives by automating away difficult decisions, moral or otherwise.

With investors’ massive interest in AI and AI-related projects, there have been, predictably, projects and start-ups that never get past the data-labelling phase of the machine learning process. Facebook was among the culprits, but the most interesting case was Expensify’s SmartScan software.[3] It promised to scan users’ receipts and sort them into the corresponding expense pools. Behind the scenes, the receipts were simply posted to an online forum, a severe violation of users’ privacy, where freelancers completed the job at their leisure. Another interesting case is AI chatbots, used by many companies to handle tech support or retail services; however, nine times out of ten this is pseudo-AI, meaning you are talking to a real person whose job is to simulate being an AI.

Such scams are of no use to an international alliance with serious aims. Those companies use pseudo-AI for marketing purposes and to trick investors, an unfortunate necessity for their survival and growth. But NATO does not benefit from marketing itself as “using AI” or from relying on private capital to operate. It does, however, benefit from using actual AI in its weapon systems, logistics, data analysis, cyber defence, and more. Because it is constituted as an international alliance, it can avoid pseudo-AI altogether. To leapfrog past this awkward phase of AI development, NATO adopted a unique strategy for the development and implementation of AI at its Defence Ministers’ Meeting of October 2021. In its summary we find six Principles of Responsible Use, which serve to curtail the negative effects of widespread AI usage. Those six principles are:

  1. Lawfulness

  2. Responsibility and Accountability

  3. Explainability and Traceability

  4. Reliability

  5. Governability

  6. Bias Mitigation

Each of these principles addresses an issue we can observe in AI development. Lawfulness will see to it that no application of AI violates fundamental human rights. Responsibility and Accountability will ensure that humans, not the AI, remain culpable in any mishap. Reliability will establish clear purposes for each AI system, so as to maximise efficiency and prevent accidental overlap. The final principle, Bias Mitigation, fully captures the theme of this declaration: it is the idea that we can ultimately disregard AI outcomes when they disagree with basic truths of humanity. This is a way to decisively tip the scales in the relationship between an organization and the AI it employs, fully relegating the AI to a simple tool with few “unintended effects” and establishing the human element as the sole decision-maker.

It is imperative to know a tool’s purpose before employing it. AI’s true purpose lies in cutting costs for mundane activities like data entry, where human presence is more hindrance than benefit. When humans are given easy but repetitive, dull tasks, their productivity actually decreases compared with the opposite condition of undertaking difficult but unique tasks. This is the best possible niche for AI to fill, owing to its lack of humanity. The con that accompanies this pro is that, by the nature of AI, expertise and the passing down of expertise become impossible. This poses a great threat. The only people who know how an AI works are the freelancers who created its data, whom you have since disposed of and who no longer work for you, and the programmers who fed that data to the machine, who have only a theoretical understanding of how it works. This means that any error present in the code, or any mistake committed by the AI, is at best unfixable and at worst undetectable; and because AI is always learning, it is always making mistakes.

The above is addressed by the third principle, Explainability and Traceability. With mechanisms in place to ensure that the inner workings of the AI are clear and understandable to its users, we can rest assured that there will be no situations in which the AI makes decisions we cannot explain.

Technological endeavours require long-term maintenance and the passing down of knowledge. Older engineers teach newcomers the tricks of the trade so that when the former eventually retire, the latter can keep the project alive long enough to pass the torch to the next newcomers. With AI, this passing down of knowledge is abstracted away. Because the AI is always learning, and the people who created it worked on it only temporarily, there is ample room for blind spots to develop, where it becomes impossible to understand why an AI performs one action instead of another, or why in this manner and not another. In fiction this is the cliché of the rogue AI determined to wipe out humanity; in reality, a rogue AI is an algorithm we no longer understand conceptually. Thankfully, the consequences of the latter are merely headaches for its users and a lot of work to undo the mistakes it has already made.

Now imagine if a branch of government services, say the issuing of birth certificates, social security numbers, and driver’s licenses, were handled by AI. No more long waiting lines, no more difficulty locating misplaced documents, and instant identity verification by the authorities becomes possible. In short, far less work and trouble for both the civilian population and the government. But after five years of this new practice, a man comes to your office to complain. He has been issued a driver’s license that expires two days after being issued. Because the AI is supposed to be more accurate than humans, you dismiss him as a fraud. He comes to complain again. This time there is a new grievance: the man listed as his father on his birth certificate is not actually his father. The situation has affected him severely; he cannot drive to work and is shunned as a bastard by his family. This time you check, and you see that the AI did in fact make a mistake. But how can you fix it? A human could fix this in two minutes by simply reissuing the document, because a human who made such a mistake would acknowledge it as an error. But when an AI makes a mistake, it is because it is reasonably convinced it did the right thing. The simple task of fixing this man’s documents becomes the great task of first finding out why your AI thinks his driver’s license should expire in two days and that his real father is someone else, and then correcting that mistake in the AI’s train of logic without compromising anything else it does in the process.

In this situation even a great logician like Kurt Gödel would be stumped; the only people qualified to intervene in such a manner would be those with intimate knowledge of the AI’s development, who, as described above, are impossible to find. This is why important decisions must remain in human hands, or we risk finding ourselves in a situation where we understand neither the rules we follow nor the decisions we make. AI is useful for simplifying the most basic activities of governments, corporations, organizations, and militaries. Its downsides are such that, to gain maximum benefit from its widespread implementation, it must be replaced every few years, applied selectively to fields in which human participation is inferior, and never allowed into unique situations that require human judgment, such as military or political affairs. This is what is meant by the principle of Governability: human leaders will remain the ultimate decision-makers, with AI serving only as a tool to ensure that the options presented are precisely calculated as to their requirements and consequences.

NATO’s proposals for the implementation of AI in service of its goal of collective defence encompass mission planning and support, smart maintenance and logistics for NATO capabilities, data fusion and analysis, cyber defence, and the optimisation of back-office processes.[4] It is important to note that NATO AI policy represents a triumph for Atlanticism, as it incorporates the strengths of both the United States’ and the EU’s AI programs. The United States continues to prioritize the military application of AI while largely ignoring the regulation of civilian applications; the EU prioritizes civilian applications while shying away from military uses. The effect has been that the United States has leapt ahead of other countries (bar China) in the development of AI-centred technology like drones, which have been effective in the Middle East but have, in the process, exposed citizens to a variety of abuses because of the deregulatory stance on civilian use, not to mention the scandals of accidental civilian casualties at the hands of those drones. The European Union, on the other hand, has striven to place human dignity above technological progress. While this is a noble goal, it has undoubtedly put it behind in the global AI arms race. Through a series of NATO discussions, the best of both worlds has been achieved in NATO AI policy, which will allow for cutting-edge military developments in AI technology tempered by a strong focus on responsible use and the protection of citizens’ data, democratic values, and human rights.

Notes

[1] Liz Sonenberg and Toby Walsh, “Artificial Intelligence Is Now Part of Our Everyday Lives – and Its Growing Power Is a Double-Edged Sword,” The Conversation, The University of Melbourne, October 10, 2021, https://theconversation.com/artificial-intelligence-is-now-part-of-our-e....

[2] Oscar Schwartz, “Untold History of AI: How Amazon’s Mechanical Turkers Got Squeezed Inside the Machine,” IEEE Spectrum, April 22, 2019, https://spectrum.ieee.org/untold-history-of-ai-mechanical-turk-revisited....

[3] “Pseudo-AI: When humans do bots work,” Mantra AI, https://mantra.ai/blogs/pseudo-ai-when-humans-do-bots-work/.

[4] Simone Roare, “Algorithmic power, NATO and artificial intelligence,” The International Institute for Strategic Studies, November 19, 2021, https://www.iiss.org/blogs/military-balance/2021/11/algorithmic-power-na....

Image: https://www.nato.int/docu/review/articles/2021/07/19/natos-innovation-ch...


This publication was co-sponsored by the North Atlantic Treaty Organization.     

Loand Baxhuku

Loand Baxhuku is a second-year student at UBT, Kosovo. He is interested in AI, governance, history, politics, sociology, and philosophy. His specific interests include nationalism, the European Union, Hegel, the Cold War, and colonialism.
