Will AI Replace Our Leaders?
Ever since Alan Turing proposed ‘the Turing Test,’ better known as ‘the imitation game,’ one of the main objectives of Artificial Intelligence, albeit a contentious one, has been the singularity: the creation of a self-teaching system that can outperform human capabilities across a wide range of disciplines. Is this just a matter of time? Ray Kurzweil, Google’s Director of Engineering and one of the singularity’s main proponents, predicts that AI will pass a valid Turing test, and thus achieve human levels of intelligence, by 2029, and that by 2045 “we will multiply our effective intelligence a billion fold by merging with the intelligence we have created” (Galeon and Reedy).
Considering the recent results of the Stanford reading comprehension test, in which AI systems from Alibaba and Microsoft answered a series of questions based on Wikipedia articles more accurately than humans (Chong), what does this mean for the state of humankind? Indeed, if AI is to exist on a par with, and perhaps exceed, human intelligence, this raises the question of who should be making the decisions when it comes to the governance of nation states.
According to Bart Selman, Professor of Computer Science at Cornell University:
“Humans are actually quite poor at making compromises or looking at issues from multiple perspectives…I think there’s a possibility that machines could use psychological theories and behavioural ideas to help us govern, and live much more in harmony. That may be more positive than curing diseases—saving us before we blow ourselves up” (qtd. in Talty).
Considering the current state of affairs between the United States and Russia, with relations between the two near breaking point, having a neutral advisor to consult on the right and appropriate course of action might, for many, seem like a godsend. Indeed, amongst some devotees AI has already risen to cult-like status, with paperwork recently filed for a non-profit religious organisation called The Way of the Future, whose mission is to “develop and promote the realisation of a Godhead based on Artificial Intelligence and through understanding and worship of the Godhead contribute to the betterment of society” (qtd. in Brandon). This raises the question of how much power we are willing to imbue AI with in this regard and, more importantly, how willing political leaders are to accept the decisions made on their behalf. If Trump or Putin were to ignore the results of AI, who would enforce the verdict or ensure that the ruling is obeyed?
Furthermore, how susceptible is AI to bias? As with all uses of AI and AI-assisted tools, the biases of the creators need to be carefully taken into account to ensure that a system as neutral as possible is produced, one that does not merely act as an echo chamber for rogue decision-making behaviour. This, of course, raises the question: would you vote for an AI-assisted president? Do we trust AI wholeheartedly to solve problems as diverse as conflict, war, the drugs epidemic, economic recessions and depressions, homelessness, healthcare, etc.? Moreover, how do we define leadership, and what will AI-assisted leadership look like?
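How such bias creeps in can be sketched in a few lines of code. The example below is entirely hypothetical, with fabricated toy records, but it illustrates the mechanism: a model fitted to skewed historical decisions simply reproduces the skew, while presenting its output as neutral computation.

```python
from collections import Counter

# Hypothetical "historical decisions": past approvals skewed against
# group "B" even at identical qualification scores.
history = [
    ("A", 7, "approve"), ("A", 5, "approve"), ("A", 3, "reject"),
    ("B", 7, "reject"),  ("B", 5, "reject"),  ("B", 3, "reject"),
]

def predict(group, score):
    """Majority vote over past cases from the same group -- a crude
    stand-in for any model fitted to biased training data."""
    votes = Counter(label for g, s, label in history if g == group)
    return votes.most_common(1)[0][0]

# Identical qualifications, different outcomes: the "neutral" model
# has simply learned the historical prejudice.
print(predict("A", 7))  # approve
print(predict("B", 7))  # reject
```

The point is not that any real governance system works this way, but that a system trained on past decisions inherits whatever prejudices those decisions contained, unless its designers actively audit for them.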
Management, Leadership, Authoritarianism
In order to determine whether AI can solve problems as complicated as the War on Terror, healthcare and economic recession, we need to identify the nuances involved in problem-solving and the modes and methods of governance involved in putting these into practice. According to Keith Grint, Professor of Public Leadership and Management at Warwick Business School, problems can be categorised as ‘tame,’ ‘wicked,’ or ‘critical,’ and these vary greatly in both the solutions sought and types of leadership required (12).
Tame problems, as the name suggests, are resolvable through unilinear acts and more than likely have occurred before, resulting in the formation of standard operating procedures. Many scholars believe that tame problems are primarily associated with management, as these are issues and dilemmas that have been experienced before (déjà vu) and for which procedures are therefore more likely to have been put in place. Indeed, given the straightforward nature of tame problems, the whole procedure, or perhaps elements of it, could be automated to enable decision-makers to spend more time dealing with both wicked and critical problems (Grint 12).
Wicked problems have no clear relationship between cause and effect and cannot be isolated and reintroduced to their environment without profoundly affecting it. These are primarily associated with leadership, as individuals need to face the unknown and uncertainty that awaits them (vu jàdé) by transferring their authority to the collective in the hope that it can assist in addressing the problem. Thus, a leader must ask the right questions rather than provide the right answers, which raises the question of how AI can be programmed to do just that to aid in the decision-making process. Will surveys, polling questions, etc. be needed to communicate with the collective? How broad in scope will these be? How will it ensure unbiased and unprejudiced results? Will compassion and empathy be sacrificed in the fullest pursuit of rationality? (Grint 12)
As AI models learn to make generalisations from large data sets (with regularisation used to keep them from merely memorising their training examples), will decisions suggested by AI be determined on the basis of past historical events such as the Holocaust, the bombing of Hiroshima, the Rwandan Genocide, etc.? Considering that AI has written new verses in the same vein and tone as the Bible, could it possibly provide policies, legislation, etc. that specifically address the problems at hand?
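For readers unfamiliar with the term, regularisation can be illustrated with a minimal sketch. The ridge regression below (a standard textbook technique, not anything specific to governance AI) adds a penalty on large weights to the ordinary least-squares fit: the model is nudged away from memorising its training data and toward smoother generalisations. All the data here is randomly generated for illustration.

```python
import numpy as np

# Toy data: y depends linearly on five features, plus a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=50)

def ridge(X, y, lam):
    """Closed-form ridge regression: minimise squared error plus
    lam * ||w||^2. The penalty shrinks the weights toward zero,
    trading a little training accuracy for better generalisation."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_plain = ridge(X, y, lam=0.0)   # no regularisation: ordinary least squares
w_reg = ridge(X, y, lam=10.0)    # regularised: weights are shrunk

print(np.linalg.norm(w_plain), np.linalg.norm(w_reg))
```

The larger the penalty `lam`, the smaller the fitted weights become; this is the sense in which a regularised model "generalises" from its data rather than reproducing it exactly.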
On a more sinister note, will AI-assisted decision-makers exploit AI to ensure the implementation of their favoured solutions? As the historian Margaret MacMillan argues:
“We can learn from history, but we can also deceive ourselves when we selectively take evidence from the past to justify what we have already made up our minds to do” (Age of the Sage).
Grint, however, argues that the greatest irony of leadership is that when it is deemed most necessary, it is most often avoided. He provides the example of global terrorism, a wicked problem that requires long-term and collaborative leadership processes, an approach unlikely to get the official in question elected (14). Perhaps, then, if used correctly, a neutral and unbiased AI could be deployed in the initial stages of those long-term and collaborative processes to provide the unpopular, albeit necessary, solution on which to base possible compromises and more nuanced strategies.
Lastly, critical problems are presented as a crisis with very limited time for decision-making and action. This immediacy and call to action is often associated with authoritarianism, as decision-makers (commanders) have to provide the answer to the problem instead of engaging in processes (management/tame problems) and discussion (leadership/wicked problems). Indeed, god-like decisiveness is required, and any uncertainty will more than likely remain within the private sphere, unbeknownst to the public. However, it is important to note that critical problems can be framed as such by participants, decision-makers and policymakers who, seeking their own ends, strategically manipulate the action taken (Grint 13). Examples throughout history include the 1933 Enabling Act after the Reichstag Fire, or the Gulf of Tonkin incident, which resulted in the United States’ further involvement in the Vietnam War.
Would a nation state trust an AI-assisted policymaker in times of crisis such as the Cuban Missile Crisis? Considering that AI deals in the technical and the rational, the latter of which could undoubtedly prove useful, it cannot at present think in creative ways to tackle the nuances, cultural contexts and, indeed, personalities of those involved.
Relationships and Power Structures
In our daily lives, the use of AI already involves individuals putting their faith in its hands; one only has to think of the way GPS navigation is trusted, often wholeheartedly, over our gut instincts and previous knowledge. As AI continues to gain importance, will there correspondingly be bigger leaps of faith and trust in AI and the services and processes it can provide?
The infamous compliance experiments of Stanley Milgram and Philip Zimbardo drew attention to the ways in which most individuals, most of the time, comply with authority, even if that compliance subsequently leads to the infliction of pain upon others, innocent or otherwise, provided they believe the rationale in question, a rationale that exempts them from taking responsibility for their actions (Grint 21). This presents us with an increasing anxiety in the age of AI. If AI is presented as rational and scientific and is, perhaps unbeknownst to us, coloured by the biases of the policymaker in question, will masses of the population blindly follow its instructions even if they cause harm to others?
Furthermore, power, and the imbuement of others with that power, is the result of a relationship between leaders and followers: a tenuous relationship based solely on the compliance and cooperation of the followers, who at any moment could choose to resist the authority of their appointed leader (Grint 21). Where, then, does AI fit into this equation? Perhaps unsurprisingly, human beings are prone to attributing both success and failure in endeavours to individual leaders, an attribution that strengthens with the significance of those successes and/or failures. With the assistance of AI, will this attribution lessen or increase? Will AI act as a scapegoat onto which the policymaker can project their failings? Will there be advocacy for the sole use of AI, or for its removal entirely?
At present, only time will tell the extent to which AI will be used in matters of policy, globally and domestically. The most important issue, particularly following the recent Cambridge Analytica scandal and the impact of fake news on elections and referenda worldwide, is to ensure that AI is unbiased and devoid of malicious intent. Moreover, we need to ensure that AI is not blindly accepted, that its use is interrogated, discussed and debated, to prevent the expansion of AI from becoming a wicked and/or critical problem that hinders rather than helps. As the late Stephen Hawking once said:
“Computers will overtake humans with AI within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours” (Marwah).
Works Cited
“Learning from history.” Age of the Sage, http://www.age-of-the-sage.org/philosophy/history/learning_from_history.html
“The Turing Test: Alan Turing and the Imitation Game.” Psych.utoronto.ca, http://www.psych.utoronto.ca/users/reingold/courses/ai/turing.html
Brandon, John. “An AI god will emerge by 2042 and write its own bible. Will you worship it?” Venture Beat, 2 Oct. 2017, https://venturebeat.com/2017/10/02/an-ai-god-will-emerge-by-2042-and-write-its-own-bible-will-you-worship-it/
Chong, Zoey. “AI beats humans in Stanford reading comprehension test.” Cnet, 16 Jan. 2018, https://www.cnet.com/news/new-results-show-ai-is-as-good-as-reading-comprehension-as-we-are/
Galeon, Dom and Christianna Reedy. “Kurzweil Claims That the Singularity Will Happen by 2045.” Futurism, 5 Oct. 2017, https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045/
Grint, Keith. “Wicked Problems and Clumsy Solutions: the Role of Leadership.” Clinical Leader, Volume I Number II, December 2008, pp. 10-25.
Marwah, Aman. “Artificial Intelligence Today.” IIMUN Blog, 13 December 2017, http://iimun.in/blog/index.php/2017/12/13/artificial-intelligence-today/
Talty, Stephan. “What Will Our Society Look Like When Artificial Intelligence Is Everywhere?” Smithsonian, April 2018, https://www.smithsonianmag.com/innovation/artificial-intelligence-future-scenarios-180968403/