
Munk School panel explores how information technologies affect the realities of war

Panelists Jon R. Lindsay and Janice Stein discussed the Ukraine conflict with moderator and SRI Associate Director Peter Loewen at the Munk School of Global Affairs and Public Policy; SRI Faculty Affiliate Avi Goldfarb co-authored a recent article with Lindsay, “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War.”

How will advances in artificial intelligence (AI) reshape how conflicts unfold in the 21st century? Will new technologies result in wars fought by automated robots, with humans entirely absent from the picture? Will more powerful tools enable rapid and decisive victories, as nations armed with the latest tech dominate the theatre of global politics?

According to Jon R. Lindsay, an associate professor at the Georgia Institute of Technology who studies the impact of information technology on global security, none of these assumptions can be trusted. In fact, Lindsay argues, in many cases the opposite is true.

On April 22, 2022, the University of Toronto’s Munk School of Global Affairs and Public Policy and the Schwartz Reisman Institute for Technology and Society (SRI) hosted a discussion exploring the impact of new technologies of warfare, featuring commentary from Lindsay and Janice Stein, the Belzberg Professor of Conflict Management in U of T’s Department of Political Science and founding director of the Munk School. The event was moderated by Munk School Director and SRI Associate Director Peter Loewen.
In a recent article in International Security, Lindsay and SRI Faculty Affiliate Avi Goldfarb argue that while AI is able to accomplish many tasks formerly thought to be uniquely human, “it is not a simple substitute for human decision-making.” Rather, the authors contend that although advancements in machine learning have improved statistical prediction, “prediction is only one aspect of decision-making.” The proliferation of AI technologies therefore puts a premium on complementary elements that are essential for the decision-making process, such as the significance of quality data and the need for sound judgement—a skill in which humans still outperform machines. “If AI makes prediction cheaper for military organizations,” write Lindsay and Goldfarb, “then data and judgment will become both more valuable and more contested.”

As Lindsay observed in his opening remarks at the Munk School event, “There is a fear among governments that AI will be the fundamental driver of military power and national advantage in the future.” These fears can generate pressures to adopt AI systems quickly, a trajectory Lindsay describes as part of a broader history in his book, Information Technology and Military Power (2020).

For Lindsay, the social dimension of new technologies and a sense of continuity from the past are often more significant factors than a given technology’s level of sophistication. “You have to have the organizational context matched up with the strategic context,” noted Lindsay. “More often than not, we find that the very same systems that are designed to improve information and reduce uncertainty actually become new sources of uncertainty.”

Analyzing the use of information technologies in the ongoing crisis in Ukraine, Lindsay and Stein noted discrepancies between their current uses and their depictions in popular culture. While advanced technologies have played essential roles for both sides in the conflict, their diffusion and impact do not follow the "myths, projections, and fantasies" of autonomous robots and cyberwarfare, Lindsay observed. While AI may be largely absent from Ukrainian battlegrounds, the panelists noted several other contexts in which information technologies are contributing in essential ways, including the use of cyberspace to sway public perception and the leveraging of supply chain networks for Ukraine's defense.


Lindsay noted that despite Russian forces previously being considered by many to be a cyber-warfare "powerhouse," their invasion has been neither quick nor decisive, and has become an arduous war of attrition. Stein further observed that the use of small, cheap Turkish drones has been decisive in Ukraine's defense against the "clunky, old-fashioned approach" of Russian tanks, despite Russia's superior capacity and investment. Both panelists also commented on the significance of intelligence data being revealed publicly, enabling third-party observers to access up-to-date information on active forces and casualties, and strengthening the international community's condemnation of Russia's tactics through public awareness of the atrocities being committed.

The discussion raised important questions regarding how different strategic contexts alter the role and significance of data, and where AI can be effectively applied—or not—toward national defense. For Lindsay, the notion that AI can be applied everywhere is a myth: AI tools are most effectively deployed in administrative areas that are already clearly structured by organizational judgement. By contrast, areas of uncertainty—such as active conflicts—require levels of strategic judgement found only in humans with the experience necessary for accurate insights. Despite the potential of contemporary technologies, Lindsay observed, “our best theories of war are fundamentally grounded in uncertainty.”

Lindsay also noted that the complexity of AI systems can make coordination efforts more challenging, not necessarily more efficient. This flaw can even be weaponized: adversaries can target the data on which AI systems rely, using data attacks to obfuscate and undermine sensors and thereby degrade data quality to their own strategic benefit. Stein noted that, despite these sources of uncertainty, democracies have a “huge advantage” in applying new technologies, because they are structured to allow for open discussion, which can help to overcome these challenges.

As the session made clear, AI will not be a substitute for humans anytime soon. Rather, human decision-makers—especially those with sufficient experience to possess insight and judgement amidst a wide range of uncertainties—will become even more important within an AI-enabled world.
