American Go E-Journal

Go Spotting: The inscrutability of artificial intelligence in go… and nuclear warfare

Sunday, October 6, 2019

In a September 7th article titled “Battle algorithm,” The Economist writes of a “paradox” that may be familiar to readers who analyze their games using Leela Zero and other AIs. “AI might at once penetrate and thicken the fog of war, allowing it to be waged with a speed and complexity that renders it essentially opaque to humans.” The article notes that in AlphaGo’s March 2016 victory over Lee Sedol, the AI “played several highly creative moves that confounded experts,” and this led a workshop at the Chinese Academy of Military Science to conclude that, in the words of one source, “an AI could create tactics and stratagems superior to those of a human player in a game that can be compared to a war-game.”

While the article in The Economist focuses on conventional warfare, the strengths and weaknesses of go-playing AIs also figure in recent publications on nuclear warfare.

In 2017, the American think tank RAND Corporation held a series of workshops on AI and nuclear war; the resulting report noted that AlphaGo’s victory “astonished even AI and strategy experts.” “[T]he decisionmaking in Go is far simpler to address than in nuclear war… but by the year 2040, it does not seem unreasonable to expect that an AI system might be able to play aspects or stages of military wargames or exercises at superhuman levels.” It is “likely that humans making command decisions will treat the AI system’s suggestions as on par with or better than those of human advisers. This potentially unjustified trust presents new risks that must be considered.”

This year, an August 16 commentary by two American researchers also cites AlphaGo. The commentary notes that AlphaGo Zero “learned through an iterative process,” whereas “in nuclear conflict there is no iterative learning process.” “The laws of war require a series of judgments…. Software that cannot explain why a target was chosen probably cannot abide by those laws. Even if it can, humans might mistrust a decision aid that could outwardly resemble a Magic 8-Ball.” Nonetheless, the commentary argues for giving AI more control over US nuclear weapons.

Thanks to Fred Baldwin for once again spotting go, this time in “Battle algorithm.”

-edited by Joe Cua