Russian President Vladimir Putin last week poked a hornet's nest of anxieties over the use of artificial intelligence to gain power in a video address to students at 16,000 selected schools.
“Artificial intelligence is the future, not only for Russia but for all humankind,” he said. “Whoever becomes the leader in this sphere will become the ruler of the world.”
Russia isn’t alone in viewing AI as the next big thing in military arsenals. Both the United States and China see AI as a key to success in armed conflicts of the future.
Tesla CEO Elon Musk was among those who weighed in after Putin made his remarks. Musk has been an outspoken critic of autonomous weapons controlled by AI.
China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.
— Elon Musk (@elonmusk) September 4, 2017
The stakes are high for nations developing AI weapons, wrote Gregory C. Allen, an adjunct fellow at the Center for a New American Security, in a commentary for CNN.
AI could spark a revolution in military technology comparable to the invention of aircraft and nuclear weapons, he suggested.
Russian Breakthroughs Doubtful
Despite its ambitions, Russia isn’t likely to rise to AI dominance through technology breakthroughs, Allen maintained, because China and the United States have digital technology industries that are larger and growing faster than Russia’s.
That said, Russia could play a lead role in weaponizing AI as part of its effort to depose the U.S. as the leading global power and restore Russian dominance over former Soviet territory, he wrote.
Even though Russia is not known as an Internet technology powerhouse, the country has an impressive crew of hackers who have knocked out a large chunk of Ukraine's power grid, infiltrated U.S. nuclear plants, and wreaked havoc in the 2016 U.S. elections, Allen pointed out.
Nevertheless, Putin’s initiative to send a unified message about artificial intelligence to some 1 million students is a kind of call to action that’s missing in the West, observed Sam Curry, chief security officer for Cybereason.
“We’re doing AI ad hoc,” he told TechNewsWorld. “We don’t have a big enough vision. We need some big moonshot ideas to rally us.”
Economic Battlefield
The advantages autonomous weapons could provide on the battlefield have yet to be established.
“Although there have been calls to limit development in that space, it’s not clear to me you’ll have a huge amount of military superiority just because you have some automated weapons,” said Daniel Castro, director of the Center for Data Innovation.
“It’s hard to have military victories in many environments, and I don’t think AI is going to change that,” he told TechNewsWorld.
The real battles for AI dominance will be fought among economies, Castro maintained.
“On the economic side, there is a clear race for dominance,” he said.
“When you look at the impact of AI in different industries going forward, we can see the technology is going to have a transformative impact,” Castro pointed out. “There’s going to be a direct correlation between a country’s success with AI and economic power.”
Dominance Difficult to Achieve
While success with AI may contribute to a nation’s standing in the global community, whether it’s a ticket to dominance may be another matter.
“AI is going to play a significant role in how future technology is leveraged and how decision-making is made, but it’s not clear to me that one country will rule on this,” said Bill Welser, director of the engineering and applied sciences department at the Rand Corporation.
“Even if a country did gain an advantage, it’s unclear it would be able to block technological diffusion from occurring very quickly,” he told TechNewsWorld.
“What we’re seeing now is, as an application base appears in one sector or location, it quickly flows to others — so there’s a very short first-mover advantage,” Welser explained.
People should be concerned about totally replacing humans in the decision-making process to fire weapons, but “we’re pretty far away from that,” he said.
“What we should be more concerned about now is bias by coders of the AI systems,” Welser suggested. “Those biases, even without removing humans from the decision-making loop, could create unwanted outcomes.”
Smart Pandora’s Box
Putin’s remarks have poured gasoline on burning concerns over the use of AI in war machines. Those concerns have prompted the United Nations to begin discussions about weapons that incorporate AI into their operations — weapons such as drones, tanks and automated machine guns.
Russia already has taken steps in that direction. Its Military Industrial Committee in 2015 launched an ambitious research and development program to make 30 percent of the country’s combat power consist of either remotely controlled or robotic platforms by 2025.
The prospect of an AI-powered arms race prodded more than 100 technology specialists from 26 countries, including Musk and Mustafa Suleyman, an executive at Google’s parent company, Alphabet, to call for a ban on autonomous weapons.
“Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the group wrote to the UN in August.
“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways,” the letter continued. “We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”