The Future of Life Institute has presented an open letter, signed by over 1,000 robotics and artificial intelligence (AI) researchers, urging the United Nations to impose a ban on the development of weaponized AI capable of targeting and killing without meaningful human intervention. The letter was presented at the 2015 International Joint Conference on Artificial Intelligence (IJCAI) and is backed by endorsements from a number of prominent scientists and industry leaders, including Stephen Hawking, Elon Musk, Steve Wozniak, and Noam Chomsky.
To some, armed and autonomous AI may seem like a fanciful concept confined to the realm of video games and sci-fi. However, the newly released open letter carries a chilling warning: the technology will be readily available within years, not decades, and action must be taken now if we are to prevent the birth of a new paradigm of modern warfare.
Consider the implications of this. According to the open letter, many now regard weaponized AI as the third revolution in modern warfare, after gunpowder and nuclear arms. For the previous two, however, there have always been powerful disincentives to use the technology. For rifles to be used in the field, a soldier must wield the weapon, and this in turn means putting that soldier's life at risk.
With the nuclear revolution, there was the costly and difficult business of acquiring the materials and expertise required to make a bomb, not to mention the monstrous loss of life and international condemnation that would inevitably follow the deployment of such a weapon, and the threat of mutually assured destruction (MAD). These deterrents have resulted in only two bombs being detonated in conflict over the course of the nuclear era to date.
The true danger of an AI war machine is that it lacks these barriers to conflict. AI could replace the need to risk a soldier's life in the field, and its deployment would not draw the ire of the international community in the same way as the launch of an ICBM. Furthermore, according to the open letter, armed AI drones with the capacity to hunt and kill people independently of human command would be cheap and relatively easy to mass produce.
The technology would have the overall effect of making a military incursion less costly and more appealing, essentially lowering the threshold for conflict. Furthermore, taking the kill decision out of human hands by its nature removes the element of human compassion and a reasoning process that, at least for the foreseeable future, no machine can match.
Another chilling aspect of weaponized AI highlighted by the letter is the potential for such military equipment to make its way into the hands of despots and warlords who wouldn't think twice about deploying the machines to suppress discontent, or even to carry out ethnic cleansing.
“Many of the leading scientists in our field have put their names to this cause,” says Toby Walsh, professor of Artificial Intelligence at the University of New South Wales (UNSW) and NICTA. “With this Open Letter, we hope to bring awareness to a dire subject that, without a doubt, will have a vicious impact on the whole of mankind. We can get it right at this early stage, or we can stand idly by and witness the birth of a new era of warfare. Frankly, that’s not something many of us want to see. Our call to action is simple: ban offensive autonomous weapons, and in doing so, securing a safe future for us all.”
Source: The Future of Life Institute