Human rights activists around the globe have frequently condemned drone strikes carried out by the U.S. Behind this condemnation lies an implicit fear that drones may one day advance to the point of carrying out strikes autonomously. The Pentagon has now given assurances that this will not happen.
Given the pace at which artificial intelligence in robotics is advancing, the day when robots can comprehend huge amounts of information in real time is not far off. Many therefore fear that robots used for military purposes may one day be allowed to make autonomous decisions about killing their targets.
The Pentagon, however, has now signed a series of instructions ensuring that whenever a robot carries out a strike, it will be guided by a human operator at the back end. That operator must exercise judgment about the importance of the strike and will be held accountable for the decision afterward.
The document vows to minimize the possibility that failures of an autonomous or semi-autonomous robot result in unintended engagements. The instructions delve into the details of both the hardware and the software of a military robot, giving directives on how to implement proper controls at the design stage and beyond.
The Pentagon's move comes at a significant time. Only recently, Human Rights Watch voiced concerns that increasing drone autonomy in carrying out raids may eventually lead to robots pulling the trigger on their own. Such scenarios would pose grim questions about who should be held accountable for a wrongful strike.
The new instructions issued by the Pentagon are meant to appease activists and assure them that robots are not going to kill on their own, at least not yet.