The European Parliament has set out its definitive position on the military uses of artificial intelligence (AI) in a newly adopted report. Noting that the technology can replace neither “human decision-making nor human contact”, it also insists on the need for an EU strategy to prohibit lethal autonomous weapon systems (LAWS).
“Faced with the multiple challenges posed by the development of AI, we need legal responses,” declared Gilles Lebreton, a Member of the European Parliament (MEP) from France and author of the new 18-page report, following its adoption on 20 January by 364 votes in favour to 274 against, with 52 abstentions.
“AI must always remain a tool used only to assist decision-making or help when taking action,” he said, adding that human operators “must be able to correct or disable it in case of unforeseen behaviour”.
His report stresses that human rights must be respected in all EU defence-related activities, with AI-enabled systems allowing humans “to exert meaningful control, so they can assume responsibility and accountability for their use”.
Moreover, it argues that the use of LAWS “raises fundamental ethical and legal questions on human control” and calls for an EU strategy to prohibit them, along with a ban on so-called “killer robots”.
“The decision to select a target and take lethal action using an autonomous weapon system must always be made by a human exercising meaningful control and judgement, in line with the principles of proportionality and necessity,” says the report, adding that the anthropomorphisation of LAWS should be prohibited “in order to rule out any possibility of confusing humans with robots”.