The DIY Killer Robot

Current laws designed to prevent terrorists or rogue states from getting hold of dangerous artificial intelligence technologies are outdated, according to a leading legal expert.

Professor Rain Liivoja, a senior lecturer at Melbourne Law School, says that international export controls are becoming irrelevant as dual-use technologies proliferate.

He cites a YouTube video in which a man mounts a submachine gun on a drone.

“It’s pretty scary. Technologies used for dual-use purposes are rapidly expanding. The drone can be used for so many lawful purposes that it can be difficult to restrict its use. Nearly everything now can be used for military purposes.”

Professor Liivoja mentions the use of drones for landscape photography and farm surveillance. He says that restricting the transfer of particular military components is becoming less effective with the advent of 3D printing.

In July, over 1,000 artificial intelligence experts signed an open letter warning against the development of autonomous weapons. The letter was presented at the opening of the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, and was signed by the likes of Professor Stephen Hawking, Apple co-founder Steve Wozniak, and Tesla CEO Elon Musk.

The letter states that the materials for manufacturing autonomous weapons will be cheap and easy to mass-produce, and will meet all the conditions necessary for a thriving black market.

The drone featured in the YouTube video is controlled by human hands, but Professor Liivoja says that autonomous weapons technology will soon be easily obtainable.

“We are already at the stage where you can source components for chemical and biological weapons on the open market. Why should autonomous weapons technology be any different?”

He suggests that any ban on autonomous artificial intelligence would be ineffective, because there is no practical way of controlling the underlying technology.

Tim McFarland, a PhD student specialising in law and engineering at Melbourne University, agrees: “A lot of the systems used to build autonomous weapons are the same parts used to build an ordinary computer. I think regulating this field will be extremely difficult.”

But Mr McFarland is also concerned about accountability. He says criminal law assumes an identifiable perpetrator, “one that selects a target and pulls a trigger”. With autonomous weapons, it becomes difficult to apply the same criminal codes. It is a question the law has not yet answered: what happens when an autonomous weapon is responsible for a human death?

With the rapid development of artificial intelligence, both researchers hope that the legal onus will shift from the individual operator of a system to the programmers and engineers who design it.

“It comes down to product liability,” says Professor Liivoja. “As with cars, elevators or microwaves, manufacturing defects or technical oversights will lead to abnormal performance. I think product liability laws are becoming increasingly relevant in the military sphere. It comes down to whether or not criminal law can accommodate this shift.”

The stakes are high. The open letter describes autonomous weapons as the “third revolution in warfare”, and claims that they will become the “Kalashnikovs of tomorrow” if any military power provokes an artificial intelligence arms race.

But the news is not all bad. Both men agree that autonomous artificial intelligence is far more technologically primitive than the media would have us believe.

Denny Oetomo, a senior lecturer in Mechanical Engineering at Melbourne University, concurs:

“I think the apocalyptic fear is exaggerated. When technology is not managed appropriately, damage can result from it, and that is not a new point.”

He says we should always be concerned about humans carrying out irresponsible acts, regardless of the development of new technologies.

“I should be as worried about an irresponsible company or robot designer who intentionally creates a machine to harm a human being as I would about getting robbed and attacked in the middle of the street.”

 