
World Should Ban AI Weapons, Not AI

July 29, 2015
Stephen Hawking and Elon Musk have joined AI and robotics experts in a petition to ban intelligent killing machines. Is that a good thing?

Physicist Stephen Hawking has a lot of theories. Some say that he has a theory of everything.

While his classic work in mathematics has too many numbers (and even letters and weird emojis like β and Ø for some reason) for the masses, his newer hypotheses really capture the hearts and adrenal glands of science enthusiasts the world over.

In "Autonomous Weapons: an Open Letter from AI & Robotics Researchers," delivered to the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, he and fellow scientists posit that if humanity arms artificially intelligent machines, it may be nigh impossible to pry those guns from their cold, lifeless hands.

First there was his warning about trying to send messages into space: "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans," Hawking posited in 2010.

If we suppose this to be true, does that mean Elliott was an '80s version of Pocahontas and E.T. was John Smith?

Maybe it's more like "To Catch a Predator," but the Dateline version, not the Arnie version.

Now Hawking, Tesla and SpaceX founder Elon Musk, and Apple co-founder Steve Wozniak have signed their names to the plea for common sense.

If this sounds a bit like the beginning of any sci-fi movie, that's because it is. What's really exciting for journalists like me is that the military-industrial complex never listens. The authors argue that automated weapons could be this century's Kalashnikovs, aka AK-47s, which have spread to every war-mongering corner of the planet to no one's benefit.

Here's the major argument that most likely will be ignored:

Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

Personally, I'd feel safer with Skynet in charge of terminators than ISIS.

Now if developers manufactured these machines with something akin to Isaac Asimov's "Three Laws of Robotics," then maybe things would only reach "I, Robot" levels of panic and destruction.

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
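As a toy illustration (not anything from the open letter, and with entirely hypothetical `Action` fields invented for this sketch), the strict priority ordering of Asimov's laws can be expressed as a rule check where each law only applies if no higher-ranked law has already decided:

```python
# Toy sketch of Asimov's Three Laws as a strict priority check.
# The Action fields and permitted() logic are hypothetical,
# made up purely to illustrate the ranking of the laws.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False          # would directly injure a human
    allows_human_harm: bool = False    # standing by while a human is harmed
    ordered_by_human: bool = False     # a human commanded this action
    endangers_robot: bool = False      # the robot risks its own existence

def permitted(action: Action) -> bool:
    # First Law outranks everything: no harming humans, and no
    # inaction that lets a human come to harm.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders (we already know this one
    # doesn't violate the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation counts only when no higher law applies.
    return not action.endangers_robot

# Law 2 beats Law 3: a direct order that risks the robot is still permitted.
print(permitted(Action(ordered_by_human=True, endangers_robot=True)))  # True
# Law 1 beats Law 2: an order to harm a human is refused.
print(permitted(Action(harms_human=True, ordered_by_human=True)))      # False
```

The point of the ordering, as the movie plots all hinge on, is that each rule can only be consulted after every higher rule has been satisfied — which is exactly where hacked or badly specified rules go wrong.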

Even with logic and ethics in place, nothing is certain. If you can hack a car, someone will figure out how to hack a robot.

That's why it's truly best for the top minds in the fields of robotics and AI to focus on industrial applications, such as improving methods for manufacturing solar panels or medical devices. Think what Einstein and Oppenheimer could have achieved if not for the Manhattan Project. Arming robots is to scientists what Facebook is to Millennials: a massive time suck that steals their best years.

That's not to say working to make more independent AI robots should be avoided. The Industrial Internet of Things is a revolution that cannot be ignored, nor should it be. The experts themselves agree in the letter's conclusion: "AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so."

The important thing is that the discussion is out there, and the world still has a chance to choose world-building over nation-building.

Imagine intelligent machines armed with epoxy, not C4, going into war zones to help rebuild.

It sounds futuristic, but the technology is already in plants around the world. Accidents still happen: a robot first killed a human in 1979, and most recently a worker was killed at a German Volkswagen plant. If the focus is on using AI to make robots smarter and safer, then that's a sci-fi movie worth living in.

If you want to ask Stephen Hawking any questions about AI or aliens, be sure to visit the Reddit Q&A, going on now until Aug. 4.

About the Author

John Hitch | Editor, Fleet Maintenance

John Hitch, based out of Cleveland, Ohio, is the editor of Fleet Maintenance, a B2B magazine that addresses the service needs for all commercial vehicle makes and models (Classes 1-8), ranging from shop management strategies to the latest tools to enhance uptime.

He previously wrote about equipment and fleet operations and management for FleetOwner, and prior to that, manufacturing and advanced technology for IndustryWeek and New Equipment Digest. He is an award-winning journalist and a former sonar technician who served honorably aboard the fast-attack nuclear submarine USS Oklahoma City (SSN-723).