Team 21
Ethics Homework
Read our Conversation


Ethics Chat


Welcome to The Smart Bet’s annual ethics chat. This week we’ll be addressing the Musk/Hawking Open Letter On Autonomous Weapons. In particular, we’ll be discussing the following topic: Is “a ban on offensive autonomous weapons beyond meaningful human control” going to work?

The transcript below has been lightly edited.

Begin - Tuesday, December 4th, 2018 at 11:00 PM

Tyler: Welcome everyone! Today’s topic is highly relevant to our IPS experience. Who wants to begin the conversation?

Brian: Sure, so autonomous weapons present quite a dilemma: on one hand, nobody wants to be standing downrange from them, but on the other hand, I’d say they vastly increase your safety if you are the one employing them. Think of all the times in history that one civilization has wiped out another because of superior technology. In this day and age, however, major civilizations are competing in a new kind of arms race: one driven by the power of technology. And while it may seem unjust to witness the use of these kinds of systems, how different is it if the actual power struggle between nations is carried out by their competing autonomous weapons? Then it seems much more like a game of extreme chess--more strategic than inhumane.

Tyler: I think that is a great point to get us started with, Brian. There is definitely a lot to unpack there, and one thing in particular I would like to home in on is the inherent human aspect behind all of this. These weapons will only exist if humans develop them, will only be abused if humans abuse them, and will only kill if humans program them to kill. Humans kill humans, and I think that is an important thought to keep in mind throughout.

Kenneth: One argument I have heard in favor of autonomous robots is that they may help save the lives of soldiers and civilians. For example, sending autonomous robots out into the field could prevent human casualties in war. Further, because robots are to an extent disposable, they can be much more conservative in abiding by the rules of engagement. Do you think there is potential for autonomous weapons to have a positive impact on warfare?

Tyler: Yes, absolutely, although I don’t think this is cut and dried. The arguments we have mentioned so far definitely outline some of the positives (saving lives), while there is a whole host of negatives, namely the ability to efficiently murder an incomprehensible number of people. I’d like to push the conversation a little further down the road for a minute. Do you believe that banning autonomous weapons is a good idea? Is it feasible? I, for one, have my doubts.

Kenneth: I’m very skeptical of the feasibility of enforcing bans on autonomous weapons. Simple autonomous weapons are pretty cheap to build and relatively easy to duplicate. Enforcing a ban on autonomous weaponry is likely to be incredibly difficult, and a ban may also have the downside of slowing the development of intelligent physical systems.

Brian: I’d say certain kinds of bans are possible. Something like the Nuclear Nonproliferation Treaty has been in effect for some time now, and while the US adheres to its regulations, it still maintains one of the largest nuclear arsenals in the world. Much of that arsenal was built up during stressful times for the U.S., but a lot of it persists out of a kind of subtle necessity--if your adversaries have these weapons, it would be foolish not to keep some yourself.

Tyler: Building directly off of that, how effective would you say the Nuclear Nonproliferation Treaty has really been? Countries, including some that signed onto the treaty, are still trying to acquire nuclear weapons. And nuclear weapons are incredibly complex, take vast resources to build, and are difficult to hide. Imagine trying to enforce a ban on autonomous weapons, where all an unscrupulous nation needs to hide is some small drones and lines of code.

Kenneth: I think that the Nuclear Nonproliferation Treaty was definitely a step in the right direction. While there are still weapons being developed and moved between countries, I do not think we would have been better off without an explicit agreement about the dangers of nuclear weaponry. That said, given the ease of producing armed autonomous systems, I think a ban on them may be better compared to bans on firearms.

Tyler: I like that connection, Kenneth. I think that is a really good point to make. There is a fundamental difference between cheap armed autonomous weapons and nuclear weapons. With that said, haven’t we seen throughout history that people get what they want? With an abundance of capable engineers around the world, it wouldn’t be too hard for a hostile regime to acquire this type of weaponry. I also don’t see countries like the U.S. giving up the autonomous weapons we already have (and yes, we do have them, e.g., sentry guns or the Phalanx CIWS).

Brian: I think another thing to consider here is that the current military structure may not be willing to give up that much discretion. We are talking about weapons beyond meaningful human control. To a certain extent, I think the issue is somewhat self-regulating, in that individuals who pursue strong military action generally also desire strong control over their efforts. A “ban” to them may be taken more as an insurance policy: a) they maintain their position, and b) they don’t have to worry about being surpassed effortlessly by an adversary. Essentially, I feel that there is an innate desire to maintain control of one’s situation. While a treaty or letter may not carry any more weight here than in any other situation, I think there is a general aversion to offloading tasks--especially on the scale of war--to autonomous systems.

Tyler: Clearly, there is a lot more that could be said here. It appears, however, that we are out of time for now. Thank you for reading this year’s edition of The Smart Bet’s ethics chat. We look forward to seeing you next year in ECE 9999: Autonomous Weapon Design.

End - Tuesday, December 4th, 2018 at 11:50 PM