Mindhacks has an interesting article about the use of robots in war. We know the U.S. is using pilotless drones to attack suspected terrorists in the mountain range between Afghanistan and Pakistan. This can save lives, and presumably drones have technological capabilities that no human pilot can replicate. But the possibility of human error is replaced by the possibility of computer error and, as Mindhacks points out, even a lack of robot predictability.
I went to a military operations research conference to present at a game theory session. Two things surprised me. First, game theory has disappeared from the field. They remember Schelling but are unaware that anything has happened since the 1960s. Asymmetric information models are a huge surprise to them. Second, they are enamored of computer simulations. They just want to simulate complex games and run them again and again to see what happens. But then you don’t get any intuition for why some strategy works or fails, or really any intuition for the game as a whole. And what you put in is what you get out: if you did not put in an insurgency movement causing chaos, then it’s not going to pop out. This is also a problem for an analytical approach, where you may fail to incorporate key strategic considerations into the game. Clichéd “out-of-the-box” thinking is necessary. Even a Mac can’t do it.
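The “what you put in is what you get out” point can be made concrete with a toy sketch. Below is a hypothetical attacker–defender game (the strategy names and payoffs are my own invention, not anything from the conference): no matter how many times you rerun the simulation, a strategy you never modeled, such as an insurgency, can never appear in the outcomes.

```python
import random

# Hypothetical toy game: an attacker picks one of two sites, a defender
# guards one of them. The strategy sets below are everything the model
# "knows" -- an unmodeled option like "insurgency" cannot emerge from
# replay, no matter how many runs we do.
ATTACKER_STRATEGIES = ["site_a", "site_b"]
DEFENDER_STRATEGIES = ["guard_a", "guard_b"]

def payoff(attack, defend):
    """Attacker scores 1 for hitting an unguarded site, else 0."""
    guarded = "site_a" if defend == "guard_a" else "site_b"
    return 0 if attack == guarded else 1

def simulate(runs, rng):
    """Replay the game many times with uniformly random strategy picks."""
    total = 0
    seen_attacks = set()
    for _ in range(runs):
        attack = rng.choice(ATTACKER_STRATEGIES)
        defend = rng.choice(DEFENDER_STRATEGIES)
        seen_attacks.add(attack)
        total += payoff(attack, defend)
    return total / runs, seen_attacks

rng = random.Random(0)
avg, seen = simulate(10_000, rng)
print(avg)                     # close to 0.5: guessing games wash out
print("insurgency" in seen)    # always False: never modeled, never observed
```

The simulation tells you the average success rate, but not *why* a 50/50 split is the equilibrium logic of a matching-pennies-style game; that intuition is exactly what the analytical approach supplies and the replay approach does not.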
So, as long as there is war, men will go to war and think about how to win wars.
(Hat tip: Jeff, for pointing out the article.)