India has proposed a new round of talks with Pakistan. The last meaningful talks in 2007 led to a thawing of relations and real progress till everything was brought to a grinding halt by the terrorist attacks in Mumbai.
What are the payoffs and incentives for the two countries? David Ignatius at the Washington Post offers this analysis:
“The India-Pakistan standoff is like one of those game-theory puzzles where both nations would be better off if they could overcome suspicions and cooperate — in this case, by helping the United States to stabilize the tinderbox of Afghanistan. If Indian leaders meet this challenge, they could open a new era in South Asia; if not, they may watch Pakistan and Afghanistan sink deeper into chaos, and pay the price later.”
The quote offers a theory for how India might gain from peace but what about Pakistan? Pakistan cannot be treated as a unitary actor. Some part of the elite and perhaps even the general population may gain from an easing of tension and a permanent peace with India. But the Pakistani military has quite different interests. The military dominates Pakistan politically and economically. Its rationale for resources, power and prestige relies on perpetual war, not perpetual peace. Sabotage is a better strategy for them than cooperation with India. The underlying game is not the Prisoner’s Dilemma.
Military payoffs have to be aligned with economic payoffs to encourage cooperation. Economic growth can also generate the surplus to bankroll a bigger army. A poor country needs the threat of war to divert valuable resources into defense. But a rich country does not.
The hypothetical “ticking time-bomb” scenario represents a unique argument in favor of torture. There will be a terrorist attack on Christmas day and a captive may know where and by whom. Torture seems more reasonable in this scenario for a few reasons.
- It’s a clearly defined one-off thing. We can use torture to defuse the ticking time-bomb and still claim to have a general policy against torture except in these special cases.
- The information is especially valuable and verifiably so.
- There is limited time.
If we look at torture simply as a mechanism for extracting information, in fact reasons #1 and #2 by themselves deliver at best ambiguous implications for the effectiveness of torture. A one-off case means there is no reputation at stake and this weakens the resolve of the torturer. The fact that the information is valuable means that the victim also has a stronger incentive to resist. The net effect can go either way.
(Keep in mind these are comparative statements. You may think that torture is a good idea or a bad idea in general; that is a separate question. The question here is whether aspects #1 and #2 of the ticking time-bomb scenario make it better.)
We would argue that a version of #3 is the strongest case for torture, and it only applies to the ticking time-bomb. Indeed the ticking time-bomb is unique because it alters the strategic considerations. A big problem with torture in general is that its effectiveness is inherently limited by commitment problems. If torture leads to quick concessions then it will cease quickly in the absence of a concession (but of course continue once a concession has revealed that the victim is informed). But then there would be no concession. And as we wrote last week, raising the intensity of the torture only worsens this problem.
But the ticking time-bomb changes that. If the bomb is set to detonate at midnight then torture is going to end whether he confesses or not. Now the victim faces a simple decision: resist torture until midnight or give up some information. The amount of information you can get from him is limited only by how much pain you are threatening. More pain, more gain.
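To make the "more pain, more gain" arithmetic concrete, here is a toy calculation (my own illustrative accounting, not the model from our paper): with a hard deadline the threat is credible in every remaining period, so the victim concedes any secret whose value to him is less than the pain he would absorb by withholding it.

```python
def secrets_conceded(pain_per_period, periods_to_deadline, cost_per_secret):
    """Toy ticking time-bomb accounting: torture credibly runs until the
    deadline whether or not the victim talks, so he weighs the total
    threatened pain against the personal cost of each secret and
    concedes until the two balance. All units are illustrative."""
    total_threatened_pain = pain_per_period * periods_to_deadline
    return total_threatened_pain // cost_per_secret

few = secrets_conceded(2, 10, 3)    # low-intensity threat
many = secrets_conceded(4, 10, 3)   # doubled intensity, same deadline
```

Doubling either the intensity or the time left before midnight doubles the threatened pain and, in this toy accounting, roughly doubles the information extracted.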
Sandeep and I are writing a paper on torture. We are trying to understand the mechanics and effectiveness of torture viewed purely as a mechanism for extracting information from the unwilling. A major theme we are finding is that torture is complicated by numerous commitment problems. We have blogged about these before. Here is Sandeep’s first post on torture which got this whole project started.
A big problem is that torture takes time and when the victim has resisted repeated torture it becomes more and more likely that he actually has no information to give. At this point the torturer has a hard time credibly committing to continue the torture because in all likelihood he is torturing an innocent victim. This feeds back into the early stages of the torture because it increases the temptation for the truly informed victim to resist torture and pretend to be uninformed.
In light of this it is possible to say something about the benefits of adopting more and more severe forms of torture, waterboarding say. A naive presumption is that a technology which delivers suffering at a faster pace would circumvent the problem because it makes it harder for the victim to hold out long enough.
But this logic is backwards. Indeed, if it were true that more severe torture induced the informed to reveal their information early, then this would only hasten the time at which the torture ceases because the torturer becomes convinced that his heretofore silent victim is in fact innocent. So credible torture requires that those who resist the now more severe torture must find compensation in the form of less information revealed in the future. In the end the informed victim is no worse off and this means that the torturer is no better off.
Once you account for that what you are left with is that there is more suffering inflicted on the uninformed who has no alternative but to resist. And this only makes it more difficult to continue torturing once the victim has demonstrated he is innocent. That is, the original commitment problem is only made worse.
Government organizations often compete, not cooperate. They compete for funding from the central government and if, say, the C.I.A. succeeds in some task and the N.C.T.C. does not, money, status, access etc. might move naturally towards the former from the latter. If the N.C.T.C. helps the C.I.A. catch a terrorist, ironically, their own hard work is punished. On the other hand, competition helps to give the bureaucracies the incentive to work hard. That is the positive effect that must be counterbalanced against the negative effect on incentives to cooperate. What is the optimal incentive scheme?
This seems like a pretty important question and someone has studied an important part of it. The classic paper is Hideshi Itoh’s Incentives to help in Multi-Agent Situations.
Suppose the marginal cost of helping is zero at zero effort of helping. Then, if one agent’s help reduces the other’s marginal cost of effort at his main task, it is optimal to incentivize teamwork. How do you do that? One agent has to be paid when the other succeeds. The assumptions that efforts are complements and that the marginal cost of help is zero at zero do not seem to be a big stretch in the present circumstances. The benefits of greater competition (lower resource costs) must be traded off against the costs (less cooperation and hence a greater chance of a successful terrorist attack if “dots are not connected” across organizations).
Itoh also shows that if the marginal cost of helping is positive at zero help, the optimal scheme either involves total specialization or, more surprisingly, substantial teamwork. This is because giving agents the incentive to help each other just a little is very costly, given the cost condition. So, if you are going to incentivize teamwork at all, it is optimal to incentivize large chunks of it. If the benefits of catching terrorists are large, this logic also pushes the optimal scheme towards teamwork.
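A stripped-down numerical sketch of the first result (the functional forms and numbers are illustrative stand-ins, not Itoh's actual model): one agent splits scarce attention between his own task and helping a partner whose output he complements. When the marginal cost of helping is zero at zero help, some teamwork is optimal; when it is positive at zero, the optimum jumps to full specialization.

```python
def optimal_mix(help_cost):
    """Grid-search the surplus from one agent splitting scarce attention
    between own effort e and help h for a partner whose output the help
    boosts. help_cost maps h to its cost. Illustrative forms only."""
    best = (float("-inf"), 0.0, 0.0)
    for i in range(101):
        for j in range(101 - i):           # attention constraint: e + h <= 1
            e, h = i / 100, j / 100
            surplus = e ** 0.5 + (0.5 + h) ** 0.5 - e ** 2 - help_cost(h)
            if surplus > best[0]:
                best = (surplus, e, h)
    return best

# Quadratic help cost: marginal cost of helping is zero at h = 0.
_, e1, h1 = optimal_mix(lambda h: h ** 2)
# Linear help cost: marginal cost is positive already at h = 0.
_, e2, h2 = optimal_mix(lambda h: 0.8 * h)
```

With the quadratic cost the optimum includes a strictly positive amount of help; with the linear cost the first unit of help already fails a cost-benefit test, so the agent specializes completely.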
With much information classified, it is impossible to know how much intra-bureaucracy competition contributed to intelligence failure. But whether it did or not, it is worth ensuring that good mechanisms for cooperation are in place.
The safeguards that are employed in airport security policy are found using the “best response dynamic”: Each player chooses the optimal response to their opponent’s strategy from the last period. So, the T.S.A. best-responds to the shoe bomber Richard Reid and a terrorist plot to blow up planes with liquid explosives. We end up taking our shoes off and having tiny tubes of toothpaste in Ziplock bags. So, a terrorist best-responds by having a small device divided into constituent parts and hidden in his underwear. One part has to be injected into another via a syringe and the complications that ensue prevent the successful detonation of the bomb. In this sense, each player is best-responding to the other and the airport security policy, by making it a bit harder to carry on a complete bomb, succeeded with a huge dose of good luck thrown in.
What should we learn from the newest attempt to blow up an airplane?
First and most obviously, the best way to minimize the impact of terrorism is to stop terrorists before they can even get close to us. This appears to be the main failure of security policy in the recent incident – more focus on intelligence and filtering of watch lists is vital. Second, the best response dynamic should not be the only way to inform policy. There are already rumors that no-one will be allowed to walk around for the last hour of the flight or have personal items on their lap. Terrorists will respond to these policies by blowing up planes earlier in flight. Does that make anyone feel any safer or the terrorists less successful? The main problem is that terrorists are thinking up new schemes to get to nuclear power stations, kidnap Americans abroad and other horrible things that should be brainstormed and pre-empted. The best response dynamic is backward looking and cannot forecast these problems or their solutions. This second point is also obvious. The fact that a boy whose father turned him in got on a plane with a bomb suggests that even obvious points are worth making.
If this is all obvious, forgive me; I came late to this (I grew up in Orange County, CA, where it last snowed in December of Yeah Right.)
The first thing to do, obviously is to make a snowball. Your enemy combatant will do the same. You each now have one snowball in your stockpile. What next?
If you throw your snowball you will be unarmed and certain to pay the consequences. So you don’t. Neither does she. You are at a standoff, but very soon you figure out what to do while you wait for the standoff to resolve. Make another snowball. Of course she does the same.
Now you each have an arsenal of two snowballs. Two is very different from one, however, because if you throw your snowball you still have one to defend yourself with. But you will have one fewer than she does. This still puts her at an advantage because once you use your last snowball you are again unarmed. So you will only throw your first snowball if you have a reasonable chance of landing it.
The alternative is to make another snowball. Which of these is the better option depends on what she is expecting. If she knows you will throw, she is prepared to dodge it and then press her advantage. If she knows you will make another one she will wait for you to reach down into the snow when you are most vulnerable and she will draw first blood.
So you have to randomize. So does she. There are two possible outcomes of these independent randomizations. First, one or two snowballs may fly resulting in a sequence of volleys which eventually deplete your stocks down to one or two snowballs left. The second possibility is that both of you increase your stockpile by one snowball.
Thus, equilibrium of a well-played snowball fight gives rise to the following stochastic process. At each stage, with a certain positive probability, the stockpiles both increase by one snowball. This continues without bound until, with the complementary probability in each stage, a fight breaks out depleting both stockpiles and beginning the process again from zero.
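This process is easy to simulate. Here is a minimal sketch (the per-stage fight probability is an illustrative stand-in for the equilibrium mixing probability, and I reset stockpiles to zero rather than one or two for simplicity):

```python
import random

def snowball_process(p_fight, stages, seed=0):
    """Each stage, with probability 1 - p_fight both players quietly add
    a snowball; with probability p_fight a volley breaks out and the
    stockpiles are depleted (reset to zero here for simplicity)."""
    rng = random.Random(seed)
    stock = 0
    peaks = []  # stockpile size at the moment each fight breaks out
    for _ in range(stages):
        if rng.random() < p_fight:
            peaks.append(stock)
            stock = 0
        else:
            stock += 1
    return peaks

peaks = snowball_process(p_fight=0.2, stages=100_000, seed=1)
```

The stockpile at the moment a fight breaks out is geometrically distributed: with a 20 percent chance of a fight each stage, arsenals average four snowballs when the volleys finally fly, and occasionally grow much larger.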
Special mention should be made of a third strategy which is to be considered only in special circumstances. Rather than standing and throwing, you can charge at her and take a shot from close range. This has the obvious advantages but clearly leaves you defenseless ex post. Running away should be ruled out because you will be giving up your entire store of snowballs and eventually you will have to come back. No, the only option at this point is to tackle her, landing you both deep in the snow. With the right adversary, this mutually assured destruction could be the best possible outcome.
From the fantastic blog, Letters of Note.
Circa 1986, Jeremy Stone (then-President of the Federation of American Scientists) asked Owen Chamberlain to forward to him any ideas he may have which would ‘make useful arms control initiatives’. Chamberlain – a highly intelligent, hugely influential Nobel laureate in physics who discovered the antiproton – responded with the fantastic letter seen below, the contents of which I won’t mention for fear of spoiling your experience. Unfortunately, although I can’t imagine the letter to be anything but satirical, I’m uninformed when it comes to Chamberlain’s sense of humour and have no way of verifying my belief. Even the Bancroft Library labels it as ‘possibly tongue-in-cheek’.
According to the Times:
“A 2008 firefight in eastern Afghanistan has become a template for how not to win there, and helps to explain the strategy of Gen. Stanley A. McChrystal, the new commander.”
A new study is being released of the fire-fight. Among other things it says:
“Before the soldiers arrived, commanders negotiated for months with Afghan officials of dubious loyalty over where they could dig in, giving militants plenty of time to prepare for an assault.
Despite the suspicion that the militants were nearby, there were not enough surveillance aircraft over the lonely outpost — a chronic shortage in Afghanistan that frustrated Defense Secretary Robert M. Gates at the time. Commanders may have been distracted from the risky operation by the bureaucratic complexities of handing over responsibility at the brigade level to replacements — and by their urgent investigation of an episode that had enraged the local population, the killing a week earlier in an airstrike of a local medical clinic’s staff as it fled nearby fighting in two pickup trucks.
Above all, the unit and its commanders had an increasingly tense and untrusting relationship with the Afghan people.”
As far as I see from the story in the Times, the report on the firefight examines the mistakes that were made in implementation of a strategy. There are lots of nuances but basically the army unit was meant to set up an outpost and they were not given the intelligence and manpower to do their job effectively. In other words this was a failure of “operations management” or what we might call tactics. We can learn from it in terms of how not to make the same mistake again.
What we cannot learn from it is what our strategy should be in Afghanistan. A strategy here is broader than what we usually mean in game theory. It is a description of an objective function as well as a plan of how to maximize it. (The objective function for Afghanistan is part of a grander objective function for U.S. domestic and foreign policy.) It will suggest questions such as: Should we ensure a stable democracy in Afghanistan? Or should we focus on Al Qaeda? Or have we overreacted to 9/11 overall and we should leave? None of this is answered by the study of the incident in 2008. Hence, we would be wrong to extrapolate from an issue of tactics to an issue of overall strategy. Maybe we decide we do not want outposts at all. Then a counterinsurgency strategy is moot.
Wired reports that the Soviet Union actually had a doomsday device and kept it a secret.
“The whole point of the doomsday machine is lost if you keep it a secret!” cries Dr. Strangelove. “Why didn’t you tell the world?” After all, such a device works as a deterrent only if the enemy is aware of its existence. In the movie, the Soviet ambassador can only lamely respond, “It was to be announced at the party congress on Monday.”
So why was the US not informed about Perimeter? Kremlinologists have long noted the Soviet military’s extreme penchant for secrecy, but surely that couldn’t fully explain what appears to be a self-defeating strategic error of extraordinary magnitude.
The silence can be attributed partly to fears that the US would figure out how to disable the system. But the principal reason is more complicated and surprising. According to both Yarynich and Zheleznyakov, Perimeter was never meant as a traditional doomsday machine. The Soviets had taken game theory one step further than Kubrick, Szilard, and everyone else: They built a system to deter themselves.
By guaranteeing that Moscow could hit back, Perimeter was actually designed to keep an overeager Soviet military or civilian leader from launching prematurely during a crisis. The point, Zheleznyakov says, was “to cool down all these hotheads and extremists. No matter what was going to happen, there still would be revenge. Those who attack us will be punished.”
The logic is a tad fishy. But it is not obvious that you should reveal a doomsday device if you have one. It is impossible to prove that you have one, so if it really had a deterrent effect you would announce you have one even if you don’t. So it can’t have a deterrent effect. And therefore you will always turn it off.
What you should worry about is announcing you have a doomsday device to an enemy who previously was not aware that there was such a thing. It still won’t have any deterrent effect but it will surely escalate the conflict. (via free exchange via Mallesh Pai.)
Jeff Miron writes
If the CIA had convincingly foiled terrorists acts based on information from harsh interrogations, the temptation to shout it from the highest rooftops would have been overwhelming.
Thus the logical inference is that harsh interrogations have rarely, if ever, produced information of value.
Without taking a stand on the bottom-line conclusion, I wonder about the intermediate claim. If, for example, the CIA can document that torture produced critical intelligence, when would be the optimal time to release that information? There are many reasons to wait until an investigation is already underway.
- If it were already in the public record, it would in effect be a sunk cost for prosecutors and have less effect on marginal incentives to go forward.
- Public information maximizes its galvanizing effect when the public is focused on it. Watercooler conversations are easier to start when it is common-knowledge that your cubicle-neighbor is paying attention to the same story you are.
- Passing time makes even public information act less public. Again, it’s not the information per se, but the galvanizing effect of getting the public focused on the same facts. Over time these facts can be spun, not to mention simply forgotten.
I expect that the success stories are there as a kind of poison pill against the investigators. They will reach a point where any further progress will require that the positive results come to light.
In a frightening new paper, Philip Munz, Ioan Hudea, Joe Imad, and Robert J. Smith say NO! It’s such scary news that the BBC covered it.
In their model, Susceptible (S) humans can turn into Zombies (Z) with probability β if they meet each other. But Zombies can also rise from dead susceptibles or the so-called Removed R at rate ς. In a mixed population with no birth, S will definitely shrink. Even if S kills Z at rate α, Z can always reappear from R and never dies off. Hence, we end up in a pure Zombie equilibrium. There is no channel for S to grow and there is a channel for Z to grow and there you have it.
Of course, if there is birth then things change. In their model, the authors look at the case where the (exogenous) birth rate Π is zero. But the birth rate should also depend on the fractions of S and Z in the population. If S is large then there should be frequent S-S encounters. Assume away gender issues for simplicity and these S-S encounters should lead to progeny. Even if the birth rate is low, it is multiplied by S², the chance of an S-S meeting, while the zombie production rate βSZ + ςR is close to ςR if Z is close to zero. If S is large, so that ΠS² > ςR, this stabilizes a good S equilibrium where a small fraction of zombies does not eventually take over.
This is a small trivial extension but with a good title (“Make Love to win the Zombie War”), it would be an interesting sequel.
There is another solution: cremation is better than burial. I’m not an expert on zombies but I strongly suspect a cremated body cannot reappear in zombie form. Then, if we can kill off zombies fast enough (high α), we should be fine. Phew. But while the human race is safe, all individuals are in danger. I will not sleep well tonight.
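A quick Euler-method sketch of these dynamics bears this out (the parameter values are mine and purely illustrative; the birth-rate extension is omitted). Cremation is modeled by destroying defeated zombies outright instead of sending them to the removed class R:

```python
def simulate_szr(beta, alpha, sigma, cremate, S=100.0, Z=10.0, R=0.0,
                 dt=0.01, steps=50_000):
    """Euler-integrate a stylized version of the S-Z-R dynamics above.
    beta: zombification rate, alpha: rate at which S defeat Z,
    sigma: resurrection rate of the removed R. With cremate=True,
    defeated zombies are destroyed for good instead of joining R.
    Parameter values are illustrative, not the paper's."""
    for _ in range(steps):
        dS = -beta * S * Z
        dZ = beta * S * Z + sigma * R - alpha * S * Z
        dR = (0.0 if cremate else alpha * S * Z) - sigma * R
        S = max(S + dt * dS, 0.0)
        Z = max(Z + dt * dZ, 0.0)
        R = max(R + dt * dR, 0.0)
    return S, Z, R

S_nc, Z_nc, R_nc = simulate_szr(0.01, 0.02, 0.05, cremate=False)
S_c, Z_c, R_c = simulate_szr(0.01, 0.02, 0.05, cremate=True)
```

Without cremation the resurrection term ςR keeps the zombies coming back and the humans are eventually wiped out even though α > β; with cremation, the zombies die off and most of the population survives.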
(Hat Tip: PLL)
This is a companion to our Prisoner’s Dilemma Everywhere series.
Bill Clinton just returned from North Korea with the two American journalists who were being held there. Kim Jong-il got his face time with Bill and the U.S. got two citizens back without sanctions or a war. Win-win as we say in business schools?
No, says John Bolton, former Ambassador to the U.N. The previous stand-off was doing no-one any good. Obviously it was bad for the U.S. but it was also bad for North Korea. Possible sanctions might have made it hard for the goodies the elite loves to make their way into North Korea. So, the Clinton-Jong-il meeting dominates the previous situation. But Bolton has an even better situation in mind: Jong-il simply hands over the journalists without us even giving him a face-saving meeting. We threaten them with something (war? sanctions?) and this is enough to give them the incentive to cooperate without us having to give up anything at all. Some might argue we are pretty close to this equilibrium, as a “threat of sanctions plus Clinton visit” amounts to much gain for very little pain.
Whatever the empirical judgements are, the theory is clear – Bolton sees the game as Chicken:
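For readers who want the matrix, here is a sketch with hypothetical payoff numbers (any numbers with the Chicken ordering will do): conceding while the other stands firm is bad, but a mutual standoff is worst of all, so the two pure equilibria each have exactly one side backing down. Bolton's complaint is that the U.S. landed in the equilibrium where it is the one conceding.

```python
# Hypothetical Chicken payoffs: (US action, NK action) -> (US, NK) payoffs.
ACTIONS = ["Concede", "Stand Firm"]
PAYOFFS = {
    ("Concede", "Concede"): (2, 2),
    ("Concede", "Stand Firm"): (1, 3),
    ("Stand Firm", "Concede"): (3, 1),
    ("Stand Firm", "Stand Firm"): (0, 0),  # the mutual standoff: worst for both
}

def pure_nash(payoffs, actions):
    """Return the pure-strategy Nash equilibria: profiles where neither
    player can gain by unilaterally switching actions."""
    eqs = []
    for a in actions:
        for b in actions:
            u1, u2 = payoffs[(a, b)]
            if (all(payoffs[(x, b)][0] <= u1 for x in actions)
                    and all(payoffs[(a, y)][1] <= u2 for y in actions)):
                eqs.append((a, b))
    return eqs

equilibria = pure_nash(PAYOFFS, ACTIONS)
```

The enumeration confirms the Chicken structure: the only pure equilibria are the two asymmetric ones, which is exactly why each side wants to convince the other it will never swerve.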
A few weeks ago, Israeli warships and a nuclear submarine went through the Suez Canal. Israel is signaling that it can come within firing distance of Iran easily:
Israeli warships have passed through the [Suez] canal in the past but infrequently. The recent concentration of such sailings plainly goes beyond operational considerations into the realm of strategic signalling. To reach the proximity of Iranian waters surreptitiously, Israeli submarines based in the Mediterranean would normally sail around Africa, a voyage that takes weeks. Passage through the Suez could take about a day, albeit on the surface and therefore revealed. The Australian
There is a second signal: (Sunni) Egypt is on board with Israel’s focus on preventing the arrival of a nuclear-armed (Shia) Iran. Even Saudi Arabia is alarmed by the growth in the power and influence of its neighbour:
Egypt and other moderate Arab countries such as Saudi Arabia have formed an unspoken strategic alliance with Israel on the issue of Iran, whose desire for regional hegemony is as troubling to them as it is to the Jewish state. There were reports in the international media that Saudi Arabia had consented to the passage of Israeli warplanes through its air space in the event of an attack on Iran’s nuclear facilities but both Riyadh and Jerusalem have denied it. The Australian
International politics makes for strange bedfellows.
1. Bargaining Power of Pirates
Often we know about a ship’s cargo, owners and port of origin before we even board it. That way we can price our demands based on its load. For those with very valuable cargo on board, we contact the media and publicize the capture and put pressure on the companies to negotiate for its release.
2. Bargaining Power of Foreign Negotiators
Armed men are expensive as are the laborers, accountants, cooks and khat suppliers on land. During long negotiations our men get tired and we need to rotate them out three times a week. Add to that the risk from navies attacking us and we can be convinced to lower our demands.
3. Intensity of Competitive Rivalry
The key to our success is that we are willing to die, and the crews are not.
4. The Value of Hostages
Hostages — especially Westerners — are our only assets, so we try our best to avoid killing them. It only comes to that if they refuse to contact the ship’s owners or agencies. Or if they attack us and we need to defend ourselves.
5. The Threat of the Navy
Whenever we reach an agreement for the ransom, we send out wrong information to mislead the Navy about our exact location. We don’t want them to know where our land base is so that our guys on the ship can manage a safe escape. We have to make sure that the coast is clear of any navy ships before we leave. That said, there is no guarantee that we won’t be shot or arrested, but this has only happened once when the French Navy captured some of our back up people after the pirates left the Le Ponnant.
Mindhacks has an interesting article about the use of robots in war. We know the U.S. is using pilotless drones to attack suspected terrorists in the mountain range between Afghanistan and Pakistan. This can save lives and presumably there are technological capabilities that are impossible for a human to replicate. But the possibility of human error is replaced by the possibility of computer error and, Mindhacks points out, even lack of robot predictability.
I went to a military operations research conference to present at a game theory session. Two things surprised me. First, game theory has disappeared from the field. They remember Schelling but are unaware that anything has happened since the 1960s. Asymmetric information models are a huge surprise to them. Second, they are aware of computer games. They just want to simulate complex games and run them again and again to see what happens. Then, you don’t get any intuition for why some strategy works or does not work or really an intuition for the game as a whole. And what you put in is what you get out: if you did not put in an insurgency movement causing chaos then it’s not going to pop out. This is also a problem for an analytical approach where you may not incorporate key strategic considerations into the game. Clichéd “out-of-the-box” thinking is necessary. Even a Mac can’t do it.
So, as long as there is war, men will go to war and think about how to win wars.
(Hat tip: Jeff for pointing out article)
The stakes are formidable. Experts estimate that contraband accounts for 12 percent of all cigarette sales, or about 657 billion sticks annually. The cost to governments worldwide is massive: a whopping $40 billion in lost tax revenue annually. Ironically, it is those very taxes — slapped on packs to discourage smoking — that may help fuel the smuggling, along with lax enforcement and heavy supply. (A pack of a leading Western brand that costs little more than $1 in a low-duty country like Ukraine can sell for up to $10 in the U.K.) That potential profit offers a strong incentive to smugglers.
I have argued that legalization of marijuana would not ease the drug war, and might even intensify it. This series of articles about black market tobacco provides a possible preview of the incentives that would be created by a regulated and taxed market for marijuana. Legalization may just replace the current war on drugs with a battle to protect tax revenues on legal marijuana and to protect monopoly power by legitimate producers.
In sync with increased regulation and taxes on tobacco in recent years, the black market has thickened.
Yet, despite the exposés, the lawsuits, and the settlements, the massive trade in contraband tobacco continues unabated. Indeed, with profits rivaling those of narcotics, and relatively light penalties, the business is fast reinventing itself. Once dominated by Western multinational companies, cigarette smuggling has expanded with new players, new routes, and new techniques. Today, this underground industry ranges from Chinese counterfeiters that mimic Marlboro holograms to perfection, to Russian-owned factories that mass produce brands made exclusively to be smuggled into Western Europe. In Canada, the involvement of an array of criminal gangs and Indian tribes pushed seizures of contraband tobacco up 16-fold between 2001 and 2006.
Salakot salute: Terry Gross.
I guess I am the Tyrone Slothrop of Northwestern University. I’ve been doing research on the theory of the “democratic peace” – the finding that democracies rarely attack each other. This has been called “an empirical law” in international relations. This idea is famous enough that it is offered as a rationalization for spreading democracy by both left- and right-wing politicians.
Why might democracies be more peaceful? And how about a regime like Iran? Fareed Zakaria says: “Iran isn’t a dictatorship. It is certainly not a democracy.” It is something in the middle. There are elections but an elite also controls many things such as the appointment of the Supreme Leader who has enormous power.
I have done some research with David Lucca and Tomas Sjostrom where we offer a theory for why these regimes which we call limited democracies might be the most warlike of all. And the data does suggest that countries like Iran are very warlike, especially when facing a similar limited democracy.
Here is a brief attempt to explain the theory informally; it is done using game theory in the paper. Conflict occurs via a combination of greed and fear, two of the causes of war according to the great Greek historian Thucydides. Each side does not know if the other is motivated by greed or fear. Greedy leaders are hawkish. But, even if one side is not greedy, they turn aggressive because the other side may be greedy. So, both sides become aggressive whether it is because of greed or fear of greed. We study how political institutions can control greed or stimulate fear.
In fact, the logic above is our model of dictatorship where leaders interact with no thought for the wishes of their citizens. It is our pure model of greed and fear. It is inspired by the famous logic of the “reciprocal fear of surprise attack” due to Thomas Schelling.
In a democracy, the voters may punish a leader who starts a war unnecessarily. As leaders want to stay in power, this controls greed. But the voters may also punish a leader who is weak in the face of aggression. This unleashes fear, as democratic leaders turn aggressive lest they look too dovish in an aggressive environment. So, democracies can be peaceful against each other as dovish voters control their leaders. But they can turn aggressive very rapidly if they are concerned their opponent will be aggressive. In a dictatorship, the leader does not fear losing power but no-one controls his greed.
Now, suppose the leader can survive in power if he pleases the voters or if he satisfies a hawkish minority who favor war. This regime has some properties of a democracy – the leader survives in power in the same scenarios as the leader of a full democracy. But he also survives if he starts an unnecessary war – just like a dictator would. The leader only loses power if he is dovish in the face of aggression. Then, neither the average citizen nor the hawks support him. This type of regime which we call a limited democracy is the most aggressive of all. The leader fears losing power and the voters cannot control his greed. So, a little democracy can make things worse if it leads to a regime like this.
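Here is a back-of-the-envelope version of the comparison (the payoff numbers and survival rules are stylized stand-ins of my own, not the model in the paper): a non-greedy leader decides whether to attack given the probability q that his opponent attacks, and the regimes differ only in when the leader loses office.

```python
def attacks(regime, q, war_cost=1.0, weak_cost=2.0, office=3.0):
    """Does a non-greedy leader attack when the opponent attacks with
    probability q? Stylized survival rules: a democrat loses office after
    an unnecessary war or after looking weak; in a limited democracy the
    hawks save a warmonger, so only weakness is punished; a dictator
    always survives. Illustrative numbers, not the paper's model."""
    keep = {
        "dictatorship": lambda own, opp: 1.0,
        "democracy": lambda own, opp: 0.0 if own != opp else 1.0,
        "limited": lambda own, opp: 0.0 if (own, opp) == ("dove", "hawk") else 1.0,
    }[regime]
    u_attack = -war_cost + office * (q * keep("hawk", "hawk")
                                     + (1 - q) * keep("hawk", "dove"))
    u_dove = -q * weak_cost + office * (q * keep("dove", "hawk")
                                        + (1 - q) * keep("dove", "dove"))
    return u_attack > u_dove
```

With these numbers the limited democracy attacks already at modest levels of fear (q = 0.3), while the full democracy and the dictatorship hold back until the threat is much larger, which is the sense in which a little democracy can be the most warlike arrangement of all.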
The theory leads to a bunch of predictions which we try to confirm in data. I took a shot at explaining the ideas in a talk I gave to Kellogg MBAs. The video is here in case you’re interested (you need Real Player to view it). The article is here (you need Adobe Acrobat to view it).
You take all of the conflict, all of the chaos, all of the noise, and out of that comes this precise mathematical distribution of the way attacks are ordered in this conflict. This blew our mind. Why should a conflict like Iraq have this as its fundamental signature? Why should there be order in war? We didn’t really understand that. We thought maybe there is something special about Iraq. So we looked at a few more conflicts. We looked at Colombia, we looked at Afghanistan, and we looked at Senegal.
See the TED talk. (hat tip: The Browser)
Sandeep has previously blogged about the problems with torture as a mechanism for extracting information from the unwilling. As with any incentive mechanism, torture works by promising a reward in exchange for information. In the case of torture, the “reward” is no more torture.
Sandeep focused on one problem with this: the mechanism works only if the torturer will actually carry out his promise to stop torturing once the information is given. But once the information is given, the torturer knows he has a real terrorist – indeed, a terrorist with valuable information. This leads to more torture (for more information), not less. Unless the torturers have some way to tie their hands and stop torturing after a few tidbits of information, the captive soon figures out that there is no incentive to talk and stops talking. A well-trained terrorist knows this from the beginning and never talks.
Let me point out yet another problem with torture. This one cannot be solved even by enabling the torturers to commit to an incentive scheme.
The very nature of an incentive scheme is that it treats different people differently. To be effective, torture has to treat the innocent differently from the guilty. But not in the way you might guess.
Before we commence torturing, we don't know in advance what information the captive has; indeed, we don't know for sure that he is a terrorist at all, even if we are pretty confident. A captive who really has no information is not going to talk. Or if he does, he is not going to give any valuable information, no matter how much he would like to squeal and stop the torture.
And of course the true terrorist knows that we don't know for sure that he is a terrorist. He would like to pretend that he has no information, in hopes that we will conclude he is innocent and stop torturing him. Therefore the torturer must ensure that the captive, if he is indeed an informed terrorist, won't get away with this. With torture as the incentive mechanism, the only way to do this is to commit to torture for an unbearably long time if the captive doesn't talk.
And this leads us to the problem. In the face of this, the truly informed terrorist begins talking right away in order to avoid the torture. The truly innocent captive cannot do that no matter how much he would like to. And so torture, if it is effective at all, necessarily inflicts unbearable suffering on the innocent and very little suffering on the actual terrorists.
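A toy calculation makes the point starkly (the parameters are mine, purely illustrative). For the committed threat to make an informed terrorist talk, the sentence of T periods must be worse than the value of the secret – and the innocent captive, with nothing verifiable to say, sits through all of it:

```python
# Illustrative numbers, not from any real case:
T = 20                 # committed periods of torture if the captive stays silent
pain_per_period = 1.0
value_of_secret = 8.0  # hypothetical value to a terrorist of keeping silent

def informed_talks():
    # The threat separates the types only if the committed sentence
    # is worse than giving up the secret.
    return T * pain_per_period > value_of_secret

def suffering(captive_type):
    if captive_type == "informed terrorist":
        # Facing an unbearable committed sentence, he talks right away
        # and avoids the sentence entirely.
        return 0.0 if informed_talks() else T * pain_per_period
    # The innocent captive has nothing verifiable to say,
    # so he endures the full committed sentence.
    return T * pain_per_period

print(suffering("informed terrorist"))  # 0.0
print(suffering("innocent"))            # 20.0
```

The scheme works only when T is large, so the suffering falls entirely on the innocent.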
From Michael Schwarz:
A Russian soldier comes home after years as a POW in Afghanistan. He tells his story: “I was cold, hungry, beaten, tortured and interrogated every day.” Asked if he confessed to anything, the soldier says, “Not a word, they would beat me and beat me but I simply told them again and again I do not know how the AK47 is designed. They got nothing out of me.”
“Very good,” his commanders said, pleased. They asked the soldier if he had any words of advice for the new recruits, and he replied, “Yes. Pay close attention when they teach you the design of the AK47.”
The main logic of torture is to inflict so much pain that the victim reveals all his information to make the pain stop. Incentives for truth-telling in this situation are eerily similar to those in the bank stress tests.
All banks want to report that they are healthy. To distinguish the lying sick banks from the healthy ones there has to be some verifiable information. Healthy ones have this information (e.g. they passed the stress test) and the sick ones do not. The healthy banks have to have the incentive to reveal the information. This is all too clear for the healthy banks: by revealing their results they can avoid bank runs, get liquid, start lending etc.
In the torture analogy, a healthy bank is an informed terrorist with real information of an attack and a sick bank is someone, say an uninformed terrorist, with no information. The assumption of the pro-torture people is that the informed terrorist will have the incentive to report his information to avoid pain. But an uninformed terrorist has the same incentive. To tell one from the other, the informed terrorist’s information has to be verifiable. For example, there has to be “chatter” on Al Qaeda websites that can be used to cross-check the veracity of the torture victim’s confession. If this information is out there anyway, one might ask why torture was necessary in the first place. Presumably, the information is vague or ambiguous. The torture victim’s information brings some clarity.
This process seems heavily error-prone. False confessions may cross-check by accident. The information may be so noisy that a very weak lead is thought to be strong. The more noise there is, the more the victim's report becomes uninformative cheap talk – it contains no true information, as informed and uninformed terrorists alike give information, false and true and impossible to distinguish.
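A back-of-the-envelope Bayes calculation (toy numbers mine) shows how noise drains the confession of information. Suppose a confession "cross-checks" against the chatter with probability q_true if genuine and q_false if fabricated by an uninformed captive:

```python
prior = 0.5  # prior probability the captive is informed (my assumption)

def posterior_informed(q_true, q_false, prior=prior):
    """P(captive is informed | his confession cross-checks), by Bayes' rule."""
    num = q_true * prior
    return num / (num + q_false * (1 - prior))

print(posterior_informed(0.9, 0.1))  # clean chatter: posterior 0.9
print(posterior_informed(0.9, 0.8))  # noisy chatter: posterior ~0.53
print(posterior_informed(0.9, 0.9))  # pure noise: posterior 0.5 = the prior
```

As the noise rises (q_false approaching q_true), a cross-checked confession tells the interrogator nothing beyond what he believed before the torture started.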
There is a second problem. Once a healthy bank releases the stress test information, the game is over – the market has the information and responds accordingly. A torture victim faces further torture. There is no way for the interrogator to commit to stop torturing. If the victim knows this beforehand, he lies in the first place, as talking cannot stop the torture. If he finds this out between episodes of waterboarding, he might start lying to contradict his earlier true confessions.
The efficacy of torture relies on verifiability of information and the ability of the torturer to commit to stop if good information is revealed. Both properties are hard if not impossible to satisfy in practice.
Legalization of marijuana has gained some momentum recently in terms of conspicuous support in the press, expansion of medical marijuana freedoms, and relaxation of enforcement (see especially this article.) The argument is often made that the tremendous expense in terms of lives and money of the war on drugs does not justify whatever moral benefit there is of minimizing drug use.
But legalization would only make the drug war more costly. The reason is simple: legalized pot does not reduce the incentive of the government and its lobbyists to fight the illegal market; in fact, it only adds to that incentive.
The effort spent prosecuting the war on drugs is determined by a balance between the marginal cost of additional enforcement and the marginal benefit of reduced consumption. When pot is illegal, that marginal benefit comes from the moral and cultural virtue (as perceived by lobbyists). When pot is legal, the marginal cost of enforcement is the same, but to the moral benefit is added the financial stake in licensing legal producers and taxing consumption.
The government’s revenue from licensing and taxation of marijuana sales relies on foreclosing the black market which would not be subject to taxation and therefore would clear at a lower price. Government policy under legalized marijuana would be shaped by basic economics. The effort in fighting the black market imposes a cost on illicit producers which acts effectively as a tax. The level of that tax determines the market price in the black market and this is the maximum price that the legal market can sustain. If we start with the level of enforcement currently in place, this translates to a certain tax revenue that the government would earn were it to legalize pot and keep enforcement at its current level.
But raising the level of enforcement would allow the government to raise taxes on the legal trade. Since the marginal cost of enforcement was already equal to the marginal benefit based on moral considerations, the additional marginal benefit from increased taxes means that the government will increase enforcement.*
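Here is a toy version of that marginal calculation (the functional forms and numbers are my own assumptions, chosen only to illustrate the comparative static):

```python
# Government picks enforcement e to maximize net benefit. Under prohibition
# the benefit is only the (perceived) moral gain; under legalization, tax
# revenue is added, since heavier enforcement props up the black-market
# price and so supports a higher tax on the legal trade.

def optimal_enforcement(include_tax_revenue, grid=None):
    grid = grid or [i / 100 for i in range(0, 301)]  # search e in [0, 3]
    def net_benefit(e):
        moral = 4 * e ** 0.5                          # diminishing moral benefit
        tax = 3 * e ** 0.5 if include_tax_revenue else 0.0
        cost = e ** 2                                 # convex enforcement cost
        return moral + tax - cost
    return max(grid, key=net_benefit)

print(optimal_enforcement(False))  # optimal enforcement under prohibition
print(optimal_enforcement(True))   # higher optimal enforcement under legalization
```

Because the moral marginal benefit alone already justified the old enforcement level, the extra fiscal marginal benefit pushes the optimum strictly higher.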
If you favor legalization of marijuana for libertarian reasons (or if you are just hoping for cheaper weed), you should instead push for a relaxation of enforcement without decriminalization (as the Obama administration is reportedly acquiescing to). Decriminalization would create a vested interest in the drug war that would be hard to undo.
(*In theory it is possible that, say, excise tax revenue would be raised by increasing consumption at the margin rather than decreasing it. This would be true only if the current level of enforcement already holds the black-market price above the monopoly price. It is hard to believe that this is true today.)
This story reports that Pakistan’s secret service, the ISI, puts different terrorist groups into different categories:
American officials said that the S Wing provided direct support to three major groups carrying out attacks in Afghanistan: the Taliban based in Quetta, Pakistan, commanded by Mullah Muhammad Omar; the militant network run by Gulbuddin Hekmatyar; and a different group run by the guerrilla leader Jalaluddin Haqqani.
Dennis C. Blair, the director of national intelligence, recently told senators that the Pakistanis “draw distinctions” among different militant groups.
“There are some they believe have to be hit and that we should cooperate on hitting, and there are others they think don’t constitute as much of a threat to them and that they think are best left alone,” Mr. Blair said.
The Haqqani network, which focuses its attacks on Afghanistan, is considered a strategic asset to Pakistan, according to American and Pakistani officials, in contrast to the militant network run by Baitullah Mehsud, which has the goal of overthrowing Pakistan’s government.
Note that the main distinction is whether the terrorism is aimed inwards into the country or outwards against others, as my earlier post suggests.
When will the median voter in a country support terrorist activity? It depends on whether the terrorism is directed inwards into the country, or outward against an opponent.
For example, the terror acts emanating from Pakistan and directed towards India or vice-versa might be supported by the average citizen in each country. India and Pakistan have a Cold War mentality that makes the average citizen hostile towards the other nation. Democratic leaders who fight cross-border terrorism may alienate the voters. A dictator can survive in power even without the average citizen’s approval. This implies that an outsider like the U.S. which does not want cross-border terrorism (perhaps it also generates attacks on the U.S.) favors a dictatorship in Pakistan over democracy. This is the kind of rationale behind a preference for Musharraf over Nawaz Sharif.
But there is a second effect. If a leader starts fighting terrorists, they can turn their violence inwards. Al Qaeda in Iraq started fighting not only the US forces but also attacking the population. Eventually, the population turned on Al Qaeda in Iraq. At that point, a democratic leader has a better incentive to control terrorism than a dictator. The dictator may fear for his own life and is not subject to election. He has every incentive not to eliminate terrorism: if he deals with it too effectively, he eliminates his main reason for being in power in the first place. A democratic leader has to respect the wishes of the average citizen to survive in power. If the average citizen suffers from inward-directed terrorism, a democratic leader has to deal with it to survive. This effect favors democracy over dictatorship if the objective is to eliminate terrorism.
There are two countervailing effects even in this simple theory. In any program of democratization to reduce terrorism, we have to make sure the median citizen in the country being democratized shares our preferences. This is the simple fact that was overlooked when Palestinian elections were encouraged and the American administration was surprised when Hamas won.
What is the incentive of the Pakistani government to catch terrorists, and how does it depend on how democratic the government is?
A democratic leader’s incentives are driven by the desire to get re-elected. Suppose voters vote retrospectively – that is they are backward looking and punish the leader for bad performance. (This can be made forward-looking by adding some story about political competence revealed by performance.)
If terrorism adversely affects the “voters” but voting is not occurring as the country is a dictatorship, it’s optimal for the U.S. to promote democratization. A leader motivated by re-election has better incentives to reduce terrorism. But if terrorists are supported by the median voter, there is no incentive to promote democratization. In fact, if the dictator is threatened by terrorists, it is better to have a dictator in place.
So, a realist perspective suggests only partial support for spreading democracy. The “model” above is very simple but would already suggest checking the preferences of the average voter before pursuing democratization. Hamas anyone?
This is only a sketch but there are all sorts of more subtle incentive issues that come out of it. Future posts. Maybe Jeff can get in on the game?
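For what it's worth, the core decision rule of this simple model fits in a few lines (the labels and structure are mine – a caricature of the sketch above, not the model itself):

```python
def fights_terrorism(regime, median_voter_opposes_terror, terror_threatens_leader=False):
    """Does the leader fight terrorism? A caricature of the sketch above."""
    if regime == "democracy":
        # Retrospective voters punish bad performance, so an elected leader
        # fights terror exactly when the median voter wants it fought.
        return median_voter_opposes_terror
    # A dictator ignores the voters; he fights only if the terrorists
    # threaten him directly.
    return terror_threatens_leader

# Cross-border terror backed by the median voter (the Pakistan-India case):
print(fights_terrorism("democracy", median_voter_opposes_terror=False))       # False
print(fights_terrorism("dictatorship", False, terror_threatens_leader=True))  # True
# Inward-directed terror the median voter hates (the Iraq case):
print(fights_terrorism("democracy", median_voter_opposes_terror=True))        # True
print(fights_terrorism("dictatorship", True, terror_threatens_leader=False))  # False
```

The two branches are exactly the two countervailing effects: democratization helps only when the median voter is on your side.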
The New York Times describes the Israeli strategy in the recent war in Gaza as follows:
The Israeli theory of what it tried to do here is summed up in a Hebrew phrase heard across Israel and throughout the military in the past weeks: “baal habayit hishtageya,” or “the boss has lost it.” It evokes the image of a madman who cannot be controlled.
“This phrase means that if our civilians are attacked by you, we are not going to respond in proportion but will use all means we have to cause you such damage that you will think twice in the future,” said Giora Eiland, a former national security adviser.
It is a calculated rage. The phrase comes from business and refers to a decision by a shop owner to cut prices so drastically that he appears crazy to the consumer even though he knows he has actually made a shrewd business decision.
I think the word “consumer” should be replaced by “entrant” for this passage to make complete sense – consumers like lower prices, entrants do not. Then the Israeli strategy becomes the classic story of predation: when an entrant dares to enter a market, the incumbent may want to prove he is “tough,” cut his price drastically, and drive the entrant out. This also helps the entrant “to think twice in the future,” as they say above, and deters future entry. But this assumes the entrant has nothing to prove. Hamas also wants to prove it's tough. If it backs off now, Israel will learn that Hamas is soft and will surely press the advantage in a future war. So Hamas has the same incentives as Israel and will not back down. That is, the possibility of future war and the reputation each player wants to carry into that war make both players tougher. So the war can be very, very terrible. For a preliminary model along these lines, see my paper “Reputation and Conflict” with Tomas Sjöström. For the Prime Minister, this is “el harb el majnouna,” the mad or crazy war. And it's all, unfortunately, quite rational:
Shlomo Brom, a researcher at the Institute for National Security Studies at Tel Aviv University and a retired brigadier general, said it was wrong to consider Hamas a group of irrational fanatics.
“I have always said that Hamas is a very rational political movement,” he said. “When they use suicide bombings, for example, it is done very consciously, based on calculations of the effectiveness of these means. You see, both sides understand the value of calculated madness. That is one reason I don’t see an early end to this ongoing war.”
I say unfortunately because I hope (irrationally?!) that rational behavior can be taught and irrationality eliminated. But if crazy behavior is rational, what are we to do?
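If it helps, the two-sided predation logic reduces to a trivially small calculation (numbers mine, purely illustrative): each side compares the cost of fighting today with the cost of being exploited tomorrow once the opponent learns it is soft.

```python
# Hypothetical costs, chosen only to make the comparison visible:
cost_fight_now = 3.0   # cost of fighting today and keeping one's reputation
cost_exploited = 10.0  # future cost once the opponent learns you are soft

def best_action():
    payoff_fight = -cost_fight_now      # pay now, preserve the tough reputation
    payoff_back_down = -cost_exploited  # reveal softness, be exploited later
    return "fight" if payoff_fight > payoff_back_down else "back down"

for side in ["Israel", "Hamas"]:
    print(side, "->", best_action())
```

Since both sides face the same comparison, both fight – calculated madness on each side, and no early end to the war.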
This show is meant to be related to my research so I'm trying to get into it. But it's really hard! There are sequences with kinetic violence. They remind me of the Michael Mann movie Heat (better than his later movie Miami Vice!). But it's so strategically unsophisticated it gets boring. Yesterday's main dilemma was whether to get a double agent to reveal his information by threatening his innocent wife and kid. Nice people say No but Jack Bauer says Yes. That's the usual dilemma explored by 24, apparently one of John McCain's favorite shows: do we have to become as bad as the terrorists to beat the terrorists? It would be nice if sometimes Bauer were wrong and a “be nice” strategy paid off. I haven't seen too many episodes – I got bored last season and have been more focused on cooking shows and referee reports this season. But are there any episodes with moral or strategic complexity beyond the obvious dilemma I described?