You are a poor pleb working in a large organization. Your career has reached a stage where you are asked to join one of two divisions, division A or division B. You can’t avoid the choice even if you prefer the status quo – dodging it would be bad for your career. Each division is controlled by a boss. Boss A is sneaky and self-serving. Perhaps he is “rational” in the parlance of economics. Better still, perhaps his strategy is quite transparent to you after a brief chat, so you can predict his every move. He is the Devil you know. Boss B might be rational, or might be somewhat altruistic and have your best interests at heart. He is the Devil you don’t know. Neither boss is going anywhere soon, and you have no realistic chance of further advancement. You will be interacting frequently with the boss of the division you choose.
Which division should you join?
You face a trade-off, it seems. If you join division A, it is easier for you to play a best response to boss A’s strategy – you can pretty much work out what it is. If you join division B, it is harder, but the very fact that you don’t know can help your strategic interaction.
For example, suppose you are playing a game where “cooperation” is not an equilibrium when it is common knowledge that both players are rational – the classic story is the Prisoner’s Dilemma. Then incomplete information might help you cooperate. If you do not cooperate, you reveal that you are rational and the game collapses into joint defection. If you cooperate, you might be able to sustain cooperation well into the future (this is the famous reputation work of Kreps, Milgrom, Roberts and Wilson).
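To see why common knowledge of rationality collapses the one-shot game, here is a minimal sketch of the Prisoner’s Dilemma (the payoff numbers are a standard illustrative choice, not from the post):

```python
# One-shot Prisoner's Dilemma payoffs: (row player, column player).
# "C" = cooperate, "D" = defect. Numbers are illustrative.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_action):
    """Row player's best response against a fixed opponent action."""
    return max(["C", "D"], key=lambda a: payoffs[(a, opponent_action)][0])

# Defection is a best response to either action, so when rationality is
# common knowledge the one-shot game collapses to (D, D) with payoff (1, 1).
assert best_response("C") == "D"
assert best_response("D") == "D"
```

With a small probability that one player is an irrational "always cooperate" type, the Kreps–Milgrom–Roberts–Wilson argument shows a rational player may mimic that type early in a long repeated game, which is exactly the value of being the Devil you don’t know.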
On the other hand, if you are playing a pure coordination game, this logic is less useful. All you care about is the action the other player is going to take, and you want to play a best response to it. So the division you should join depends on the structure of the eventual boss-pleb game.
Perhaps this question can be framed so that the existing reputation and game theory literature tells us when incomplete information should be welcomed by the pleb – so you should play with the Devil you don’t know – and when it is harmful, so you should play with the Devil you know.
4 comments
July 7, 2010 at 6:13 pm
misterxroboto
it seems that deception plays a key part here. with boss A, you’re pretty clear that, even if he defects, you know how to play it.
however, with boss B, he could give you the impression of one thing, only for you to discover later that he’s the opposite.
worst case scenario:
you work for boss B and he signals to you that he is cooperating. you play an iterated game, leading you to cooperate as well for mutual gain. however, unknown to you, boss B is deceptive. he is giving you the impression that he is cooperating, but is in fact defecting.
best case scenario:
you work for boss B who actually is cooperative. you play an iterated game and work for mutual benefit, even as a collective agent against boss B’s collective agent.
deciding factors between them:
a) your ability to judge boss B’s deceptive powers
b) your ability to play strategically against both bosses.
if there is no deception?
worst case scenario is that boss B is identical to boss A, but you don’t learn that immediately. you must accept the sunk cost in the beginning.
July 7, 2010 at 7:16 pm
Dagon
Wait – that’s not right. Your lack of knowledge can never improve your situation. In some symmetric cases mutual lack of knowledge leads to preferable outcomes over mutual knowledge (though even this is in contention, and there are rational decision theories that lead to cooperation even for one-shot PD).
That does _NOT_ imply that your knowledge is harmful to you, only that the compensation of the other side’s lack of knowledge in the mutual case can outweigh the cost of your own.
In all cases, standard calculations of expected value should suffice. Sum the chance that you will play a given subgame times the value of that subgame (which itself is the weighted-by-probability sum of outcomes of strategies that will be used against your strategy).
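A minimal sketch of the calculation Dagon describes (the subgame probabilities and values here are invented for illustration; in practice each subgame value would itself be a probability-weighted sum over the opponent's strategies):

```python
# Expected value of a choice = sum over subgames of
# P(reach that subgame) * value of playing it.
subgames = [
    # (probability of this subgame, expected value of playing it)
    (0.6, 4.0),  # e.g. the boss turns out to be cooperative
    (0.4, 1.0),  # e.g. the boss turns out to be a defector
]

expected_value = sum(p * v for p, v in subgames)
print(expected_value)  # 0.6 * 4.0 + 0.4 * 1.0 = 2.8
```

Comparing this number for each division is the "standard calculation" in question; the debate in the thread is really about where the probabilities and subgame values come from.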
July 7, 2010 at 7:39 pm
misterxroboto
[Wait – that’s not right. Your lack of knowledge can never improve your situation.]
you’re right that your lack of knowledge itself is never good. but it can work instrumentally in your favor.
i think all else equal, this is true, but it’s not categorical. take, for example, someone who is not rational and always cooperates.
given the choice between that person and a person who always defects, you are going to rationally choose the cooperator assuming you want to maximize your own utility.
now add the knowledge component: you must choose between an unknown opponent and a known defector.
the only question becomes the probability that your unknown opponent is each type.
July 13, 2010 at 4:33 pm
twicker
Some quick thoughts:
1) Re: misterxroboto and “someone who is not rational and always cooperates” — hold on a moment: why would “always cooperating” be not rational? We have pretty good data that indicate that people who “always cooperate,” a.k.a. “consistent contributors,” achieve both personal and group outcomes that are as good as, and often better than, the outcomes achieved by non-consistent contributors. While it doesn’t make sense for Homo economicus, it does appear to work this way for Homo sapiens sapiens. Cf. Weber & Murnighan (2008), “Suckers or saviors? Consistent contributors in social dilemmas,” Journal of Personality and Social Psychology, 95(6), 1340–1353.
2) Re: Dagon and “Your lack of knowledge can never improve your situation.” As you put it: wait — that’s not right. Given (again) that we’re Homo sapiens sapiens, and thus are subject to severe anchoring/priming effects (along with in-group/out-group effects, etc.), a lack of knowledge can be *crucial* to improving your situation.
I’ll reverse the situation for a moment, and describe an actual situation I was in more than once as a teacher and as a camp counselor (back when I was both of those things).
Two of these times, I was teaching astronomy at Scout camp. In this class, I had several students, including a few who would sit at the front of the class, always engaged, listening to everything, and trying to understand everything. Lots of questions, lots of great interaction, etc. Awesome students.
During one week, I kept hearing about “The Devil Spawn,” a camper who was terrorizing every counselor he came in contact with. His infamy grew with every passing day, and I was absolutely thrilled that I apparently had nothing to do with this hellion. Then, on the next-to-last day, I was at the camp store and a Scout came up to buy a Coke or something. People immediately started whispering that this was him — the hellion — the Devil Spawn. I turned around, and … saw one of my star students, someone who was, quite simply, great to have in my class.
Do you honestly believe that I would have done better to have “known” him before, to have had the information that he hated most of the rest of camp and caused havoc in other classes? Do you think he would have benefited from my knowing him to be a major ADHD behavior problem? If I had been told to “watch out” for this boy, do you think he would be better off knowing that I was warned about him and warned to treat him as a threat? Personally, I don’t think so — I think we were both far, far better off not knowing anything about the other, so we could make our decisions on the basis of *our* interaction, and not simply on the basis of *other* interactions.
He wasn’t the only one: another time, a Scoutmaster came up and apologised to me for having a particular Scout in my class. I stared blankly at the Scoutmaster; the Scout wasn’t my best student, but he was absolutely no problem: did his work, participated, listened, etc. As a teacher, I often had people ask me how I could stand to work with X, or Y, or Z — which produced more blank stares, because I, being blissfully unaware that these folks were “bad,” instead expected them to be good (note that I was *always* ready to take away my respect for them if they lost it). The students lived up to my expectations, and also to the expectations of everyone who told them they were “bad.” I’m eternally grateful I didn’t have that knowledge a priori.
All this leads me to one of my biggest problems with this particular game scenario: it assumes that we “know” Devil A, when, in fact, we *don’t* “know” how Devil A will react to us as individuals, nor do we know what other changes in the context will result from our presence that will change Devil A, nor, for that matter, do we know if what we “know” is actually a constant or the result of people setting their expectations one way and getting what they expected (Google Scholar the “Pygmalion Effect” for more on this issue). You *do* have some information about Devil A, and it would be ridiculous not to use it; just don’t assume that you actually *know* Devil A.
If Devil A is known, by people I trust, to be a bad person, then I’d definitely go with Devil B. Conversely, if Devil A is known to be a good boss, then it’s more of a toss-up. If Devil B was awful, we’d have heard about it; therefore, s/he isn’t awful, though s/he may not be awesome. Awfulness news spreads faster than awesomeness news.