It’s easy to make up just-so stories explaining differences across siblings as being caused by birth order. This article casts doubt on the significance of birth order.
But we can ask the question of whether birth order should matter and in what ways. Should natural selection imply systematic differences between older and younger siblings? Here is one argument that it should. Siblings “share genes” and as a consequence siblings have an evolutionary incentive to help each other. Birth order creates an asymmetry in the ways that different siblings can help each other. In particular, oldest siblings learn things first. They are the first to experiment with different survival strategies. The results of these experiments benefit all of the younger siblings. (Am I a good hunter? If so, my siblings are likely to be good hunters too.) Younger siblings have less to offer their older siblings on this dimension.
As a result we should expect older siblings to be more experimental than their younger siblings and more experimental than only children.
Here is evidence that older siblings have more years of education than younger siblings and more years of education than only children.
Tennis scoring differs from basketball scoring in two important ways. First, in tennis, points are grouped into games (and games into sets) and the object is to win games, not points. If this were the only difference, then it would be analogous to the difference between a popular vote and the electoral college in US Presidential elections.
The other difference is that in basketball the team with the highest score at the (pre-determined) end of the game wins, whereas in tennis winning a game requires a pre-specified number of points and you must win by two. The important difference here is that in tennis you know which are the decisive points whereas in basketball all points are perfect substitutes.
To assess statistically whether tennis’s scoring system favors the stronger or the weaker player (relative to a cumulative system like basketball’s), we could do the following. Count the total number of points won by each player in decisive and non-decisive points separately (perhaps dividing the sample first according to who is serving). First ask whether the score differential differs between these two scenarios. One would guess that it does, and that the stronger player has a larger advantage in the decisive points. (According to my theory, the reason is that the stronger player can spend less effort on the non-decisive points and still be competitive, thus reserving more effort for the decisive points.) Call this difference-in-differentials the decisiveness effect.
Then compare matches pitting two equal-strength players with matches pitting a stronger player against a weaker player. Ask whether the decisiveness effect is larger when the players are unequally matched. If so, then that would suggest that grouped scoring accentuates the advantage of the stronger player.
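On point-level data, the decisiveness-effect computation is simple. Here is a sketch in Python; the sample data and the resulting 0.5 effect are made up purely for illustration:

```python
# Each point is a (decisive, winner) pair: decisive is True for
# game/set/match points, winner is "A" or "B".

def win_differential(points):
    """Player A's share of points won minus player B's share."""
    a = sum(1 for _, w in points if w == "A")
    return (2 * a - len(points)) / len(points)

def decisiveness_effect(points):
    """Difference-in-differentials: A's edge on decisive points
    minus A's edge on non-decisive points."""
    decisive = [p for p in points if p[0]]
    routine = [p for p in points if not p[0]]
    return win_differential(decisive) - win_differential(routine)

# Hypothetical sample: A wins 3 of 4 decisive points
# but only 5 of 10 routine points.
sample = [(True, "A")] * 3 + [(True, "B")] * 1 \
       + [(False, "A")] * 5 + [(False, "B")] * 5
print(decisiveness_effect(sample))  # 0.5 - 0.0 = 0.5
```

Comparing this statistic across evenly and unevenly matched pairs is then the difference-in-differences test described below.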
The US Open is here. From the Straight Sets blog, food for thought about the design of a scoring system:
A tennis match is a war of attrition that is won after hundreds of points have been played and perhaps a couple of thousand shots have been struck. On top of that, the scoring system also very much favors even the slightly better player.
“It’s very forgiving,” Richards said. “You can make mistakes and win a game. Lose a set and still win a match.”
Fox said tennis’s scoring system is different because points do not all count the same.
“Let’s say you’re in a very close match and you get extended to set point at 5-4,” Fox said, referring to a best-of-three format. “There may be only four or five points separating you from your opponent in the entire match. And yet, if you win that first set point, you’ve essentially already won half the match. Half the match! And not only that — your opponent goes back to zero. They have to start completely over again. And the same thing happens in every game, not just each set. The loser’s points are completely wiped out. So there are these constant pressure points you’re facing throughout the match.”
There are two levels at which to assess this claim, the statistical effect and the incentive effect. Statistically, it seems wrong to me. Compare tennis scoring to basketball scoring, i.e. cumulative scoring. Suppose the underdog gets lucky early and takes an early lead. With tennis scoring, there is a chance to consolidate this early advantage by clinching a game or set. With cumulative scoring, the lucky streak is short-lived because the law of large numbers will most likely eradicate it.
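One way to check the statistical claim is a quick Monte Carlo under simplifying assumptions: the better player wins each point independently with probability p, serve is ignored, and sets are first to six games with no tiebreak. A sketch:

```python
import random

def point(p):
    """The better player wins each point independently with probability p."""
    return random.random() < p

def game(p):        # at least four points, win by two
    a = b = 0
    while max(a, b) < 4 or abs(a - b) < 2:
        if point(p): a += 1
        else: b += 1
    return a > b

def tennis_set(p):  # first to six games, win by two (no tiebreak)
    a = b = 0
    while max(a, b) < 6 or abs(a - b) < 2:
        if game(p): a += 1
        else: b += 1
    return a > b

def match(p, sets_to_win=2):
    a = b = 0
    while max(a, b) < sets_to_win:
        if tennis_set(p): a += 1
        else: b += 1
    return a > b

def cumulative(p, n=201):  # basketball-style: most points out of n wins
    wins = sum(point(p) for _ in range(n))
    return wins > n - wins

random.seed(0)
trials = 2000
p = 0.55
print("grouped:   ", sum(match(p) for _ in range(trials)) / trials)
print("cumulative:", sum(cumulative(p) for _ in range(trials)) / trials)
```

Varying p and the cumulative match length n lets you ask directly which system better insulates the favorite from the underdog’s lucky streaks.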
The incentive effect is less clear to me, although my instinct suggests it goes the other way. Being a better player might mean that you are able to raise your level of play in the crucial points. We could think of this as having a larger budget of effort to allocate across points. Then grouped scoring enables the better player to know which points to spend the extra effort on. This may be what the latter part of the quote is getting at.
Via kottke.org, an article in New Scientist on the mathematics of gambling. One bit concerns arbitrage in online sports wagering.
Let’s say, for example, you want to bet on one of the highlights of the British sporting calendar, the annual university boat race between old rivals Oxford and Cambridge. One bookie is offering 3 to 1 on Cambridge to win and 1 to 4 on Oxford. But a second bookie disagrees and has Cambridge evens (1 to 1) and Oxford at 1 to 2.
Each bookie has looked after his own back, ensuring that it is impossible for you to bet on both Oxford and Cambridge with him and make a profit regardless of the result. However, if you spread your bets between the two bookies, it is possible to guarantee success (see diagram, for details). Having done the calculations, you place £37.50 on Cambridge with bookie 1 and £100 on Oxford with bookie 2. Whatever the result you make a profit of £12.50.
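The arithmetic is easy to verify. A minimal check, where odds of “a to b” pay a/b per unit staked plus the returned stake:

```python
def payoff(stake, odds):
    """Gross return on a winning bet at odds (a, b), i.e. 'a to b':
    winnings of stake * a/b plus the stake back."""
    a, b = odds
    return stake * (1 + a / b)

bet_cambridge = 37.50           # with bookie 1 at 3 to 1
bet_oxford = 100.00             # with bookie 2 at 1 to 2
total_staked = bet_cambridge + bet_oxford

if_cambridge_wins = payoff(bet_cambridge, (3, 1)) - total_staked
if_oxford_wins = payoff(bet_oxford, (1, 2)) - total_staked
print(if_cambridge_wins, if_oxford_wins)  # 12.5 12.5
```

Either way the race goes, the gross return is £150 against £137.50 staked: a riskless £12.50.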
I can verify that arbitrage opportunities abound. In my research with Toomas Hinnosaar on sorophilia, we investigated an explanation involving betting. In the process we discovered that many online bookmakers often quote very different betting lines for basketball games.
How could bookmakers open themselves up to arbitrage and still stay in business? Here is one possible story. First note that, as mentioned in the quote above, no one bookmaker is subject to a sure losing bet. The arbitrage involves placing bets at two different bookies.
Now imagine you are one of two bookmakers setting the point spread on a Clippers-Lakers game and your rival bookie has just set a spread of Lakers by 5 points. Suppose you think that is too low and that a better guess at the spread is Lakers by 8 points. What spread do you set?
Lakers by 6. You create an arbitrage opportunity. Gamblers can place two bets and create a sure thing: with you they take the Clippers and the points. With your rival they bet on the Lakers to cover. You will win as long as the Lakers win by at least 7 points, which is favorable odds for you (remember, you think that Lakers by 8 is the right line). Your rival loses as long as the Lakers win by at least 6 points, which is unfavorable odds for your rival. You come away with (what you believe to be) a winning bet and you stick your rival with a losing bet.
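To see the sure thing, assume for simplicity that spread bets pay even odds (a real book charges a vig, which shrinks the opportunity) and that a result landing exactly on the spread is a push, with the stake refunded. Enumerating the Lakers’ possible margins:

```python
def settle(margin, spread, side, stake):
    """Even-odds point-spread bet. side='fav' wins if the favorite's
    margin exceeds the spread, side='dog' wins if it falls short;
    landing exactly on the spread refunds the stake (a push)."""
    if margin == spread:
        return 0
    won = margin > spread if side == "fav" else margin < spread
    return stake if won else -stake

B = 100
# With me: Clippers plus the points (dog at a spread of 6).
# With my rival: Lakers to cover (fav at a spread of 5).
for margin in range(-20, 21):
    net = settle(margin, 6, "dog", B) + settle(margin, 5, "fav", B)
    assert net >= 0    # the gambler never loses
    if net > 0:
        print("gambler profits when the Lakers win by", margin)
```

The gambler either breaks even or, when the margin lands in the gap between the two lines, wins both bets.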
Now this raises the question of why your rival stuck his neck out and posted his line early. The reason is that he gets something in return: he gets all the business from gamblers wishing to place bets early. Put differently, when you decided to wait you were trading off the loss of some business during the time his line is active and yours is not against the gain from exploiting him if he sets (what appears to you to be) a bad line.
Since both of you have the option of playing either the “post early” or “wait and see” strategy, in equilibrium you must both be indifferent so the costs and benefits exactly offset.
Of course, with online bookmaking the time intervals we are talking about (the time only one line is active before you respond, and the time it takes him to adjust to your response, closing the gap) will be small, so the arbitrage opportunities will be fleeting. (As acknowledged in the New Scientist article.)
There is strategy involved in giving and interpreting compliments. Let’s say you hear someone play a difficult –but not too difficult– piece on the piano, and she plays it well. Is it a compliment if you tell her she played it beautifully?
That depends. You would not be impressed by the not-so-difficult piece if you knew that she was an outstanding pianist. So if you tell her you are impressed, then you are telling her that you don’t think she is an outstanding pianist. And if she is, or aspires to be, an outstanding pianist, then your attempted compliment is in fact an insult.
This means that, in most cases, the best way to compliment the highly accomplished is not to offer any compliment at all. This conveys that all of her fine accomplishments are exactly what you expected of her. But, do wait for when she really outdoes herself and then tell her so. You don’t want her to think that you are someone who just never gives compliments. Once that is taken care of, she will know how to properly interpret your usual silence.
In the world of blogs, when you comment on an article on another blog, it is usually a nice compliment to provide a link to the original post. This is a compliment because it tells your readers that the other blog is worth visiting and reading. But you may have noticed that discussions of the really well-known blogs don’t come with links. For example, when I comment on an article posted at a blog like Marginal Revolution, I usually write merely “via MR, …” with no link.
That’s the best way to compliment a blog that is, or aspires to be, really well-known. It proves that you know that your readers already know the blog in question, know how to get there, and indeed have probably already read and pondered the article being discussed.
Via MR, this article describes the obstacles to a market for private unemployment insurance. Why is it not possible to buy an insurance policy that would guarantee your paycheck (or some fraction of it) in the event of unemployment? The article cites a number of standard sources of insurance market failure but most of these apply also to private health insurance, and other markets and yet those markets function. So there is a puzzle here.
The main friction is adverse selection. Individuals have private information about (and control over!) their own likelihood of becoming unemployed. The policy will be purchased by those who expect to become unemployed. This makes the pool of insured especially risky, forcing the insurer to raise premiums in order to avoid losses. But then the higher premiums cause a selection of even riskier applicants, and so on. This can lead to complete market breakdown.
In the case of unemployment insurance there is a potential solution to this problem which borrows from the idea of instrumental variables in statistics. (Fans of Freakonomics will recognize this as one of the main tools in the arsenal of Steve Levitt and many empirical economists.) The idea behind instrumental variables is to sidestep a selection problem by conditioning on a variable that is correlated with the one you care about but free of the additional correlations you want to isolate away.
The same idea can be used to circumvent an adverse selection problem. Instead of writing a contract contingent on your employment outcome, the contract can be contingent on the aggregate unemployment rate. You pay a premium, and you receive an adjustment payment (or stream of payments) when the aggregate unemployment rate in your locale increases above some threshold.
Since the movements in the aggregate unemployment rate are correlated with your own outcome, this is valuable insurance for you. But, and this is the key benefit, you have no private information about movements in the aggregate unemployment rate. So there is no adverse selection problem.
The potential difficulty with this is that there will be a lot of correlation in movements in unemployment across locations, and this removes some of the risk-sharing economies typical of insurance. (With fire insurance, each individual’s outcome is uncorrelated with everyone else’s, so an insurer of many households faces essentially no aggregate risk.)
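A back-of-the-envelope calculation shows how correlation erodes pooling. If each policy’s claim has unit variance and pairwise correlation rho, the standard deviation of the insurer’s average claim is sqrt(1/n + (n-1)rho/n), which vanishes as n grows when rho = 0 but is bounded below by sqrt(rho) otherwise:

```python
import math

def sd_of_average(n, rho):
    """Std. dev. of the average of n unit-variance claims with
    pairwise correlation rho."""
    return math.sqrt(1 / n + (n - 1) / n * rho)

for rho in (0.0, 0.5):
    print(rho, [round(sd_of_average(n, rho), 3) for n in (1, 10, 1000)])
    # 0.0 [1.0, 0.316, 0.032]
    # 0.5 [1.0, 0.742, 0.707]
```

With independent fires, a thousand policies nearly eliminate the insurer’s risk; with heavily correlated unemployment rates, most of the risk never diversifies away.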
You are out for dinner and your friend is looking at the wine list and gives you “There’s a house wine and then there’s this Aussie Shiraz that’s supposed to be good, what do you think?”
How you answer depends a lot on how long you have known the person. If it were my wife asking, I would not give it a moment’s thought and would go for the Shiraz. If it were someone I knew much less well, I would have to think about the budget, I would ask what the house wine was, what the prices were, and so on. Then I would give my considered opinion, expecting it to be appropriately weighed alongside his.
This is a typical trend in relationships over time. As we come to know one another’s preferences we exchange less and less information on routine decisions. On the one hand this is because there is less to learn, we already know each other very well. But there is a secondary force which squelches communication even when there is valuable information to exchange.
As we learn one another’s preferences, we learn where those preferences diverge. The lines of disagreement become clearer, even when the disagreement is very minor. For example, I learn that I like good wine a little bit more than my wife. Looking at the menu, she sees the price, she sees the alternatives and I know what constellation of those variables would lead her to consider the Shiraz. Now I know that I have a stronger preference for the Shiraz, so if she is even considering it that is enough information for me to know that I want it.
Sadly, my wife can think ahead and see all this. She knows that merely suggesting it will make me pro-Shiraz. She knows, therefore, that my response contains no new information and so she doesn’t even bother asking. Instead, she makes the choice unilaterally and it’s house wine here we come. (Of course waiters are also shrewd game theorists. They know how to spot the wine drinker at the table and hand him the wine list.)
In every relationship there will be certain routine decisions where the two parties have come to see a predictable difference of opinion. For those, in the long run there will be one party to whom decision-making is delegated, and those decisions will almost always be taken unilaterally. Typically it will be the party who cares the most about that dimension who is assigned the role of delegate, as this is the efficient arrangement subject to these constraints.
Some relationships have a constitution that prevents delegation and formally requires a vote. Take for example, the Supreme Court. As in recent years when the composition of the court has been relatively stable, justices learn each others’ views in areas that arise frequently.
Justice Scalia can predict the opinion of Justice Ginsburg, and Scalia is almost always to the right of Ginsburg. If, during deliberation, Justice Ginsburg reveals any leaning to the right, this is very strong information to Scalia that the rightist decision is the correct one. Knowing this, Ginsburg will be pushed farther to the left: she will express rightist views only in the most extreme cases, when it is obvious that those are correct. And the equal and opposite reaction pushes Scalia to the right.
Eventually, the Court becomes so polarized that nearly every justice’s opinions can be predicted in advance. And in fact they will line up on a line. If Breyer is voting right then so will Kennedy, Alito, Roberts, Scalia, and Thomas. If Kennedy is voting left then so are Breyer, Souter, Ginsburg, and Stevens. Ultimately only the centrist judges (previously O’Connor, now Kennedy) are left with any flexibility, and all cases are decided 5-4.
When a new guy rotates in, this can upset the equilibrium. There is something to learn about the new guy. There is reason to express opinion again, and this means that something new can be learned about the old guys too. We should see that the ordering of the old justices can be altered after the introduction of a new justice. (Don’t expect this from Sotomayor because she has such a long paper trail. Her place in line has already been figured out by all.)
How do you cut the price of a status good?
Mr. Stuart is among the many consumers in this economy to reap the benefits of secret sales — whispered discounts and discreet price negotiations between customers and sales staff in the aisles of upscale chains. A time-worn strategy typically reserved for a store’s best customers, it has become more democratized as the recession drags on and retailers struggle to turn browsers into buyers.
Answer: you don’t, at least not publicly. Status goods have something like an upward-sloping demand curve: the higher the price, the more people are willing to pay for the good. So the best way to increase sales is to maintain a high published price but secretly lower the price actually charged.
Of course, word gets out. (For example, articles are published in the New York Times and blogged about on Cheap Talk.) People are going to assign a small probability that you bought your Burberry for half the price, making you half as impressive. An alternative would be to lower the price by just a little, but to everybody. Then everybody is just a little less impressive.
So implicitly this pricing policy reveals that there is a difference in the elasticity of demand with respect to random price drops as opposed to their certainty equivalents. Somewhere some behavioral economists just found a new gig.
One of the simplest and yet most central insights of information economics is that, quite apart from the classical obstacles to the efficient employment of resources (technological constraints, transaction costs, trading frictions, and so on), there is an informational constraint. How do you find out what the efficient allocation is, and implement it, when the answer depends on the preferences of individuals? Any institution, whether or not it is a market, is implicitly a channel through which individuals communicate their preferences together with a rule that determines an allocation based on those preferences. Understanding this connection, individuals cannot be expected to faithfully communicate their true preferences unless the rule gives them adequate incentive.
As we saw last time there typically does not exist any rule which does this and at the same time produces an efficient allocation. This result is deeper than “market failure” because it has nothing to do with markets per se. It applies to markets as well as any other idealized institution we could dream up.
So how are we to judge the efficiency of markets when we know that they didn’t have any chance of being efficient in the first place? That is the topic of this lecture.
Let’s refer to the efficient allocation rule as the first-best. In the language of mechanism design the first-best is typically not feasible because it is not incentive-compatible. Given this, we can ask what is the closest we can get to the first best using a mechanism that is incentive compatible (and budget-balanced.) That is a well-posed constrained optimization problem and the solution to that problem we call the second best.
Information economics tells us we should measure existing institutions relative to the second best. In this lecture I demonstrate how to use the properties of incentive-compatibility and budget balance to characterize the second-best mechanism in the public goods problem we have been looking at. (Previously the espresso machine problem.)
I am particularly proud of these notes because, as you will see, this is a complete characterization of second-best mechanisms (remember: dominant strategies) for public goods, based entirely on a graphical argument. And the characterization is especially nice: any second-best mechanism reduces to a simple rule in which the contributors are assigned ex ante a share of the cost and asked whether they are willing to contribute their share. Production of the public good requires unanimity.
For example, the very simple mechanism we started with, in which two roommates share the cost of an espresso machine equally, is the unique symmetric second-best mechanism. We argued at the beginning that this mechanism is inefficient, and now we see that the inefficiency is inevitable: there is no way to improve upon it.
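In code, the unanimity rule is almost trivial. A sketch for the espresso machine example (the $100 cost and the valuations below are made up):

```python
# Symmetric cost-share mechanism with unanimity: each roommate is
# assigned an equal share of the cost and the machine is bought only
# if every roommate agrees to pay her share. Saying yes iff your
# value is at least your share is a dominant strategy.

COST = 100.0

def unanimity_mechanism(values, cost=COST):
    share = cost / len(values)
    buy = all(v >= share for v in values)    # unanimity required
    payments = [share if buy else 0.0 for _ in values]
    return buy, payments

# Inefficiency on display: buying is efficient (90 + 40 > 100),
# but roommate 2's value falls short of her 50 share, so no machine.
print(unanimity_mechanism([90.0, 40.0]))   # (False, [0.0, 0.0])
print(unanimity_mechanism([60.0, 55.0]))   # (True, [50.0, 50.0])
```

The first example is exactly the inevitable inefficiency: total value exceeds the cost, yet no incentive-compatible, budget-balanced rule can do better.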
Here are the notes.
Top chess players, until recently, held their own against even the most powerful chess playing computers. These machines could calculate far deeper than their human opponents and yet the humans claimed an advantage: intuition. A computer searches a huge number of positions and then finds the best. For an experienced human chess player, the good moves “suggest themselves.” How that is possible is presumably a very important mystery, but I wonder how one could demonstrate that qualitatively the thought process is different.
Having been somewhat obsessed recently with Scrabble, I thought of the following experiment. Suppose we write a computer program that tries to create words from Scrabble tiles using a simple brute-force method. The computer has a database of words. It randomly combines letters, checks whether the result is in its database, and outputs the most valuable word it can identify in a fixed length of time. Now consider a contest between two computers programmed in the same way which differ only in the size of their databases, the first knowing a subset of the words known by the second. The task is to come up with the best word from a fixed number of tiles. Clearly the second would do better, but I am interested in how the advantage varies with the number of tiles. Presumably, the more tiles the greater the advantage.
I want to compare this with an analogous contest between a human and a computer to measure how much faster a superior human’s advantage increases in the number of tiles. Take a human Scrabble player with a large vocabulary and have him play the same game against a fast computer with a small vocabulary. My guess is that the human’s advantage (which could be negative for a small number of tiles) will increase in the number of tiles, and faster than the stronger computer’s advantage increases in the computer-vs-computer scenario.
Now there may be many reasons for this, but what I am trying to get at is this. With many tiles, brute-force search quickly plateaus in terms of effectiveness because the additional tiles act as noise making it harder for the computer to find a word in its database. But when humans construct words, the words “suggest themselves” and increasing the number of tiles facilitates this (or at least hinders it more slowly than it hinders brute-force.)
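The brute-force searcher described above fits in a few lines. The word list here is a toy stand-in for a real Scrabble lexicon, a fixed iteration budget stands in for the time limit, and blanks and board play are ignored:

```python
import random

# Standard Scrabble letter values
SCORES = {c: v for letters, v in [("aeioulnrst", 1), ("dg", 2), ("bcmp", 3),
          ("fhvwy", 4), ("k", 5), ("jx", 8), ("qz", 10)] for c in letters}

def score(word):
    return sum(SCORES[c] for c in word)

def brute_force(tiles, words, tries=20000):
    """Randomly combine tiles, keeping the best word found in the database."""
    best = ""
    for _ in range(tries):
        k = random.randint(2, len(tiles))
        candidate = "".join(random.sample(tiles, k))
        if candidate in words and score(candidate) > score(best):
            best = candidate
    return best

WORDS = {"rat", "tea", "tear", "rate", "treat"}  # toy database
random.seed(1)
print(brute_force(list("treata"), WORDS))
```

Running two copies with nested word lists, and varying the number of tiles, would give the computer-vs-computer baseline for the experiment.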
Until 2010, that is, whereupon it’s time to shuffle it:
If Congress doesn’t act, the estate tax will disappear in 2010 but will return in 2011 at the pre-2001 level of $1 million with a tax rate of 55%.
That could generate some interesting data.
No, not because of this, although it can get rough.
I teach the third course in the first year PhD micro sequence at Northwestern and I also teach my intermediate micro course in the Spring. I am just finishing up teaching this week and my students will soon be writing their evaluations of me. They will grade me on a scale of 1 to 6.
Because I am the third and last teacher they will evaluate this year, I face some additional risk that my predecessors did not. Back in the fall, when they evaluated their first teacher they had only one data point with which to estimate the distribution of teaching ability in the Northwestern economics faculty. An outstanding performance would lead them to revise upward their beliefs and a poor performance would revise their beliefs downward.
As a result, when the students sit down to evaluate their fall professor, even a very good performance will earn at most a 5, because the students, anticipating possibly better performances in the winter and spring, will be inclined to hold the 6 in reserve for the best. Likewise, very bad performances will have their ratings buoyed by the students’ desire to save the 1 for the worst.
When Spring comes, there is nothing more to learn. By now they know the distribution and the only thing left to do is to rank their Spring professor relative to those who came earlier. If he is best he gets a 6, if not he gets at most a 4. His rating is a mean-preserving spread of the previous ratings.
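The mean-preserving spread is easy to illustrate with made-up ratings: compressed fall scores and full-range spring scores with the same average but greater dispersion:

```python
import statistics

fall = [2, 3, 3, 4, 4, 5]    # hedged: 1 and 6 held in reserve
spring = [1, 2, 3, 4, 5, 6]  # nothing left to learn: the full scale is used

print(statistics.mean(fall), statistics.mean(spring))  # 3.5 3.5
print(statistics.pvariance(fall), statistics.pvariance(spring))
```

Same mean, strictly larger variance for the spring professor: his expected rating is no different, but his rating is riskier.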
There is a general principle at work here. The older you get, the more you know about your opportunity costs, and the more decisively you act in response to unanticipated opportunities. (There is a countervailing force which I believe on net makes us more conservative as we get older, but that is the topic of a later post.)
OK so I am apparently obsessed with this theme, but I guess that is what makes me a blogger.
Research, like a lot of collaborative activities, encourages specialization. Successful co-authorships often combine people with differentiated skills. So successful co-authors are complements, which means that your co-author’s other co-authors are substitutes for you. This should imply that you are less likely, other things equal, to have a successful co-authorship with your co-author’s co-authors than with, say, a randomly selected collaborator.
If we tried to look for evidence of this in data the difficulty would be in holding other things equal. You are more likely to talk to and have other things in common with your co-author’s co-author than with a random researcher so this would have to be controlled for.
These issues make me think there is some really interesting research waiting to be done taking data from social networks, like patterns of co-authorship or friendship relations on Facebook, and trying to simultaneously identify (in the formal sense of that word) “types” (e.g. technician vs idea-man) and preferences (e.g. whether these types are complements or substitutes). The really interesting part of this must be the econometric theory saying what are the limits of what can and cannot be identified.
Sandeep has previously blogged about the problems with torture as a mechanism for extracting information from the unwilling. As with any incentive mechanism, torture works by promising a reward in exchange for information. In the case of torture, the “reward” is no more torture.
Sandeep focused on one problem with this. This works only if the torturer will actually carry out his promise to stop torturing once the information is given. But once the information is given the torturer now knows he has a real terrorist and in fact a terrorist with valuable information. This will lead to more torture (for more information) not less. Unless the torturers have some way to tie their hands and stop torturing after a few tidbits of information, the captive soon figures out that there is no incentive to talk and stops talking. A well-trained terrorist knows this from the beginning and never talks.
Let me point out yet another problem with torture. This one cannot be solved even by enabling the torturers to commit to an incentive scheme.
The very nature of an incentive scheme is that it treats different people differently. To be effective, torture has to treat the innocent different than the guilty. But not in the way you might guess.
Before we commence torturing we don’t know in advance what information the captive has, and indeed we don’t know for sure that he is a terrorist at all, even though we might be pretty confident. A captive who really has no information at all is not going to talk. Or if he does he is not going to give any valuable information, no matter how much he would like to squeal and stop the torture.
And of course the true terrorist knows that we don’t know for sure that he is a terrorist. He would like to pretend that he has no information in hopes that we will conclude he is innocent and stop torturing him. Therefore the torture must ensure that the captive, if he is indeed an informed terrorist, won’t get away with this. With torture as the incentive mechanism, the only way to do this is to commit to torture for an unbearably long time if the captive doesn’t talk.
And this leads us to the problem. In the face of this, the truly informed terrorist begins talking right away in order to avoid the torture. The truly innocent captive cannot do that no matter how much he would like to. And so torture, if it is effective at all, necessarily inflicts unbearable suffering on the innocent and very little suffering on the actual terrorists.
Following up on Sandeep’s post about Alex Rodriguez’s alleged pitch-tipping, a game theorist is naturally led to ask a few questions. How is a tipping ring sustainable? If it is sustainable what is the efficient pitch-tipping scheme? Finally, how would we spot it in the data?
A cooperative pitch-tipping arrangement would be difficult, but not impossible to support. Just as with price-fixing and bid-rigging schemes, maintaining the collusive arrangement benefits the insiders as a group, but each individual member has an incentive to cheat on the deal if he can get away with it. Ensuring compliance involves the implicit understanding that cheaters will be excluded from future benefits, or maybe even punished.
What would make this hard to enforce in the case of pitch-tipping is that it would be hard to detail exactly what compliance means and therefore hard to reach any firm understanding of what behavior would and would not be tolerated. For example, if the game is not close but it’s still early innings, is the deal on? What if the star pitcher is on the mound, maybe a friend of one of the colluders? Sometimes the shortstop might not be able to see the sign, or he is not privy to an on-the-fly change in signs between the pitcher and catcher. If he tips the wrong pitch by mistake, will he be punished? If not, then he has an excuse to cheat on the deal.
These issues limit the scope of any pitch-tipping ring. There must be clearly identifiable circumstances under which the deal is on. Provided the colluders can reach an understanding of these bright-lines, they can enforce compliance.
There is not much to gain from pitch-tipping when the deal becomes active only in highly imbalanced games. But the most efficient ring will make the most of it. A deal between just two shortstops will benefit each only when their two teams meet. A rare occurrence. Each member of the group benefits if a shortstop from a heretofore unrepresented team is allowed in on the deal. Increasing the value of the deal has the added benefit of making exclusion more costly and so helps enforcement. So the most efficient ring will include the shortstop from every team. Another advantage of including a player from every team in the league is that it would make it harder to detect the pitch-tipping scheme in the data. If instead some team was excluded then it would be possible to see in the data that A-Rod hit worse on average against that team, controlling for other factors.
But it should stop there. There is no benefit to having a second player, say the second-baseman, from the same team on the deal. While the second-baseman would benefit, he would add nothing new to the rest of the ring and would be one more potential cheater that would have to be monitored.
How could a ring be detected in data? One test I already mentioned, but a sophisticated ring would avoid detection in that way. Another test would be to compare the performance of the shortstops with the left-fielders. But there is one smoking gun of any collusive deal: the punishments. As discussed above, when monitoring is not perfect, there will be times when it appears that a ring member has cheated and he will have to be punished. In the data this will show up as a downgrade in that player’s performance in those scenarios where the ring is active. And to distinguish this from a run-of-the-mill slump, one would look for downgrades in performance in the pitch-tipping scenarios (big lead by some team) which are not accompanied by downgrades in performance in the rest of the game (when it is close.)
The data are available.
This morning something interesting was demonstrated on the NPR puzzle with Will Shortz. Each week a puzzle is given and listeners have a week to email their answers. From those with the correct answer, a listener is selected at random to solve a series of puzzles live on the air the next week.
Last week’s qualifying puzzle was unusually difficult, and there was an exceptionally small number of correct answers. The best puzzle solvers, right? The listener selected from this group played on the air this morning. The on-air puzzle is a relatively easy format, and in a typical week the guest gets nearly all of the answers right. However, today’s winner got fewer than 1/4 of them. Why?
There are people who are relatively good at getting ideas quickly, and there are people whose comparative advantage is in thinking hard for a longer period of time to solve harder puzzles. In chess, there are players who are good at 5 minute “blitz” time controls and those who are good at chess by mail, with little overlap between the two groups. When the qualifying puzzle is easy, both types solve it. When the qualifying puzzle is hard, we get a disproportionately large selection of the postalites. This means that the randomly selected listener is less likely to do well at the on-air puzzle, which favors blitzers over postalites.
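A quick simulation makes the selection effect concrete. The solving probabilities below are invented for illustration; all that matters is that postalites out-qualify blitzers on hard puzzles while blitzers do better in the fast on-air format.

```python
import random

random.seed(0)

# Toy model: "blitzers" are quick but weak on hard qualifying puzzles;
# "postalites" grind out hard puzzles but do worse in the fast on-air
# format.  All probabilities are invented for illustration.
POP = [("blitzer", {"easy": 0.9, "hard": 0.05, "on_air": 0.9})] * 500 + \
      [("postalite", {"easy": 0.9, "hard": 0.60, "on_air": 0.5})] * 500

def expected_on_air_score(difficulty, trials=10_000):
    """Average on-air success rate of listeners who solved the qualifier."""
    total, n = 0.0, 0
    for _ in range(trials):
        kind, p = random.choice(POP)
        if random.random() < p[difficulty]:   # solved the qualifying puzzle
            total += p["on_air"]              # expected on-air success rate
            n += 1
    return total / n

print(expected_on_air_score("easy"))  # easy qualifier: mixture of both types
print(expected_on_air_score("hard"))  # hard qualifier: mostly postalites
```

The harder the qualifying puzzle, the more the pool of qualifiers tilts toward postalites, and the worse the randomly chosen guest does on the air.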
The Baseline Scenario has a nice overview of the political issues around the estate tax. The estate tax, politely referred to as The Death Tax, is motivated both by principles of fairness and principles of economics. The fairness motivation is obvious. And death seems like a focal moment for redistributing wealth.
The economic motivation also points toward the moment of death as a natural time for taxation. The economic cost of taxation is the distortion of freely made choices that it induces. Sales taxes reduce the gains from trade, income taxes reduce the incentive to work, etc. On the other hand, activities and resources that are in fixed supply can be taxed without distortion. Well, death is in fixed supply: we all get exactly one. And while the timing can be controlled to some extent, the effect of taxes levied after death on its timing is surely second-order.
However, economic arguments against estate taxation point out that it distorts behavior before death, shifting households away from investment and toward consumption. The estate tax acts as a tax on investment because, in the event of death, a fraction of the payoff is confiscated. Provided that a bequest is a normal good, this reduces investment.
There is a simple way to amend the estate tax to undo this distortion and increase tax revenue. The government can offer a tax shelter in the form of a life-insurance policy where the household pays c in cash to the government in return for shielding a fraction q of wealth from estate taxation. The effect is to capture some of what would have been extra expenditure on consumption in the form of a direct transfer to the government, and compensate the estate by reducing taxes after death.
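To see the arithmetic, here is a stylized static version of the shelter (my own parametrization, not a worked-out policy): paying c up front exempts a fraction q of the remaining estate from the tax at rate t. The sketch computes the premium at which the household is exactly indifferent; any cheaper premium makes the household strictly better off. The distortion argument above runs through the investment margin, which this static calculation deliberately ignores.

```python
# Stylized static sketch of the proposed shelter.  The parametrization is
# my own: a household with wealth W faces estate tax rate t, and paying c
# up front shields a fraction q of the remaining estate from the tax.

def bequest_plain(W, t):
    """Heirs' after-tax bequest with no shelter."""
    return W * (1 - t)

def bequest_sheltered(W, t, c, q):
    """Heirs' bequest after paying premium c to shield fraction q."""
    taxable = (W - c) * (1 - q)
    return (W - c) - t * taxable

def breakeven_premium(W, t, q):
    """Premium c at which the household is exactly indifferent."""
    return t * q * W / (1 - t * (1 - q))

W, t, q = 1_000_000, 0.40, 0.50
c_star = breakeven_premium(W, t, q)
print(round(c_star, 2))                          # break-even premium
print(round(bequest_plain(W, t), 2))             # bequest without shelter
print(round(bequest_sheltered(W, t, c_star, q), 2))  # equal at c_star
```

At any c below the break-even premium the household strictly prefers the shelter, and the government collects c in cash today instead of waiting for the tax.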
I learned to juggle 3 balls when I was about 10. It’s a great trick when you are 10, but it really comes in handy when you are a parent, as endless entertainment for your kids. But now mine are getting bored and they are demanding 5 balls. Juggling 5 balls is pretty close to impossible. But with a little technology, learning to juggle 5 could be easy.
What I want is a ball that is sturdy enough for juggling but can be filled with helium in varying concentrations. Juggling a helium-filled ball would be something like juggling in low gravity: the balls would fall slowly and 5 balls would be easy. With just the right concentration of helium, anyone could do it. Then the helium concentration is gradually reduced. At each step the difficulty increases only a little, so it is easy to master the slightly heavier balls before the next step. Finally it’s all atmospheric air, and you are juggling 5 balls in no time.
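As a sanity check, the physics here is just buoyancy: a gas-filled ball of volume V and total mass m falls with acceleration g(m − ρ_air·V)/m. A quick back-of-the-envelope calculation (the densities are standard; the 10 g shell mass and 3.5 cm radius are my guesses):

```python
import math

# Buoyancy check on the helium idea.  Densities are standard room-temperature
# values; the shell mass and radius are guesses for a light juggling ball.
G = 9.81            # m/s^2
RHO_AIR = 1.20      # kg/m^3
RHO_HELIUM = 0.17   # kg/m^3

def effective_g(shell_mass_kg, radius_m, gas_density):
    """Downward acceleration of a gas-filled ball: gravity minus buoyancy."""
    volume = 4 / 3 * math.pi * radius_m ** 3
    total_mass = shell_mass_kg + gas_density * volume
    return G * (total_mass - RHO_AIR * volume) / total_mass

# A very light 10 g shell the size of a juggling ball (radius ~3.5 cm):
print(round(effective_g(0.010, 0.035, RHO_HELIUM), 2))  # about 9.6, barely below g
```

The sobering result is that even a very light shell falls at nearly full gravity, so the trick would need an implausibly light shell or a much bigger ball to make a noticeable difference.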
Economists have many repositories of data and we are relatively good at sharing data we find. So it is easy to find out what data is available. It is not easy to find out what data is not available. If somebody goes looking for the ideal dataset for some question and discovers that it is unavailable, that result should be made public so that others don’t have to duplicate their efforts.
So we need a repository of non-existent data.
We need a centralized market for matching co-authors. I want to be able to go there with an idea and find a co-author who has some expertise in the area. I guess there are some obvious difficulties. For one thing, the researcher with the idea would worry that his idea would get stolen if he went shopping it around publicly. Also, potential co-authors would have little incentive to invest in an idea brought to the market by someone else as it would be public knowledge who was the creative partner and who was the “research assistant.”
I suppose the second-best solution is a blog.
It’s a tempting hypothesis. And it’s entertaining to look at the wives of your relatives/close friends and theorize about which attribute of their mothers they replicate (likewise for husbands/fathers.) But this seems like a difficult hypothesis to test carefully. Here is one attempt. Assemble a dataset of bi-racial families. We want the race of the father and mother, the sex of the child, and the race of the child’s spouse. To control for the racial proportions in the population, we compare the probability that a bi-racial male with a white mother marries a white wife to the probability that a bi-racial male with a black mother marries a white wife. The hypothesis is that the first is larger than the second.
Now, marriage is a two-sided matching market. This means that we cannot jump to conclusions about the husband’s tastes on the basis of the characteristics of the wife. It could be that this husband would prefer a black wife (other attributes equal) but the best match he could find was with a white wife.
For example, an alternative story which would explain the above statistic is that black spouses are generally preferred but having a white father makes you a more attractive match and so bi-racial children with white fathers are more likely to match with their preferred race. (Any theory would have to explain why there was a difference in the ultimate match between those with white fathers and those with black fathers.) But the data would enable us to potentially rule this out. If this alternative story were true then bi-racial daughters with white fathers would also be more likely to marry black husbands than those with black fathers. That is, girls marrying their mothers rather than their fathers, the opposite of what the original hypothesis would predict.
So if the data showed that boys marry their mothers and girls marry their fathers, we could rule out this particular alternative story. Of course there will always be some identification problem somewhere, and here the following story would be observationally equivalent: having a white father makes you a more attractive mate, women like white men, men like black women. (Allowing men and women to have different racial preferences adds the extra degree of freedom to explain the [hypothetical] data.)
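For concreteness, here is what the basic comparison would look like in code, on a completely made-up toy sample (every row below is invented).

```python
# Sketch of the proposed comparison.  Each record is
# (mother_race, father_race, child_sex, spouse_race); all rows are invented.

def p_spouse(records, child_sex, mother_race, spouse_race):
    """Fraction of children of a given sex and mother's race whose
    spouse has the given race; None if no such children in the data."""
    pool = [r for r in records if r[2] == child_sex and r[0] == mother_race]
    if not pool:
        return None
    return sum(1 for r in pool if r[3] == spouse_race) / len(pool)

toy = [
    ("white", "black", "M", "white"),
    ("white", "black", "M", "white"),
    ("white", "black", "M", "black"),
    ("black", "white", "M", "black"),
    ("black", "white", "M", "white"),
    ("black", "white", "M", "black"),
]

# The hypothesis: the first probability exceeds the second.
print(p_spouse(toy, "M", "white", "white"))  # sons of white mothers
print(p_spouse(toy, "M", "black", "white"))  # sons of black mothers
```

Running the same function on daughters (conditioning on the father’s race instead) gives the second comparison needed to separate the original hypothesis from the alternative stories above.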
We spent the last week trying to think of a name for this blog. Because Sandeep has bad taste, lots of really good names were rejected, and in the end we settled for an OK but not great name, Cheap Talk.
This blog-christening process points out an important asymmetry in the creative process. It is much easier to think up interesting names for *some* blog than it is to think up names for this particular blog and these particular bloggers.
For example, some bloggers, somewhere in the blogosphere would love the name “Vapor Mill.” It’s a pun on “Paper Mill” which, especially for academics, suggests productivity. But “Paper” is replaced by “Vapor” which turns it into a symbol for fanciful and ultimately useless ideas.
But those bloggers are almost surely not going to think of that phrase if they just sit down and search their brains. I am not saying it takes great creativity to come up with it; it’s almost purely accidental. But that accident happened to me and not to them, and unless the name finds them there is lost welfare.
Yes, the welfare loss is tiny. But every time you search for an idea to fit some specific purpose just right, you come up with many good ideas that don’t quite fit your purpose but would be really great for somebody else’s, and each time a valuable thing just disappears. It adds up.
I guess it’s an argument for the space program and all of the resulting Tang that comes with it.
Hey, that’s a great name for a blog!: Tang.
(appendix: I hate the word blogosphere and I can’t believe that I only lasted one post in my short blogging career before I had to use it.)
