
Measuring social influence is notoriously difficult in observational data.  If I like Tin Hat Trio and so do my friends, is it because I influenced them or because we just have similar tastes, as friends often do?  A controlled experiment is called for.  It’s hard to figure out how to do that.  How can an experimenter cause a subject to like something new and then study the effect on his friends?

Online social networks open up new possibilities.  And here is the first experiment I came across that uses Facebook to study social influence, by Johan Egebark and Mathias Ekstrom.  If one of your friends “likes” an item on Facebook, will it make you like it too?

Making use of five Swedish users’ actual accounts, we create 44 updates in total during a seven month period. For every new update, we randomly assign our user’s friends into either a treatment or a control group; hence, while both groups are exposed to identical status updates, treated individuals see the update after someone (controlled by us) has Liked it whereas individuals in the control group see it without anyone doing so. We separate between three different treatment conditions: (i) one unknown user Likes the update, (ii) three unknown users Like the update and (iii) one peer Likes the update. Our motivation for altering treatments is that it enables us to study whether the number of previous opinions as well as social proximity matters. The result from this exercise is striking: whereas the first treatment condition left subjects unaffected, both the second and the third more than doubled the probability of Liking an update, and these effects are statistically significant.
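The design in that excerpt is easy to mock up. The sketch below is a toy simulation, not the authors’ data or code: the friend count, baseline Like rate, and treatment lift are all assumed numbers chosen for illustration.

```python
import random

def simulate(n_friends=200, base_rate=0.05, lift=2.0, seed=0):
    """Randomize friends into control (update shown with no prior Like)
    vs. treatment (update shown after a planted Like), then draw Likes.
    The treatment is assumed to multiply the Like probability by `lift`."""
    rng = random.Random(seed)
    likes = {"control": 0, "treated": 0}
    counts = {"control": 0, "treated": 0}
    for _ in range(n_friends):
        group = rng.choice(["control", "treated"])
        counts[group] += 1
        p = base_rate * (lift if group == "treated" else 1.0)
        likes[group] += rng.random() < p
    # Like rate by group; the treated/control gap estimates the effect
    return {g: likes[g] / counts[g] for g in counts}

rates = simulate()
```

With enough friends the treated rate hovers near twice the control rate, which is roughly the size of the effect the paper reports for its stronger treatments.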

Advertisers want information about your tastes and habits so they can decide how much they are willing to pay to advertise at you.  That information is stored by your web browser on your hard drive.  Did you know that every time you access a web page you freely hand over that information to a broker who immediately sells it to advertisers who then immediately use it to bid for access to your eyeballs?

Here’s how it works.  Internet vendors, say Amazon, pass information about your transactions to your browser, which stores it in the form of cookies. Later on, advertisers are alerted when you are accessing a web page and they compete in an auction for the ad space on that page.  At that moment, unless you have disabled the passing of cookies, your browser is sending to potential advertisers all of the cookies stored on your hard drive that might contain relevant information about you.

However, many of the really valuable cookies are encrypted by the web site that put them there.  For example, if Amazon encrypts its cookies then even though your browser gives them away for free, they are of no use to advertisers.

That is, unless the advertisers purchase the key with which to decrypt your cookies. And indeed Amazon will make money from your data by selling its keys to advertisers.  It could sell them directly but it will probably prefer to sell them through an exchange where advertisers come to buy cookies by the jar.

The interesting thing about the market for cookies is that you are the owner of the asset and yet all of the returns are going to somebody else.  And it’s not because your asset and mine are perfect substitutes.  You are the monopolistic provider of information about you and when you arrive at a website it is you the advertisers are bidding for.

How long will it be before you get a seat at the exchange?  Nothing stops you from putting a second layer of encryption over Amazon’s cookies and demanding that advertisers pay you for the key.  Nothing stops me from paying you for exclusive ownership of your keys, cornering the market-in-you, and recouping the monopoly profit.
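The second-layer-of-encryption idea can be sketched in a few lines. The “cipher” below is a toy XOR stream keyed by hashing a counter, and the keys and cookie contents are made up, not anything Amazon actually does, but it shows the economics: an advertiser holding only the vendor’s key recovers nothing until it also buys yours.

```python
import hashlib
from itertools import count

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from key by hashing a counter (toy cipher)."""
    out = b""
    for i in count():
        if len(out) >= n:
            break
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
    return out[:n]

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream; applying it twice with the same key decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# The vendor encrypts the cookie with its own key...
cookie = b"last_viewed=espresso_machine; price_band=high"
vendor_key = b"vendor-secret"
layer1 = xor_encrypt(cookie, vendor_key)

# ...and you wrap a second layer with *your* key before the browser hands it out.
my_key = b"my-secret"
layer2 = xor_encrypt(layer1, my_key)

# The vendor's key alone no longer opens the cookie...
assert xor_encrypt(layer2, vendor_key) != cookie
# ...but peeling your layer first, then the vendor's, recovers it.
assert xor_encrypt(xor_encrypt(layer2, my_key), vendor_key) == cookie
```

Because XOR layers commute with distinct keystreams, neither party can read the cookie alone; an advertiser has to buy both keys, and that second sale is your seat at the exchange.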

File under “Feel Free To Profit From This Idea.”

(I learned about the market for cookies from Susan Athey’s new paper and a post-seminar dinner conversation with her and Peter Klibanoff, Ricky Vohra, and Kane Sweeney.)

(Picture:  Scale Up Machine Fail from http://www.f1me.net)

This was Mallesh Pai last month:

Everyone here has heard about price discrimination. I know something about your willingness to pay (from other data about you or people like you), and use that to charge you a ‘better’ price. This has mostly been restricted by some combination of ethics, vague legal standards and technology to use ‘coarse’ information, e.g. your age (student/senior discounts), your address (mailed coupons), and so on. As we pointed out a few months back there are cleverer methods on the way. But today, I think I’ve seen the best yet. A company called Klout (indubitably with the cooler K-based variant of the spelling) looks into your social network and offers a ‘score’ estimating the influence you have. Some geniuses have decided that one’s ‘Klout score’ might be a good way to discriminate on what website you see (and indeed, what free swag you get offered): http://mashable.com/2011/06/22/klout-gate/.

And this is Spotify this week proving him right.

The Spotify invites are part of the Klout Perks program, which rewards top influencers with special deals based on their interests and comprehensive Klout score rating. People who are rated as influential on Klout get access to the free trial version of the music service. They can also get a free month of Spotify’s premium service if enough people within their community sign up for the music service.

“The Spotify guys actually reached out to us about launching in the U.S. They had been using Klout and thought it was really cool,” said Klout CEO Joe Fernandez. “We talked a lot about how to hit the people in middle America that are also early adopters but don’t read the tech blogs and stuff.”

Courtney Conklin Knapp, the bloggers’ muse, offers up this link on The Eternal Shame of Your First Online Handle.  It reminds me of my personal favorite storage space for unwanted reputations:  USENET.  USENET was the earliest internet social network consisting of mostly-unmoderated discussion groups on just about any topic you can think of.

Did you know that Google has archived all of USENET and provided a search interface through its own Google Groups?  Talk about eternal shame.  Look around your department for your geekiest 40-something colleague and chances are he has a USENET trail and it may not be pretty.

I’ll leave it up to you to find the dirty laundry, but while we are here, a few notable (and perfectly respectable) USENET trails for your amusement.  You can probably guess who posted this to the group rec.sport.soccer in 1997:

I am a professor of economics doing a study of penalty kicks in soccer.  Does anyone know where I might find data on whether a kicker goes to the right or the left on a penalty kick, or whether the goalie dives right  or left?

Any information would be greatly appreciated.

But can you guess who posted this to rec.music.collecting.vinyl in 1996?

Dear Readers:

I am looking for an LP copy of Night by Night, an obscure
issue by Harry Nilsson, or the soundrack to The World’s
Greatest Lover, which he also did, or other Nilsson rarities
(I do have Flash Harry, however).

CD copies are fine as well, although I do not think they exist.

Hint:  he apparently frequented the groups soc.culture.haiti, rec.music.classical.recordings, and rec.sport.basketball.pro and he also posted this to rec.arts.movies.current-films in 1998.

Of all 1998 American movies, which are some prominent examples of movies with foreign non-American directors?  An example of a French director would be especially useful. Any assistance would be most appreciated…I am aware of Peter Weir (an Australian) directing The Truman Show, any other examples?

His full USENET trail is here, but guess before you look!

I implore you not to look at mine, and instead browse the trails of people that really matter like Hal Varian (he was writing a lot about pricing the Internet!), Sergey Brin (he seemed to be having trouble getting DOOM to run on his 486DX), Mark Zuckerberg (not much on his mind apparently)  and Austan Goolsbee.

It’s hard to model serendipity in a rational choice framework.  For example, people say that the web’s ability to focus your attention on subjects you like prevents you from being exposed to new stuff and that makes you worse off.  That may be true, but it could never be true in a rational framework because if you wanted exposure to new stuff you would choose that.  (I am leaving out market structure explanations, i.e. the market for serendipitous content may shrink.  That’s beside the point I am making and anyway I would guess exactly the opposite.  I can always take advantage of the increased diversity in content by supplying my own randomness.)

But here’s a version of serendipity that may be rationalized.  I have started reading blogs in my Google reader using the “All Items” tab where all the articles in all the blogs I subscribe to are listed in a flat format in chronological order, rather than blog-by-blog.  I have found a non-obvious effect of serendipity:  not knowing which blog I am reading and just reading the article prevents me from approaching it with expectation of the author’s prior biases, etc.  I recommend it.

For some kinds of information it may be beneficial to hide the source.  For example, pure rhetoric.  My ability to judge whether it is convincing or not is based purely on the logical connections between premise and conclusion and my prior beliefs about the plausibility of the premises.  Knowing the author of the rhetoric provides no additional information.  And if, for psychological reasons, knowing the source biases my interpretation then I am strictly better off having it hidden from me.  (At least temporarily)

You will complain that by appealing to psychological biases I have departed from the rational choice framework.  But I think there is a useful distinction between rational choice, and rational information processing (if the latter even has any meaning.)  If I can be expected to choose my sources rationally then there is no role for serendipitous exposure to new sources, even if I make errors in processing information.  But rational choice together with (self-aware) processing errors can justify keeping the source hidden.

(Drawing by Stephanie Yee.)

Bandwagon effects are hard to prove.  If an artist is popular, does that popularity by itself draw others in?  Are you more likely to enjoy a movie, restaurant, blog just because you know that lots of other people like it too?  It’s usually impossible to distinguish that theory from the simpler hypothesis:  the reason it was popular in the first place was that it was good and that’s why you are going to like it too.

Here’s an experiment that would isolate bandwagon effects.  Look at the Facebook like button below this post.  I could secretly randomly manipulate the number that appears on your screen and then correlate your propensity to “like” with the number that you have seen.  The bandwagon hypothesis would be that the larger number of likes you see increases your likeitude.
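A sketch of that experiment, with the bandwagon effect simply baked in so there is something to detect; the displayed counts and click probabilities are invented for illustration.

```python
import random

def run_experiment(n_visitors=10_000, seed=1):
    """Each visitor is secretly shown a randomly chosen like count.
    Under the (assumed) bandwagon hypothesis, the probability of
    clicking 'like' rises with the displayed number."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_visitors):
        shown = rng.choice([10, 100, 1000])  # secretly randomized display
        p = 0.02 + 0.01 * (shown == 100) + 0.03 * (shown == 1000)
        data.append((shown, rng.random() < p))
    # correlate the propensity to like with the number that was shown
    by_shown = {}
    for shown, liked in data:
        by_shown.setdefault(shown, []).append(liked)
    return {s: sum(v) / len(v) for s, v in sorted(by_shown.items())}

rates = run_experiment()
```

Because the displayed count is randomized independently of the post’s quality, any upward slope in `rates` is pure bandwagon, which is exactly the identification the observational data can’t give you.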

It’s great, but what’s even greaterer is that they made a transcript of the whole thing and so what would take you an hour to listen to you can read in about 5 minutes.  I read it from start to finish.  Featuring appearances by Josh Gans, Valerie Ramey, Betsey Stevenson, Justin Wolfers and others.

The transcript is here.

You can follow the list here, including Tom Hubbard, David Besanko, Eran Shmaya, and Josh Rauh.  No Sandeep yet.

You know the show Iron Chef?  Someone should organize Iron Blogger.  You are the chairman, you assemble your Iron Bloggers, and each week you invite a challenger blogger.  The “secret ingredient” is a topic for the challenger and his chosen Iron Blogger to write about.  You appoint judges to evaluate the writing according to content, style, and originality.

How could it be that millions of users sign on to a service like Twitter and voluntarily impose upon themselves a constraint to talk in no more than 140 character dollops at a time?  Of course the answer is that they want access to the network, Twitter owns it and Twitter sets the rules.  But then the question is why is a restriction like that the blueprint for a successful network.

Here’s an analogy.  Imagine that you own a vacant lot where every weekend people meet to buy and sell stuff.  You don’t charge any entry fees and you don’t take a cut from any transaction, you simply want to be the most popular vacant lot in town.

Every seller who is there selling stuff contributes value to the market as a whole and you internalize that overall value.  But you and the sellers have a basic conflict of interest because they are maximizing their own profits and not the overall value of being in the market.  When they raise prices they extract surplus from the people willing to pay high prices and in the process reduce the surplus of people who don’t.

From your point of view the extracted surplus is just a transfer of value from one member of your club to another, and what you really care about is the lost value from the excluded sales.  So you will generally want lower prices than the sellers would set on their own.

Now think of each message in a social network as having two components:  information and self-promotion.  People follow you if you provide them with useful information.  And if your information is useful some would even be willing to wade through some self-promotion to get to your useful information.  But not all. The self-promotion is the price of the information and it’s a transfer of value because it costs the follower his attention and enhances the followee’s reputation.

From Twitter’s point of view the users are a bunch of tiny monopolists willing to exchange a little bit of overall surplus for a bit more of their own.  140 is like a price cap imposed by the owner of the vacant lot which boosts the information/self-promotion ratio of tweets.

The surprising thing is that some users aren’t just outright banned.

 

The New York Times paywall has gone up.  Many people I know are disgusted by the idea of paying for something that they’ve gotten used to getting for free.  Does the paywall make economic sense for the NYT?

A newspaper makes money both from paying customers who buy the paper (print or online) and from advertising revenue.   There is a tension between the two: If the newspaper charges customers, this reduces readership and hence advertising revenue.  It may make sense to give away the newspaper for free, maximize readership and extract profits from advertising.  In this scenario, the paywall might be a mistake, driving away readers and hence advertisers.

Online dissemination of news has other ramifications.  Many HuffPo “articles” are simply links to the NYT with some salacious or provocative headline pasted on.  People clicking through from HuffPo generate yet more readers and hence advertising revenue.  This gives the NYT extra incentive to produce interesting news stories to generate more links and profits.  But HuffPo also gets more readers and revenue because people know they can go there to get aggregated information from lots of sources.   HuffPo does not have to hire John Burns or David Sanger to go to dangerous places and do actual reporting.  They are free-riding off the work done by NYT reporters.  The NYT does not internalize the positive externality it exerts on HuffPo and other sites.  This effect leads to underinvestment in journalism by the NYT.

Should the NYT charge HuffPo to link to its stories?  If the extra readership and advertising revenue compensates the NYT for the positive externality it exerts on HuffPo, there is no issue. But if not, a payment from HuffPo to the NYT can increase profits for both firms by encouraging jointly optimal story production.  It is hard to tell whether anything like this is part of the plan, but it seems not.

We are entering a new world and we will see if it all collapses or changes the equilibrium.

In January I made a prediction and kept it a secret.  Now I can tell you what it was.

For fun I wanted to see how well I could predict the Economics PhD job market. As a simple test I tried to predict who would be selected for the Review of Economic Studies Tour, a kind of all-star team of new PhDs. Here was my predicted list

  1. Gharad Bryan (Yale)
  2. Matt Elliot (Stanford)
  3. Neale Mahoney (Stanford)
  4. Alex Torgovitsky (Yale)
  5. Glen Weyl (Harvard)
  6. Alex Wolitzky (MIT)
  7. Alessandra Voena (Stanford)
  8. Kei Kawai (Northwestern)

And here is the actual list (the RES Tour website is here.)

  1. Alex Wolitzky (going to Stanford?)
  2. Daniel Keniston (MIT, going to Yale)
  3. Mar Reguant (MIT, going to Stanford GSB)
  4. Kei Kawai (going to Princeton?)
  5. Alex Torgovitsky (coming to NU)
  6. Alessandra Voena (going to Chicago?)
  7. Peter Koudijs (Pompeu Fabra, going to Stanford GSB)

So I got 4 out of 8.  (There are usually 7 people, and I predicted 7 originally, but I updated it the next day adding Kawai to the list.  I had been so involved in recruiting students from other schools that I had completely forgotten about our own star student Kei Kawai and as soon as I remembered him I added him to the list.)

I previously blogged about Torgovitsky, Mahoney, and Koudijs.

You can see why I wanted to keep the prediction a secret until the market was over.  You can verify my prediction by cutting and pasting the text in this file and generating its unique SHA1 hash (a digital signature) with this web tool and cross check that it is the hash that I originally posted here, and that I tweeted here, and that is reproduced below.

f502acfb48395d6ab223ca30803f98b9bd6fd6ce

(Here is the original prediction before I added Kawai, and here is the hash for that one.)
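The commit-and-reveal trick is a one-liner with any hash library. The prediction text below is a stand-in, not the actual file, so its digest will not match the hash above.

```python
import hashlib

def commit(prediction: str) -> str:
    """Hash the prediction now; publish only the digest."""
    return hashlib.sha1(prediction.encode("utf-8")).hexdigest()

# Stand-in text -- the real commitment hashed the exact prediction file.
prediction = "1. Gharad Bryan (Yale)\n2. Matt Elliot (Stanford)"
digest = commit(prediction)  # publish this; reveal `prediction` later

# Verification: anyone can recompute the digest from the revealed text.
assert commit(prediction) == digest
# Any tampering, even a single character, produces a different digest.
assert commit(prediction + " ") != digest
```

One caveat if you try this today: SHA1 collisions are now practical, so a fresh implementation of the scheme should use SHA-256 (`hashlib.sha256`); the commit-and-verify logic is otherwise identical.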

I did this as an experiment to see how easy it is to predict job market outcomes.  At the time I made this prediction I had read the files of each of these candidates and interviewed most of them.  I didn’t know where else they had interviews and I made the prediction before the stage of flyout interviews so I had little information about how their job market was going overall.

Getting half right is about what I expected.  To me this is evidence that the market is hard to predict even after having interviewed the candidates. In particular I take it as evidence against the cynical view that the market herds on certain candidates early in the process.  Indeed I would not have changed my prediction much even a week or two later when flyout schedules were in place.

Incidentally the Review Tour rosters say something about the strength of PhD programs.  Here’s a breakdown of the last 6 years and where the tourists received their PhDs:

MIT 12
Harvard 7
NWU 4
Yale 3
Stanford 3
Stanford GSB 2
BU, Duke, LSE, Michigan, NYU, Penn, Princeton, Ohio State, Stern, UPF, UCL —  1 each

Northwestern is doing very well!  (In addition to producing stars, we also do well hiring them.  3 from the tour in the past 6 years, including Alex Torgovitsky this year, an outstanding hire.)

Also I understand that a team is at work creating the web tool that I suggested here for creating and managing secret predictions.  If and when it hits I will announce it here.

With the new social-network aware webapp, Getupp:

Getupp is a neat webapp that helps you set your goals and share them publicly so you’re held accountable to the world. To make sure you keep your commitments, Getupp can notify your Facebook friends if you break a commitment so you can be sure everyone will find out that you’re a slacker.

I have a personal commitment to write a blog post every day.  Each day that I fail I am going to write a blog post to let you know that I failed.

You probably heard about the Facebooks of China. Facebook and Twitter are blocked there and filling the vacuum are some homegrown social networks. Obviously the biggest issue with this in the particular instance of China is freedom of information and expression, so the thought experiment I will propose requires a little abstraction to focus on a separate issue.

Sites like Renren and Kaixin001 are microcosms of today’s changing China — they copy from the West, but then adjust, add, and, yes, even innovate at a world-class level, ultimately creating something unquestionably modern and distinctly Chinese. It would not be too grand to say that these social networks both enable and reflect profound generational changes, especially among Chinese born in the 1980s and 1990s. In a society where the collective has long been emphasized over the individual, first thanks to Confucian values and then because of communism, these sites have created fundamentally new platforms for self-expression. They allow for nonconformity and for opportunities to speak freely that would be unusual, if not impossible, offline. In fact, these platforms might even be the basis for a new culture. “A good culture is about equality, acceptance, and affection,” says Han Taiyang, 19, a psychology major at Tsinghua University who uses Renren constantly. “Traditional thinking restrains one’s fundamental personality. One must escape.”

Goods (like social networks) that have bandwagon effects create the greatest value when the size of the market is largest. But that same effect can cause convergence on a bad standard. One argument, very narrow of course, for trade barriers is to prevent that from happening. We allow each country to develop critical mass in its own standard in isolation before we reduce trade barriers and allow them to compete.

  1. Software agents are invading online poker sites and relieving the humans of their money.
  2. The New York Times web site will beat you at rock scissors paper because you are a predictable human. (mm:  Courtney Conklin Knapp)
  3. Tyler Cowen and his computer make each other better at chess.

Following up on Tyler, the suggestion is that there are gains from specialization in computer/human partnerships.  But it is not enough for Tyler and his computer to beat a computer.  Could Tyler and another human player (of strength comparable to his computer partner) do even better?

Now it is interesting to observe that the other comparison is not possible. Would a team of two computers (with strengths comparable to Tyler and his machine) do even better? How would two computers make a team?  If the two computers came up with different ideas how would they decide which one was better?

010101:  I think we should play Re1. I rate it +.30
1110011: I considered that move and at 22 ply I rate it at +0.05, instead I suggest we sac the Knight.
010101: I considered that move and at 22 ply I rate it at -1.8.
1110011: Here take a look at my analysis.
010101:  Yes I am aware of that sequence of moves, I already considered it.  It’s worth +0.05.
1110011: No, +0.30
010101:  No, +0.05

etc.  Any protocol for deciding which is the right analysis should already have been programmed into the original software.  Put differently, if there was a way to map the pair of evaluations (.3,.05) into a better evaluation y, then the position should already have been evaluated at y by each machine individually.

The only benefit of the two computers would be the deeper search in the same amount of time.  That is, a two computer team is just two parallel processors with exactly the same evaluation heuristic applied to the final position searched. In that sense the human’s unique ability is to understand when to switch heuristics.  (But why can’t this understanding be programmed into software?)

This was going to happen eventually.

A warning: the unsuspecting reader might assume that the content of this article is ironic, exaggerated, or even apocryphal. Ironic remarks about the lethal effects of the adjustment plans that certain economists have promoted, and still promote, are nothing new. Marcelo Matellanes, the late philosopher and economist, maintained that economists, like doctors, should be required to take the Hippocratic oath, but with one additional detail: medical malpractice has more limited effects than the structural adjustment programs that some economists have implemented in Latin American economies. In other words, bad doctors kill one patient at a time; bad economists do damage across the board.

The article, in Spanish obviously, is here.  (Google translate is at your service.) My translation:  two evil economists from the center countries (??) named Sandeep Baliga and Jeffrey Ely have written a paper which demonstrates how to use torture optimally.  They, and all economists for that matter, should report at once for ethical reprogramming.

Gat grope:  Santiago Oliveros.

Bryan Caplan wonders whether economic theory is on the decline. Here are some signs I have noticed:

  1. Econometrica, the most theory-oriented of the top 4 journals has a well-publicized mission to publish more applied, general interest articles, and this is indeed happening.  This comes at the expense of pure theory as well as theoretical econometrics.
  2. The new PhD market was, on the whole, difficult for theorists this year.  Strong candidates from Yale, Stanford, NYU and Princeton were placed much lower than expectations, some without a job offer in North America as of yet.  As far as I can tell, there will be only two junior theorists hired at top 5 departments.

But there are many positive signs too:

  1. Theorists have been recruiting targets for high-profile private sector jobs.  Michael Schwarz and Preston McAfee at Yahoo!, Susan Athey at Microsoft for example.  In addition the research departments in these places are full of theorists-on-leave.
  2. Despite some overall weakness, theory is and always has been well represented at the top of the junior market.  This year Alex Wolitzky, as pure a theorist as there is, is the clear superstar of the market.  Here is the list of invitees to the Review of Economic Studies Tour from previous years.  This is generally considered to be an all-star team of new PhDs in each year.  Two theorists out of seven per year on average.  (No theorist last year though.)
  3. In recent years, two new theory journals, Theoretical Economics and American Economic Journal:  Microeconomics, have been adopted by the leading Academic Societies in economics.  These journals are already going strong.
  4. Market design is an essentially brand new field and one of the most important contributions of economics in recent years.  It is dominated by theorists.

In my opinion there are some signs of change but, correctly interpreted, these are mostly for the better.  Decision theory, always the most esoteric of subfields, has moved to the forefront as a second wave of behavioral economics.  Macroeconomics today is more heavily theory-oriented than ever.  Theorists (and theory journals) are drawn away from pure theory and toward applied theory not because pure theory has diminished in any absolute sense, but rather because applied theory has become more important than ever.

Professor Caplan offers some related observations in his commentary:

…mathematicians masquerading as economists were never big at GMU, and it’s hard to see how they could do well in the blogosphere either.

I am sure he is not talking about Sandeep and me because we are just as bad at math as all of the other bloggers who pretend to be economists.  But just in case he is, I invite him to take a look around.  Finally,

My econjobrumors insider tells me that its countless trolls are largely frustrated theorists who feel cheated of the respect they think the profession owes them.  Speculation, yes, but speculation born of years of study of their not-so-silent screams.

He is talking about the people who anonymously post sometimes hilarious, sometimes obnoxious vitriol on that outpost of grad student angst known as EJMR. I wonder how he could possibly know the research area of anonymous posters to that web site? Among all the economists who feel cheated out of the respect that they think the profession owes them why would it be that theorists are the most likely to troll?

Whenever I teach the Vickrey auction in my undergraduate classes I give this question:

We have seen that when a single object is being auctioned, the Vickrey  (or second-price) auction ensures that bidders have a dominant strategy to bid their true willingness to pay. Suppose there are k>1 identical objects for sale.  What auction rule would extend the Vickrey logic and make truthful bidding a dominant strategy?

Invariably the majority of students give the intuitive, but wrong answer.  They suggest that the highest bidder should pay the second-highest bid, the second-highest bidder should pay the third-highest bid, and so on.

Did you know that Google made the same mistake?  Google’s system for auctioning sponsored ads for keyword searches is, at its core, the auction format that my undergraduates propose (plus some bells and whistles that account for the higher value of being listed closer to the top and Google’s assessment of the “quality” of the ads.)  And indeed Google’s marketing literature proudly claims that it “uses Nobel Prize-winning economic theory.”  (That would be Vickrey’s Nobel.)

But here’s the remarkable thing.  Although my undergraduates and Google got it wrong, in a seemingly miraculous coincidence, when you look very closely at their homebrewed auction, you find that it is not very different at all from the (multi-object) Vickrey mechanism.  (In case you are wondering, the correct answer is that all of the k highest bidders should pay the same price: the k+1st highest bid.)
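The two pricing rules side by side, in a minimal sketch with unit-demand bidders and hypothetical bids:

```python
def naive_prices(bids, k):
    """The intuitive (wrong) rule: the i-th highest winner pays the
    (i+1)-th highest bid."""
    b = sorted(bids, reverse=True)
    return b[1:k + 1]

def vickrey_prices(bids, k):
    """Multi-unit Vickrey with unit demand: every winner pays the same
    price, the (k+1)-th highest bid."""
    b = sorted(bids, reverse=True)
    return [b[k]] * k

bids = [10, 8, 5, 3]
print(naive_prices(bids, 2))    # [8, 5] -- winners pay different prices
print(vickrey_prices(bids, 2))  # [5, 5] -- both pay the 3rd-highest bid

# Why the naive rule breaks truthfulness: the 10-bidder could report 6,
# still win a unit, and pay 5 instead of 8 -- so bidding her true value
# is not a dominant strategy.
```

Under the Vickrey rule no winner’s payment depends on her own bid, which is what restores the dominant-strategy logic of the single-object auction.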

In a famous paper, Edelman, Ostrovsky and Schwarz (and contemporaneously Hal Varian) studied the auction they named The Generalized Second Price Auction (GSPA) and showed that it has an equilibrium in which bidders, bidding optimally, effectively undo Google’s mistaken rule and restore the proper Vickrey pricing schedule.  It’s not a dominant strategy, but it is something pretty close:  if everyone bids this way no bidder is going to regret his bid after the auction is over. (An ex post equilibrium.)

Interestingly this wasn’t the case with the old style auctions that were in use prior to the GSPA.  Those auctions were based on a first-price model in which the winners paid their own bids.  In such a system you always regret your bid ex post because you either bid too much (anything more than your opponents’ bid plus a penny is too much) or too little.  Indeed, advertisers used software agents to modify their standing bids at high-frequencies in order to minimize these mistakes.  In practice this meant that auction outcomes were highly volatile.

So the Google auction was a happy accident.  On the other hand, an auction theorist might say that this was not an accident at all.  The real miracle would have been to come up with an auction that didn’t somehow reduce to the Vickrey mechanism.  Because the revenue equivalence theorem says that the exact rules of the auction matter only insofar as they determine who the winners are.  Google could use any mechanism and as long as it’s guaranteed that the bidders with the highest values will win, that can be accomplished in an ex post equilibrium with the bidders paying exactly what they would have paid in the Vickrey mechanism.

The government in Egypt is cutting off communications networks, including mobile phones and the Internet.

The decision to get out and protest is a strategic one.  It’s privately costly and it pays off only if there is a critical mass of others who make the same commitment.  It can be very costly if that critical mass doesn’t materialize.

Communications networks affect coordination.  Before committing yourself you can talk to others, check Facebook and Twitter, and try to gauge the momentum of the protest.  These media aggregate private information about the rewards to a protest but it’s important to remember that this cuts two ways.

If it looks underwhelming you stay home, go to work, etc.  And therefore so does everybody who gets similar information as you.  All of you benefit from avoiding protesting when the protest is likely to be unsuccessful.  What’s more, in these cases even the regime benefits from enabling private communication, because the protest loses steam.

Now consider the strategic situation when your lines of communication are cut and you are acting in ignorance of the will of others.  The first observation is that in those cases when the protest would have fizzled, many people will go out and protest without advance knowledge of this.  Many are worse off, including the regime.

The second observation is that even in those cases when protest coordination would have been amplified by private communication, shutting down communication may nevertheless have the same effect, perhaps even a stronger one.  There are two reasons for this. First, the regime’s decision to shut down communications networks is an informed one.  They wouldn’t bother taking such a costly and face-losing move if they didn’t think that a protest was a real threat.  The inference, therefore, when you are in your home, you can’t call your friends, and the internet is shut down, is that the protest has a real chance of being effective.  The signal you get from this act by the regime substitutes for the positive signal you would have gotten had they not acted.

The other reason is that this signal is public.  Everyone knows that everyone knows … that the internet has shut down.  Instead of relying on the noisy private signal that you get from talking to your friends, now you know that everybody is seeing exactly the same thing and is emboldened in exactly the same way.

It’s as if the regime has done the information aggregation for you and packaged it into a nice fat public signal.  This removes a lot of the coordination uncertainty and strengthens your resolve to protest.
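A toy simulation makes the private-versus-public signal distinction concrete (every number here is invented for illustration): with idiosyncratic private signals, turnout is interior and uncertain; with a single public signal, everyone moves in lockstep.

```python
import random

# Toy coordination model: 1000 citizens protest if their signal of
# anti-regime sentiment exceeds a cutoff.  All parameters invented.
random.seed(1)
N, CUTOFF = 1000, 0.5
THETA = 0.55      # true sentiment, assumed just favorable to protest
NOISE = 0.2       # standard deviation of signal noise

def turnout_private():
    # Each citizen sees her OWN noisy signal: turnout is interior and
    # nobody is sure how many others will show up.
    return sum(THETA + random.gauss(0, NOISE) > CUTOFF for _ in range(N))

def turnout_public():
    # Everyone sees the SAME signal (here, the shutdown itself):
    # all-or-nothing coordination, no strategic uncertainty.
    s = THETA + random.gauss(0, NOISE)
    return N if s > CUTOFF else 0
```

In the private-signal case turnout lands somewhere in the middle, so each citizen risks being part of an underwhelming crowd; in the public-signal case the crowd is either absent or everyone at once.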

Addendum: Tyler has some related observations.

I was sitting in a seminar and the guy was talking about unraveling in the labor market.  Someone asked a question whether it could happen in reverse. The speaker said “Do you mean raveling up?  Yes it is possible that there is raveling up.”

And I thought “Wait a minute, you don’t need the ‘up’ in ‘raveling up’ because surely the opposite of unraveling is just ‘raveling.’ ”  But then I realized that I have never heard that word used.  Unraveling, all the time.  Raveling, never.  So I went for the dictionary.  Three dictionaries in a row gave me a definition of raveling something like this.

Ravel. verb.  To disentangle.  Unravel.

What?  To ravel means to unravel??  But then what does unravel mean?

Unravel. verb.  To untangle.

So two very strange things now.  First, unravel has an independent definition (in terms of other words) but ravel, the un-prefixed word, is defined in terms of the prefixed unravel.  Second, ravel is defined to mean unravel!

My colleague Rakesh Vohra thought the good old Oxford English Dictionary would save us from being swallowed up into the lexicographic Weezer-vortex, but alas (login with username trynewoed, password trynewoed.  works until Feb 5), not even the Queen can help:

1. To entangle or disentangle

(!) The word means A and also the opposite of A. Doesn’t it now follow that disentangle means the same as entangle? And isn’t there a theorem that once you allow a contradiction into a formal system you can prove any statement at all? So if we flip through enough pages of the OED eventually we can prove that True means False?

2. To become unwound, to fray; to unravel.

3. To disentangle, make plain or clear.

Yeah right.

(drawing:  Road of Life from www.f1me.net)

Illinois governor Pat Quinn is considering whether to sign into law a tax bill that includes a new tax on online retailers, the so-called Amazon Tax.  Until now, online transactions have not been taxed in states where the retailer has no physical presence (with a few exceptions).  The new measure would end this in Illinois, treating Amazon as an Illinois retailer so long as one of its online affiliates is based in the state.  (Every state has thousands of online affiliates.)

Amazon is responding by playing chicken.  From Presh Talwalker:

So Amazon is fighting back at Illinois with a threat. Amazon has emailed its commissioned affiliates the following message:

We regret to inform you that the Illinois state legislature has passed an unconstitutional tax collection scheme that, if signed by Governor Quinn, would leave Amazon.com little choice but to end its relationships with Illinois-based Associates. [emphasis mine]

The following logic seems to explain the motive. If Amazon ends its affiliate relationships in Illinois, then it would have no physical presence in the state, and hence it would get around the bill.

The email levies harsh criticism at Illinois and is meant to garner sympathy. In reality, the move is calculated and strategic.

Amazon is threatening all affiliates on purpose – even though it doesn’t have to. Here is an interesting tidbit the Chicago Tribune reported:

The bill applies only to affiliates that have at least $10,000 a year in revenue. But if large retailers, such as Amazon, cut off all affiliates in Illinois, it would end commission streams to small Web sites, such as bloggers, who might sell Amazon goods at their sites. Amazon could not be reached for comment.

Amazon is playing a classic retaliatory strategy. If Illinois wants to pass this law, then it will do everything to hurt the state and even otherwise innocent and small-time bloggers, who might decide it’s time to complain to Gov. Pat Quinn.

There’s more in Presh’s article here. (Amazon seems to understand reputation building because it carried through with its threat in Colorado when that state passed a similar measure.)

My view is that the threat is credible even ignoring reputation-building.  The revenue Amazon would lose by having to collect sales tax would dwarf the losses from cutting off affiliates.

Made it to Brooklyn alive. I don’t see what the big deal is, some nice chap shoveled me a spot and even gave me a free chair!

From @TheWordAt.

Speaking of which, have you noticed the similarity between shovel-earned parking dibs and intellectual property law?  In both cases the incentive to create value is in-kind:  you get monopoly power over your creation.  The theory is that you should be rewarded in proportion to the value of the thing you create.  It’s impossible to objectively measure that and compensate you with cash so an elegant second-best solution is to just give it to you.

At least in theory.  But in both IP and parking dibs there is no way to net out the private benefit you would have earned anyway even in the absence of protection.  (Aren’t most people shoveling spaces because otherwise they wouldn’t have any place to put their car in the first instance? Isn’t that already enough incentive?)  And all of the social benefits are squandered anyway due to fighting ex post over property rights.

I wonder how many people who save parking spaces with chairs are also software/music pirates?

Finally, here is a free, open-source Industrial Organization textbook (dcd: marciano.)  This guy did a lot of digging and we all get to recline in his chair.

You can find it here, thanks to reader Elisa for hunting it down. The core is paragraphs 43-112 (starting on page 27), which lay out the new rules. I will give some excerpts and my own commentary.

The regulations break down into 4 categories: transparency, no blocking, no unreasonable discrimination, and reasonable network management. Transparency is what it sounds like: providers are required to maintain and make available data on how they are managing their networks. The blocking and discrimination rules are the most important and the ones I will focus on.

No Blocking.

A person engaged in the provision of fixed broadband Internet access service, insofar as such person is so engaged, shall not block lawful content, applications, services, or non-harmful devices, subject to reasonable network management. (paragraph 63)

This is the clearest statement in the entire document. (Many phrases are qualified by the “reasonable network management” toss-off.  In the abstract that could be a troubling grey area, but it is pretty well clarified in later sections and appears to be mostly benign, although see one exception I discuss below.)  The no-blocking rule is elaborated in various ways:  providers cannot restrict users from connecting compatible devices to the network, degrading particular content or devices is equivalent to blocking and not permitted, and especially noteworthy:

Some concerns have been expressed that broadband providers may seek to charge edge providers simply for delivering traffic to or carrying traffic from the broadband provider’s end-user customers. To the extent that a content, application, or service provider could avoid being blocked only by paying a fee, charging such a fee would not be permissible under these rules. (paragraph 67)

No Unreasonable Discrimination

A person engaged in the provision of fixed broadband Internet access service, insofar as such person is so engaged, shall not unreasonably discriminate in transmitting lawful network traffic over a consumer’s broadband Internet access service. Reasonable network management shall not constitute unreasonable discrimination.

This rule is heavily qualified in the paragraphs that follow.  Here is my framework for reading these.  There are three typical ways a provider would discriminate:  differentially pricing various services (i.e. you pay differently whether you are accessing Facebook or YouTube), differentially pricing by quantity (i.e. the first MB costs more or less than the last), or differentially pricing by bandwidth (i.e. holding fixed the quantity you pay more if you want it sent to you faster, for example by watching HD video.)

The rules seem to consider some of these forms of discrimination unreasonable but others reasonable.  The clearest prohibition is against the first form of discrimination, by data type.

For a number of reasons, including those discussed above in Part II.B, a commercial arrangement between a broadband provider and a third party to directly or indirectly favor some traffic over other traffic in the broadband Internet access service connection to a subscriber of the broadband provider (i.e., “pay for priority”) would raise significant cause for concern. (paragraph 76)

Such a ban is clearly dictated by economic efficiency.  The cost of transmitting a datagram is independent of the content it contains and therefore efficient pricing should treat all content equally on a per-datagram basis.  This principle is the hardest to dispute and the FCC has correspondingly taken the clearest stand on it.

As for quantity-based discrimination:

We are, of course, always concerned about anti-consumer or anticompetitive practices, and we remain so here. However, prohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks. The framework we adopt today does not prevent broadband providers from asking subscribers who use the network less to pay less, and subscribers who use the network more to pay more.  (paragraph 72)

So tiered service by quantity is permitted.  Note that the wording given above is off the mark in terms of what efficiency dictates.  It is not quantity per se that should be priced but rather congestion.  A toll-road is a useful metaphor.  From the point of view of efficiency, the purpose of a toll is to convey to drivers the social cost of their use of the road.  When drivers must pay this social cost, they are induced to make the efficient decision whether to use the road by comparing it to their private benefit.

The social cost is zero when traffic is flowing freely (no congestion) because an additional driver doesn’t slow anybody else down. So tolls should be zero during these periods.  Tolls are positive only when the road is utilized at capacity and additional drivers reduce the value of the road to others.
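In code, the efficiency logic of the toll is a one-liner (the capacity and externality figures below are invented for illustration): the efficient toll equals the congestion externality, so it switches on only at capacity.

```python
# Efficient toll = the congestion externality an extra driver imposes.
# Zero below capacity, positive at capacity.  Numbers are made up.
CAPACITY = 1000        # cars/hour the road carries without slowdown
EXTERNALITY = 2.50     # $ delay cost one extra car imposes at capacity

def efficient_toll(traffic):
    """Toll that makes a driver internalize the social cost of entry."""
    return EXTERNALITY if traffic >= CAPACITY else 0.0
```

An off-peak driver pays nothing no matter how many miles she logs; a peak-hour driver pays the full externality even for a short trip, which is exactly the sense in which total quantity is the wrong thing to price.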

So “lighter users subsidizing heavier users” sounds unfair but it’s really orthogonal to the principles of efficient network management.  In an efficiently priced network the off-peak users are subsidized by the peak users regardless of their total amount of usage.  And this is how it should be, not because of anything having to do with fairness but because of incentives for efficient usage.

There is one big problem with this toll-road metaphor when it comes to the Internet, however.  The whole point of peak pricing is to signal to drivers that it’s costly to drive right now.  But when you are downloading content from the Internet things are happening too fast for you to respond to up-to-the-second changes in congestion.  It is just not practical to have prices adjust in real time to changing network conditions as dictated by peak-load pricing.  And if users cannot respond to congestion prices in real time, no purpose is served by calculating prices ex post and sending users the bill at the end of the month.

Given this, it could be argued that a reasonable proxy is to charge users by their total usage.  It’s a reasonable approximation that those with greater total usage are also most likely to be imposing greater congestion on others.  And the FCC rules permit this.  (Note that in particular, what is implied by tiered pricing as a proxy for congestion pricing is not a quantity discount but in fact a quantity surcharge. The per-datagram price is larger for heavier users.)
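A minimal sketch of such a quantity surcharge (tier caps and rates invented): increasing-block pricing makes the marginal, and therefore the average, price per GB rise with total usage, the opposite of a quantity discount.

```python
# Increasing-block pricing: later GBs cost MORE per unit, so total
# usage proxies for the congestion a user imposes.  Tiers are made up.
TIERS = [(100, 0.10), (200, 0.15), (float("inf"), 0.25)]  # (GB cap, $/GB)

def monthly_bill(usage_gb):
    """Bill usage against successive tiers at rising marginal rates."""
    total, prev_cap = 0.0, 0.0
    for cap, rate in TIERS:
        if usage_gb > prev_cap:
            total += (min(usage_gb, cap) - prev_cap) * rate
        prev_cap = cap
    return total
```

Under this schedule a 50 GB user pays $5, an average of $0.10/GB, while a 400 GB user pays $75, an average of about $0.19/GB: the heavier user faces the surcharge.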

Discrimination by bandwidth is not directly addressed.  It is therefore implicitly allowed because paragraph 73 reads “Differential treatment of traffic that does not discriminate among specific uses of the network or classes of uses is likely reasonable. For example, during periods of congestion a broadband provider could provide more bandwidth to subscribers that have used the network less over some preceding period of time than to heavier users.”

But the following paragraph comes from the section on Network Management.

Network Congestion. Numerous commenters support permitting the use of reasonable network management practices to address the effects of congestion, and we agree that congestion management may be a legitimate network management purpose. For example, broadband providers may need to take reasonable steps to ensure that heavy users do not crowd out others. What constitutes congestion and what measures are reasonable to address it may vary depending on the technology platform for a particular broadband Internet access service. For example, if cable modem subscribers in a particular neighborhood are experiencing congestion, it may be reasonable for a broadband provider to temporarily limit the bandwidth available to individual end users in that neighborhood who are using a substantially disproportionate amount of bandwidth. (paragraph 91)

At face value it gives well-intentioned providers the ability to manage congestion.  But there doesn’t seem to be a clear statement about how this ability can be integrated with pricing.  Can providers sell “managed” service at a discount relative to “premium” service?  One reassuring passage emphasizes that network management practices must be consistent with the no-discrimination-by-data-type mandate.  So, for example, congestion caused by high-bandwidth video must be managed equally whether it comes from YouTube or from Comcast’s own video services.

Finally, the rules permit what’s called “end-user controlled” discrimination, i.e. 2nd degree price-discrimination.  This means that broadband providers are permitted to offer an array of pricing plans from which users select.

Maximizing end-user control is a policy goal Congress recognized in Section 230(b) of the Communications Act, and end-user choice and control are touchstones in evaluating the reasonableness of discrimination. As one commenter observes, “letting users choose how they want to use the network enables them to use the Internet in a way that creates more value for them (and for society) than if network providers made this choice,” and “is an important part of the mechanism that produces innovation under uncertainty.” Thus, enabling end users to choose among different broadband offerings based on such factors as assured data rates and reliability, or to select quality-of-service enhancements on their own connections for traffic of their choosing, would be unlikely to violate the no unreasonable discrimination rule, provided the broadband provider’s offerings were fully disclosed and were not harmful to competition or end users.

While this paints a too-rosy picture of the consumer-welfare effects of 2nd degree price-discrimination (it typically makes some consumers worse off and can easily make all consumers worse off) it seems hard to imagine how you can allow the kind of tiered pricing already discussed and not allow consumers to choose among plans.

So the FCC is allowing broadband providers to roll out metered service, possibly with quantity premiums, and there is a grey area when it comes to bandwidth restrictions.  These are consistent with, but not implied by, efficient pricing, and of course we are putting them in the hands of monopolists, not social planners.  They certainly fall short of what net-neutrality hawks were asking for, but it was wishful thinking to imagine that these changes were not coming.

I think that the no-blocking and no unreasonable discrimination rules are the core of net-neutrality as an economic principle and getting these is more than sufficient compensation for tiered pricing.

Final disclaimer:  everything above applies to “fixed broadband providers” like cable or satellite.  The FCC’s approach to mobile broadband can be summarized as “wait-and-see.”

Today the commissioners of the FCC will meet to vote on a new proposed policy concerning Net Neutrality.  It is expected to pass.  Pundits, policymakers and media of all predispositions are hyperventilating over the proposal but none link to it and I can’t find the actual document anywhere.  Does anybody have a link to it?

Throw a party.  And use a system like evite.com to handle the invitations. There is a typical pattern to the responses over time.  You will have an initial flurry of yeses and regrets followed by a long period of silence punctuated by sporadic responses which continues to the days before the party.  Then there is a final flurry and that is when you learn if your friends are real friends.

Because people come to your party for one of two reasons.  Either they like you or they just feel obligated for reasons like you are an important co-worker or they don’t want to hurt your feelings, etc.  Think of how these two types of people will handle your invitation.

An invitation is an option that can be exercised at any time before the date of the party.  The people who did not respond immediately are waiting to decide whether to exercise the option.  If she’s a true friend then this is because she has a potential conflict that would prevent her attending.  She is waiting and hoping to avoid that conflict.  When she is sure there is no conflict she will say yes.

The other people are hoping for an excuse not to come.  Once they get a better offer, manage to schedule a conflicting business trip, or otherwise commit themselves, they will send their regrets.

In both cases, when the party is imminent, the option value of waiting is gone. Those who want to come but haven’t gotten out of their conflict give up and send their regrets. Those who hoped to get out of it but failed to come up with a believable excuse give up and accept.

So, a simple measure of how much your friends like you is the proportion of acceptances among the responses that arrive in the final days.  Lots of acceptances means you had better set aside a few extra drinks for yourself.
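As a toy illustration with invented data, the metric is just the acceptance share among responses arriving inside some final window:

```python
# Hypothetical RSVP log: (response, days_before_party_when_sent).
rsvps = [("yes", 30), ("no", 28), ("yes", 21), ("no", 2),
         ("yes", 2), ("yes", 1), ("no", 1), ("yes", 0)]

FINAL_WINDOW = 3  # how many days out counts as "the final days"

# Late responders are the ones who held the option open to the end.
late = [resp for resp, days in rsvps if days <= FINAL_WINDOW]
friend_score = late.count("yes") / len(late)  # share of late yeses
```

A high score means the late responders were mostly true friends waiting out their conflicts; a low score means they were mostly hunting for an excuse.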

Because communication requires both a talker and a listener and it takes time and energy for the listener to process information.  So it may be cheap to talk but it is costly to listen.

But then the cost of listening implies that there is an opportunity cost to everything you say.  Because you can only say so much and still be listened to. They won’t drink from a firehose.

When you want to be listened to you have an incentive to ration what you say, and therefore the mere fact that you chose to say something conveys information about how valuable it was to you to have it heard.  There is no babbling because babbling isn’t worth it.

I also believe that this is a key friction determining the architecture of social networks.  Who talks and who listens to whom?  The efficient structure economizes on the cost of listening.  It is efficient to have a small number of people who specialize in listening to many sources then selectively “curating” and rebroadcasting specialized content. End-listeners are spared the cost of filtering.  The economic question is whether the private and social incentives are aligned for someone who must ration his output in order to attract listeners.

The District of Columbia is testing a system to allow overseas military personnel to submit absentee electronic ballots via the internet.  Obviously security is a major concern, and they followed a suggestion often made by the security community: open the system to the public and allow white-hat hackers to try to find exploits.  Here is the account of one team who participated and found a vulnerability within 36 hours.

By formatting the string in a particular way, we could cause the server to execute commands on our behalf. For example, the filename “ballot.$(sleep 10)pdf” would cause the server to pause for ten seconds (executing the “sleep 10” command) before responding. In effect, this vulnerability allowed us to remotely log in to the server as a privileged user.

As a result, deployment of the system has been delayed.
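For the curious, the bug class is shell command-substitution injection. Here is a safe, self-contained Python demonstration, using echo as a stand-in for whatever command the DC server actually invoked (POSIX shell assumed):

```python
import shlex
import subprocess

# The attack string from the report: a "filename" carrying a shell
# command substitution.  Here the payload is a harmless echo.
filename = "ballot.$(echo pwned).pdf"

# Vulnerable pattern: splicing untrusted input into a shell command.
# With shell=True the shell expands $(...) and RUNS the payload.
out = subprocess.run(f"echo {filename}", shell=True,
                     capture_output=True, text=True).stdout.strip()
# out is now "ballot.pwned.pdf" -- the embedded command executed.

# Fix 1: pass an argv list so no shell ever parses the input.
safe = subprocess.run(["echo", filename],
                      capture_output=True, text=True).stdout.strip()
# safe is the literal filename; the payload is inert data.

# Fix 2: if a shell is unavoidable, escape the input first.
quoted = shlex.quote(filename)  # "'ballot.$(echo pwned).pdf'"
```

The argv-list form is the standard fix: the attacker's string is only ever treated as data, never as code.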

This is exactly the kind of open, public testing that many of us in the e-voting security community — including me — have been encouraging vendors and municipalities to conduct.

But it could have turned out differently.  If a black-hat got there first, they could fix the vulnerability after first leaving themselves a backdoor.  Then the test comes out looking like a success, it goes live, and …

You can subscribe to a service and receive calls reminding you that you are awesome (ht MR).

You can probably think of people who would buy this service thinking it will bolster their self-esteem.  You might even imagine that you yourself would get a little boost from having someone call you personally and tell you that you rock.  But you probably think that this is leveraging some kind of behavioral, kludgy, semi-rational wiring in your personality and that you would quickly get de-sensitized to it.

But I disagree.  I think that it would be a valuable motivator even for the most hyper-rational among us. Because it’s not a trick at all but really just a way to preserve mindsets over time.  Suppose I tell you about something great I did.  Then later on, when I am about to take on some challenge, like let’s say I am about to give a big lecture to an intimidating audience, you call me and remind me of the great thing I did.  And you add your own interpretation of why it was great and how it shows that I am awesome.  I don’t need to believe anything about your motivations, your reminder restores my brain to the state it was in when I myself was thinking about how great I am and why.  And if your added color convinces me that you honestly agree with me then all the better.

Simply “writing it down” or memorizing the state of mind is not a perfect substitute.  At a very minimum this is simply based on cost-minimization.  Someone else is doing the remembering for me and that is worth something.  But it’s even more than that.

If you have been following me it will come as no surprise that I have no trouble at all remembering what a stupendous guy I am and all the super-amazing feats of astounding splendifery I have accomplished in my life.  Yet even with that overflowing supply of memories of greatness, I still get nervous in the face of a challenge.  When that happens I have my daughter repeat something she once said to me at a minor moment of greatness: “you’re so smart daddy.”  The memory of that moment is imprinted on the sound of her voice.  That sound hooks into the vivid edges of my direct experience of the event.  Immediately it’s “oh yeah, that’s how it’s done” and my perspective on the situation is totally new.  And yet on the surface all she is doing is duplicating a memory that I had in there already.

Daughters are great, and not just for fueling your ego, but they cost more than $40 a month.  By comparison, Awesomeness Reminders look like a pretty good deal.

If Twitter bans the sale of usernames then they take away any incentive to squat.  But is the commitment credible?

While Twitter tries to work out how to make money, a Spaniard has sold his username on the site for a six-figure sum.

In 2007 Israel Meléndez set up a Twitter account under his first name. This year he was approached by the state of Israel, which wanted to buy @Israel from him for a quantity of dollars that, he told Spain’s Público newspaper, included “five zeroes”.

The sale went through despite Twitter’s stated policy of preventing username squatting and Meléndez, who runs adult websites for a living, said Twitter itself had advised the Israeli government on how this could be done.

“All the business of getting in contact with Twitter was done by them [Israel],” Meléndez said. “I never saw any emails [between them] and Twitter never contacted me, but if the @Israel account is open and working I imagine it means that Twitter had no problem with the transaction.”

Have you seen these?  They mysteriously lurk at the top right of miscellaneous web pages on the Harvard Econ department web site.  Like here: look, Philippe Aghion’s page has brains.  Mankiw?  Brains. And there is no caption or explanation given.  I started to think that it was there as some kind of experiment, like those guys in gorilla suits that run across the screen when you’re supposed to be counting basketballs.  So I am here to say that I am not blind to these brains.  And I see the word juice there.  You can’t slip anything by me.
