A new joint paper with Alex Frankel and Emir Kamenica. The talk begins with tennis, the discussion of American Idol begins at 12:14, how to write a mystery novel is at 15:51, the M. Night Shyamalan dilemma is at 17:32, the ESPN Classic dilemma is at 18:50, and the optimal sporting contest is at 28:37.


## 8 comments


October 18, 2012 at 6:29 am

Anonymous: Great paper and great talk. As an aside: is there a place to get all the talks from the Istanbul meeting?

October 18, 2012 at 8:54 am

jeff: I don’t think they are available yet, but I do think they will be eventually.

October 18, 2012 at 2:28 pm

Sune Kristian Jakobsen: Thanks for a very interesting talk. I saw your post this morning, and I was really looking forward to seeing the talk (and wasn’t disappointed). I’ve got three different comments/questions:

1) You mention that if you just say the true state of the world right away, you don’t get much suspense and surprise. Yet when people in the real world have to tell someone bad news (e.g. tell them a serious diagnosis, or that someone they love has died), we usually try to minimize the surprise by first saying something like “I have some bad news for you”, rather than just blurting out the bad news. Do you have a good explanation for that?

2) I once tried to think about how to design a game that is as good as possible at determining the best player. Say Alice and Bob play a game and you know that Alice wins each point with some unknown probability p (so all the points are independent). Let v(p) be the probability that Alice wins the game given that she wins each point with probability p. You want to design a fair game (symmetric between Alice and Bob) such that the expected number of points needed is not too large, but which is at the same time good at testing whether p > 1/2 (formalized, e.g., by asking to maximize v′(1/2)). Is there a trade-off between making a game that finds the best player and a game that is surprising and has a lot of suspense?
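For a first-to-n series, v(p) has a closed binomial form, and v′(1/2) can be estimated by a central difference. Here is a small sketch (my own illustration, not anything from the talk or the paper) suggesting that longer series are better tests of p > 1/2, though with diminishing returns:

```python
from math import comb

def v(p, n):
    """P(Alice wins a first-to-n series) when she wins each point w.p. p.
    Equivalent to winning at least n of 2n-1 independent points."""
    total = 2 * n - 1
    return sum(comb(total, k) * p**k * (1 - p)**(total - k)
               for k in range(n, total + 1))

def v_prime_half(n, h=1e-6):
    """Central-difference estimate of v'(1/2), one proxy for how well
    the series discriminates between p > 1/2 and p < 1/2."""
    return (v(0.5 + h, n) - v(0.5 - h, n)) / (2 * h)

# v'(1/2) increases in n, but only roughly like sqrt(n):
for n in (1, 2, 3, 5, 10):
    print(f"first to {n:2d}: v'(1/2) = {v_prime_half(n):.3f}")
```

For n = 1 this gives exactly 1 (since v(p) = p), and for a best of 3 it gives 1.5.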

3) I study information theory, so I was a bit surprised that you could give a talk about surprise without mentioning entropy! If I had to define suspense, I would use the mutual information I(T; S_k | S_1, …, S_{k−1}) between the result (or true state) T and the next signal S_k given all the previous information S_1, …, S_{k−1}. Similarly, to measure the surprise I might use the Kullback-Leibler divergence D(C||P) between the previous probability distribution P and the current distribution C (that is, the distribution after the surprise). Do you think that would give interesting differences in the conclusions?
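For concreteness, both proposed measures are easy to compute. A minimal sketch with toy numbers of my own choosing (a binary outcome, and a Bayes-plausible signal that sends a 50/50 belief to 90/10 or 10/90 with equal probability):

```python
from math import log2

def kl(c, p):
    """Kullback-Leibler divergence D(C || P) in bits."""
    return sum(ci * log2(ci / pi) for ci, pi in zip(c, p) if ci > 0)

# Surprise: beliefs about a binary outcome move from 50/50 to 90/10.
prior, posterior = (0.5, 0.5), (0.9, 0.1)
surprise = kl(posterior, prior)

# Suspense before the signal: the expected K-L divergence of the
# posterior from the prior, which equals the mutual information
# I(T; S) between the outcome and the signal. Here the posterior is
# (0.9, 0.1) or (0.1, 0.9) with equal chance, so the two coincide.
suspense = 0.5 * kl((0.9, 0.1), prior) + 0.5 * kl((0.1, 0.9), prior)
print(surprise, suspense)
```

By symmetry the realized surprise and the ex-ante suspense are equal here (about 0.53 bits); with asymmetric signals they generally differ.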

Sorry about all these questions. I hope you find at least one of them interesting.

October 19, 2012 at 3:58 am

Alex F: 1) We can come up with a lot of explanations for why this might be, but the short answer is that it’s not in our model. We take as a primitive that we’re looking at a situation where you get entertainment — as opposed to pain or boredom — from learning the outcome over time. One relevant factor is that if you need to learn information to make a decision, then you probably want to know as soon as possible and entertainment isn’t important. That’s why we focus on noninstrumental information in the paper. Second, there seems to be an interaction with the “stakes.” If you don’t care about something at all — if you have no investment in the outcome — you probably get no entertainment from it. So you want to watch sports games for a team you follow, not for random teams you’ve never heard of. And if you’re too invested in the outcome — the doctor is telling you whether you have cancer — then suspense might be painful instead of pleasurable. This raises interesting issues in settings like gambling, where you might choose your own stakes.

2) Let me first ignore your question and tell you a bit of what we’ve done. Suppose we *know* p in advance. If we want to maximize suspense or surprise in a game that’s first to N points, or a playoff series that’s first to N wins, what N should we choose? Well, the bigger N is, the slower we reveal information, and slow information revelation gives more surprise. But there’s a tradeoff: we also get rid of the aggregate uncertainty, which reduces surprise. If you beat me in 70% of games, then I still have a chance of winning a best-of-1 or a best-of-3 series; I have no chance of winning a best-of-99 series. It turns out that the closer the winning probability is to 50%, the greater N should be. For p=80%, we play a single game. For p=70%, we do a best of 5. If there are no other costs limiting the number of games, then for p=50% we’d approach infinitely many games.
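A rough numerical check on this trade-off (my own sketch, using expected total absolute belief movement as a stand-in for the paper's actual surprise objective): a small dynamic program over series scores shows total movement rising with series length when p is near 50%, and falling when p is lopsided.

```python
from functools import lru_cache

def expected_surprise(p, n):
    """Expected total |belief change| over a first-to-n series when
    Alice wins each point with known probability p. A proxy measure,
    not the paper's exact objective."""
    @lru_cache(maxsize=None)
    def win_prob(a, b):
        # P(Alice wins the series | score is a-b)
        if a == n: return 1.0
        if b == n: return 0.0
        return p * win_prob(a + 1, b) + (1 - p) * win_prob(a, b + 1)

    @lru_cache(maxsize=None)
    def remaining(a, b):
        # expected sum of future |belief jumps| from score a-b
        if a == n or b == n: return 0.0
        mu = win_prob(a, b)
        return (p * (abs(win_prob(a + 1, b) - mu) + remaining(a + 1, b))
                + (1 - p) * (abs(win_prob(a, b + 1) - mu) + remaining(a, b + 1)))

    return remaining(0, 0)

# Even matchups reward long series; lopsided ones don't:
for p in (0.5, 0.7, 0.8):
    print(p, [round(expected_surprise(p, n), 3) for n in (1, 2, 3, 5, 10)])
```

Under this proxy, for p = 0.8 a single game already beats a best of 3, while for p = 0.5 the total keeps growing with n, qualitatively matching the comparative statics described above.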

OK, now back to your question. Maximizing the likelihood of picking the better team just means taking N to be as large as possible. It turns out that when we aren’t sure about p but the support contains 50%, surprise and suspense are also maximized by taking N as large as possible (although they grow very, very slowly in N when there’s not much probability mass around 50%). Of course, surprise and suspense aren’t necessarily monotonic in N when there’s uncertainty over p: when we’re almost sure p is about 70%, we have a local maximum at a best-of-5 series. Then 7 is worse, 9 is even worse, but maybe we catch back up at 9999 (or whatever).

In terms of your specific proposal of how to measure the tradeoff of game length vs. test power, and how to compare that to the tradeoff of length vs. suspense, I have to think more about that.

3) By your definitions the suspense is equal to the expected surprise, correct? So the problem of choosing an information policy to maximize expected surprise is the same as choosing one to maximize expected suspense, which gives us a single policy to consider. In fact, we have worked this out (it’s not in the paper), and qualitatively we get the same result as with our suspense-maximizing policy.

The key is that K-L divergence and variance both share the mathematical property that the expected sum from *any* fully revealing sequence of signals is equal to a constant. So to maximize (our suspense / your proposed measures), we want to dole this out evenly over time — 1/T of this in each of T periods. This is the property we use to get all of the other features of optimal suspense. (Numerically, using K-L divergence instead of variance gives somewhat faster information revelation — we move towards the edges more quickly).
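That conservation property is easy to verify numerically. A sketch with made-up numbers of my own (a binary state with prior 0.6, one noisy signal, then full revelation): the expected K-L surprise across the two stages sums to the prior's entropy, and the expected squared belief changes sum to the prior's variance.

```python
from math import log2

def kl(c, p):
    """D(C || P) in bits."""
    return sum(ci * log2(ci / pi) for ci, pi in zip(c, p) if ci > 0)

def entropy(p):
    return -sum(x * log2(x) for x in p if x > 0)

def dist(m):                     # belief m as a distribution over {1, 0}
    return (m, 1 - m)

prior = 0.6                      # P(state = 1)
q1, q0 = 0.8, 0.3                # P(signal = high | state = 1 or 0)

# Stage 1: Bayesian update on the noisy signal.
p_high = prior * q1 + (1 - prior) * q0
post_high = prior * q1 / p_high
post_low = prior * (1 - q1) / (1 - p_high)

# Expected K-L surprise over stage 1 plus stage 2 (revealing the truth).
# Revealing the truth from belief m costs D((1,0)||dist(m)) or
# D((0,1)||dist(m)); its expectation is exactly the entropy H(m).
kl_total = (p_high * (kl(dist(post_high), dist(prior)) + entropy(dist(post_high)))
            + (1 - p_high) * (kl(dist(post_low), dist(prior)) + entropy(dist(post_low))))

# Expected squared belief changes over both stages; from belief m, full
# revelation contributes m*(1-m) in expectation.
var_total = (p_high * ((post_high - prior) ** 2 + post_high * (1 - post_high))
             + (1 - p_high) * ((post_low - prior) ** 2 + post_low * (1 - post_low)))

print(kl_total, entropy(dist(prior)))   # both equal H(0.6)
print(var_total, prior * (1 - prior))   # both equal 0.24
```

The first identity is the chain rule for mutual information; the second is the law of total variance applied to the belief martingale.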

***

Thanks for your interest in the paper, and let one of us know if you have any followup questions!

October 19, 2012 at 5:27 am

Sune Kristian Jakobsen: Thanks for the answers. On 2), I was also thinking about more general games. E.g. you could play until one player leads by 2 points, or play best of 7 except that you stop if the same player wins the first 3 points. So a game would be defined by a set of stopping states, rather than just a number N.

3) Yes you are right, I should probably have noticed that.

I’m not really sure I agree with the definition of surprise, mostly because I don’t agree with the results about what kind of game is most (and least) surprising. But I’m not sure how to define it differently. Maybe it should be the change in the expectation of the result divided by the suspense (or something like that: increasing in what you call surprise and decreasing in suspense). A surprise is most effective when you don’t expect to be surprised.

That would explain the “I have some bad news for you” in cases where it is not obvious what the bad news is: it makes the last part of the information expected and thus less surprising. It would also explain why a best of 1001 coin flips does not seem surprising: each time, you know exactly how much the result is going to change the expectation of the outcome.

October 19, 2012 at 3:55 pm

Alex F: First, I wouldn’t toss aside our result that surprising games have jumpy belief paths that look approximately Brownian. Think about, say, basketball (a very popular sport!). There you get belief paths that really do look Brownian: small changes up or down every 30 seconds, with no possibility of any individual big jump until the very last few seconds. Tennis belief paths also look more like our suspense solution than our surprise one (aside from a few big ones, most points are small and don’t affect the game very much… but they add up). Loosely speaking, by our definitions soccer is a very suspenseful game (rare big belief jumps) and basketball is a surprising one (frequent small jumps).

We’ve played around with the idea of surprise=(belief change / standard deviation), but it doesn’t give anything interesting. The problem is that you can scale all belief changes down by a factor of a million, and it doesn’t change the new measure of surprise! Revealing essentially no information still gives you lots of surprise…

October 22, 2012 at 4:10 am

Marvel figurine: I just finished watching the whole discussion. The American Idol part was the best. It helped me a lot.

December 5, 2012 at 1:51 am

casino tax: Heya! I understand this is kind of off-topic, however I had to ask: does building a well-established blog like yours require a lot of work? I am brand new to blogging but I do write in my journal every day. I’d like to start a blog so I can easily share my experience and views online. Please let me know if you have any recommendations or tips for brand new aspiring blog owners. Thank you!