People complain that American mainstream media are becoming more and more polarized. There is a tradition in American journalism that the journalist should be objective and report the facts without judgment. Opinion pieces and editorials are relegated to the back pages.
Nowadays those standards are eroding. Fox News, MSNBC, and CNN have discernible biases but still pay lip service to the idea that they provide objective journalism. Meanwhile there is the perception that this trend is degrading the quality of information.
From a narrow perspective that may be true. I learn less from Fox News if they selectively report information that confirms the preconceptions of their audience. But media bias makes the media as a group more informative, not less.
Suppose I have a vast array of media sources which are scattered across the left-right spectrum. When a policy is being debated I look at all of them and find the pivotal outlet: all those to the left of it are advocating the policy and all those to the right are opposed. Different policies will have different cutoff points, and that cutoff point gives me a very simple and informative statistic about the policy. If the spectrum is narrower or more sparsely populated, this statistic is simply less informative.
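The cutoff statistic is simple to compute. Here is a toy sketch in Python; the outlet positions and their stances are invented purely for illustration:

```python
# Toy model of the pivotal outlet. Each media source is a pair
# (position on the left-right spectrum, does it support the policy?).
# These particular numbers are made up for illustration.
outlets = [
    (-2.0, True),   # far-left outlet supports the policy
    (-1.0, True),
    (-0.2, True),
    (0.5, False),   # leftmost opponent: the pivotal outlet
    (1.3, False),
    (2.1, False),
]

def pivotal_cutoff(outlets):
    """Return the position of the leftmost opposing outlet.

    Everything to its left advocates the policy; everything from it
    rightward is opposed. Returns None if every outlet supports it.
    """
    for position, supports in sorted(outlets):
        if not supports:
            return position
    return None

print(pivotal_cutoff(outlets))
```

The further right the cutoff lies, the stronger the evidence in the policy's favor; a denser, wider spectrum of outlets makes the cutoff a finer-grained statistic.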
Another way of saying this is that there is social value from having advisors with extreme biases. When I am thinking about a policy that I am predisposed to like, I learn very little from an unbiased source, but I learn a lot if a source with my bias is opposed to the policy or a source with the opposite bias is in favor of it. The policy must be especially good or bad for these extremists to go against their bias.
On E-book collusion:
Once Apple made it known it would accept agency pricing (but not selling books at a higher price than other retail competitors), the publishing companies didn’t have to act in concert, although one of them had to be willing to bell the very large cat called Amazon by moving to the agency model.
I’ve long had a personal hypothesis — not based on any inside information, but simply my own read on the matter, I should be clear — that the reason it was Macmillan that challenged Amazon on agency pricing was that Macmillan is a privately held company, and thus immune from being punished short-term in the stock market for the action. Once it got Amazon to accept agency pricing, the other publishers logically switched over as well. This doesn’t need active collusion; it does need people paying attention to how the business dominoes could potentially fall.
Again, maybe they all did actively collude, in which case, whoops, guys. Stop being idiots. But if they did not, I suppose the question is: At what point does everyone knowing everyone else’s business, having a good idea how everyone else will act, and then acting on that knowledge, begin to look like collusion (or to the Justice Department’s point, actively become collusion)? My answer: Hell if I know, I’m not a lawyer. I do know most of these publishers have a lot of lawyers, however (as does Apple), and I would imagine they have some opinions on this.
John Scalzi is an author, blogger, and apparently a pretty good economist too. Read the whole thing.
So there was this famous experiment, and just recently a new team of researchers tried to replicate it and could not. Quoting Alex Tabarrok:
You will probably not be surprised to learn that the new paper fails to replicate the priming effect. As we know from Why Most Published Research Findings are False (also here), failure to replicate is common, especially when sample sizes are small.
There’s a lot more at the MR link; you should check it out. But here’s the thing. If most published research findings are false, then which one is the false one, the original or the failed replication? Have you noticed that whenever a failed replication is reported, it is reported with all of the faith and fanfare that the original, now apparently disproven study was afforded? All we know is that one of them is wrong; can we really be sure which?
If I have to decide which to believe in, my money’s on the original. Think publication bias and ask yourself which is likely to be larger: the number of unpublished experiments that confirmed the original result or the number of unpublished results that didn’t.
Here’s a model. Experimenters are conducting a hidden search for results and they publish as soon as they have a good one. For the original experimenter a good result means a positive result. They try experiment A and it fails, so they conclude that A is a dead end, shelve it, and turn to something new, experiment B. They continue until they hit on a positive result, experiment X, and publish it.
Given the infinity of possible original experiments they could try, it is very likely that when they come to experiment X they were the first team to ever try it. By contrast, Team-Non-Replicate searches among experiments that have already been published, especially the most famous ones. And for them a good result is a failure to replicate. That’s what’s going to get headlines.
Since X is a famous experiment it’s not going to take long before they try that. They will do a pilot experiment and see if they can fail to replicate it. If they fail to fail to replicate it, they are going to shelve it and go on to the next famous experiment. But then some other Team-Non-Replicate, who has no way of knowing this is a dead-end, is going to try experiment X, etc. This is going to continue until someone succeeds in failing to replicate.
When that’s all over let’s count the number of times X failed: 1. The number of times X was confirmed equals 1 plus the number of non-non-replications before the final successful failure.
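Under some made-up numbers (say each pilot fails to replicate with probability p, independently across teams), the search process above is easy to simulate:

```python
import random

def replication_race(p_fail=0.3, seed=0):
    """Simulate the hidden search over famous experiment X.

    Successive Team-Non-Replicates run pilots; each fails to replicate
    with probability p_fail (an invented parameter). Teams that confirm
    X shelve it unpublished; the race ends with the first published
    failure. Returns (total confirmations, published failures), where
    confirmations includes the original positive result.
    """
    rng = random.Random(seed)
    shelved_confirmations = 0
    while rng.random() > p_fail:  # pilot confirms X: shelved, unpublished
        shelved_confirmations += 1
    # exactly one published result: the successful failure to replicate
    return 1 + shelved_confirmations, 1

confirmations, failures = replication_race()
print(confirmations, failures)
```

On average X is confirmed 1 + (1 - p)/p times for every single published failure, which is just the counting argument above in expectation.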
Giving the content away for free publicizes the event and adds to the cachet of (and willingness to pay for) the actual event. Also,
Anderson did not stop there. He opened up not only the TED talks themselves but the TED name. TEDx are events that can be put on by pretty much anyone. You need a license and have to do a good job (there’s no automatic renewal of the license), but nearly anyone can pitch in. This is literally a freeing up of the concept “ideas worth spreading” to allow anyone to select what those ideas are. So long as you follow a few simple rules — a talk format, some video, and no ads or other commercial tags — you can host a TEDx event. And there are now hundreds of these each year. What is more, TED regularly features talks from these on the site, so they act as feeder for TED publishing.
That’s from Josh Gans, more here.
Not even your thought experiments are safe.
Saul Kripke resigned yesterday from his position as Distinguished Professor of Philosophy at the CUNY Graduate Center. While similar allegations have been circulating in unpublished form for years, a team of philosophers from Oxford University has just released a damning report claiming that they were systematically unable to reproduce the results of thought experiments reported by Kripke in his groundbreaking Naming and Necessity.
(…) The report, forthcoming in Philosophical Studies, claims that 74% of the book’s thought-experimental results could not be reproduced using the standard philosophical criteria for inter-researcher agreement. A second version of the analysis, employing a generous application of the principle of charity, still left 52% of the results unverified.
As reported by Josh Gans:
I was directed to a paper by Mark McCabe and Chris Snyder (here is what used to be the link). It was a paper published in the BE Press journal Economic Analysis and Policy and it was a model of academic journal prices. To my surprise, I couldn’t access it. It was subscriber only. The reason: BE Press’s journals had been acquired by de Gruyter and obviously they had changed the policy.
Indeed I checked, and I cannot access my own article unless I am prepared to pony up $45. The moral of the story is if you think you are publishing in an Open Access journal but your article is not licensed to the public, the publisher has no commitment not to eventually sell it to somebody else who will not honor the original deal. I think the editor of that journal should resign.
Note well: Theoretical Economics is a true Open Access journal. Published articles are publicly licensed under Creative Commons and nobody has the right to close access ever, nor can they acquire that right. It also happens to be the top field journal in economic theory.
The way it works now is you write a paper then you send it to a journal and they review it and decide whether to publish it. The basic unit is the paper. What if we made the author the basic unit? Instead of inviting submissions, Econometrica invites applications for the position of author. Some number of authors are accepted and they can write whatever they want and have it published in Econometrica. The term would be temporary, maybe 1 year.
Wouldn’t it be wonderful to just write the paper you want to write, not the paper that the referees want you to write? The quality of papers would unambiguously increase. After all, your acceptance is a done deal and anything you write will be published, so why bother writing anything less than the most interesting idea that is currently on your mind?
Quality control is achieved by rotating in the authors currently writing the most interesting stuff. Once the current slate of authors is chosen, there is no more need for referees or editors. But if you want peer review, you can have that too. Anyone wishing to prepare a referee report is invited to do so; they can even do it anonymously, or make the report open to the public. The journal might even want to append the reports onto the published paper.
Come to think of it, these journals already exist: blogs. Cheap Talk invites you to apply to be an author, er, guest blogger. (Past and current holders of this position include Roger Myerson, Lones Smith and Jeroen Swinkels.)
He is the editor of the Quarterly Journal of Economics.
Once a paper is allocated the assigned editor typically reads the paper within 1 day and decides whether to send the paper out to referees or to “desk reject” it. Since we receive over 1,400 submissions and only publish around 40 to 44 papers a year, we need to be tough in the initial screening and only send a paper out to referees if it has great promise to be a significant contribution. This past year we desk rejected 62 percent of the new submissions (which still means that over 500 were sent to referees). I am the “softy” among QJE editors desk rejecting “only” 47 percent of new submissions vs. 70 percent by my co-editors. I also try to provide authors with some brief feedback on their papers and the rationale for the decision even in the case of desk rejections.
Some topics evolve by occasional big news events interspersed by long periods of little or no news. The public reacts dramatically to the big news events and seems to ignore the slow news.
For example, a terrorist attack is followed by general paranoia and a tightening of security. But no matter how much time passes without another attack, there never seems to be a restoration of the old equilibrium. News is like a ratchet with each big reaction building directly upon the last, and the periods in-between only setting the stage for the next.
The usual way to interpret this is an over-reaction to the salient information brought by big news events, and a failure to respond to the subtle information conveyed by a lack of big news. We notice when the dog barks but we don’t notice when it doesn’t.
But even a perfectly rational and sophisticated public exhibits a news ratchet. That’s because there is a difference between big news and small news in the way it galvanizes the public. Large changes in policy require a coordinated movement by a correspondingly large enough segment of the population motivated to make the change. Individuals are so motivated only if they know that they are part of a large enough group. Big events create that knowledge.
During the slow news periods all of these individuals are learning that those measures are less and less necessary. But that learning takes place privately and in silence. Never will enough time pass that everyone can confidently conclude that everyone else has confidently concluded that …. that everyone has figured this out. So there will never be the same momentum to undo the initial reaction as there was to inflame it.
It pays $72,000 per year and comes with only two requirements, one is flexible and one is not:
At first glance, Robert Kirshner took the e-mail message for a scam. An astronomer at King Abdulaziz University (KAU) in Jeddah, Saudi Arabia, was offering him a contract for an adjunct professorship that would pay $72,000 a year. Kirshner, an astrophysicist at Harvard University, would be expected to supervise a research group at KAU and spend a week or two a year on KAU’s campus, but that requirement was flexible, the person making the offer wrote in the e-mail. What Kirshner would be required to do, however, was add King Abdulaziz University as a second affiliation to his name on the Institute for Scientific Information’s (ISI’s) list of highly cited researchers.
Imagine you discover a lost manuscript. You read it and it has a profound effect on you. You want as many people as possible to discover it and be affected as you were.
Publishers tell you that there is no market for re-discovered literature. But a big publisher is required for the book to have the scale of distribution it deserves.
After a while you see the solution. This is a lost manuscript and nobody would know if you were to put your own name on it, market it as something brand new and get all the buzz that would come from the reviews and best seller lists.
Would you do it? Would you condemn someone who did?
The live recording finds the Imposters in rare form, while the accompanying motion picture blueprints the wilder possibilities of the show, as it made its acclaimed progress across the United States throughout the year.
Unfortunately, we at http://www.elviscostello.com find ourselves unable to recommend this lovely item to you as the price appears to be either a misprint or a satire.
All our attempts to have this number revised have been fruitless but rather than detain you with tedious arguments about morality, panache and book-keeping – when there are really bigger fish to filet these days – we are taking the following unusual step.
If you should really want to buy something special for your loved one at this time of seasonal giving, we can whole-heartedly recommend, “Ambassador Of Jazz” – a cute little imitation suitcase, covered in travel stickers and embossed with the name “Satchmo” but more importantly containing TEN re-mastered albums by one of the most beautiful and loving revolutionaries who ever lived – Louis Armstrong.
The box should be available for under one hundred and fifty American dollars and includes a number of other tricks and treats. Frankly, the music is vastly superior.
If on the other hand you should still want to hear and view the component parts of the above mentioned elaborate hoax, then those items will be available separately at a more affordable price in the New Year, assuming that you have not already obtained them by more unconventional means.
By now those means are in fact the conventional ones, but we get the point. Slouch slouch nimpupani.
From Ariel Rubinstein of course, here’s his answer to question 5:
Q5. I have already written 30 pages. I have repeated myself several times and my proofs are much longer than necessary. I have added uncertainty wherever I could and I have moved from a discrete case to Banach spaces. My adviser still says I hardly even have enough for a note. How long should my paper be?
If you don’t have a good idea, then keep going. Don’t stop at less than 60 single-spaced pages. Nobody will read your paper in any case so at least you have a chance to publish the paper in QJE or Econometrica.
If you have a really good idea, my advice is to limit yourself to 15 double-spaced pages. I have not seen any paper in Economics which deserved more than that and yours is no exception. It is true that papers in Economics are long, but then almost all of them are deathly boring. Who can read a 50-page Econometrica paper and remain sane? So make your contribution to the world by writing short papers — focus on new ideas, shorten proofs to the bare minimum (yes, that is possible!), avoid stupid extensions and write elegantly!
I was working on a paper, writing the introduction to a new section that deals with an extension of the basic model. It’s a relevant extension because it fits many real-world applications. So naturally I started to list the many real-world applications.
“This applies to X, Y, and….” hmmm… what’s the Z? Nothing coming to mind.
But I can’t just stop with X and Y. Two examples are not enough. If I only list two examples then the reader will know that I could only think of two examples and my pretense that this extension applies to many real-world applications will be dead on arrival.
I really only need one more. Because if I write “This applies to X, Y, Z, etc.” then the Z plus the “etc.” proves that there is in fact a whole blimpload of examples that I could have listed and I just gave the first three that came to mind, then threw in the etc. to save space.
If you have ever written anything at all you know this feeling. Three equals infinity but two is just barely two.
This is largely an equilibrium phenomenon. A convention emerged according to which those who have an abundance of examples are required to prove it simply by listing three. Therefore those who have listed only two examples truly must have only two.
Three isn’t the only threshold that would work as an equilibrium. There are many possibilities such as two, four, five etc. (ha!) Whatever threshold N we settle on, authors will spend the effort to find N examples (if they can) and anything short of that will show that they cannot.
But despite the multiplicity I bet that the threshold of three did not emerge arbitrarily. Here is an experiment that illustrates what I am thinking.
Subjects are given a category and 1 minute, say. You ask them to come up with as many examples from that category as they can think of in 1 minute. After the minute is up, you count how many examples they came up with, then give them another 15 minutes to come up with as many as they can.
With these data we would do the following. Plot on the horizontal axis the number x of items they listed in the first minute and on the vertical axis the number E(y|x) equal to the empirical average number y of items they came up with in total conditional on having come up with x items in the first minute.
I predict that you will see an anomalous jump upwards between E(y|2) and E(y|3).
This experiment does not take into account the incentive effects that come from the threshold. The incentives are simply to come up with as many examples as possible. That is intentional. The point is that this raw statistical relation (if it holds up) is the seed for the equilibrium selection. That is, when authors are not being strategic, then three-or-more equals many more than two. Given that, the strategic response is to shoot for exactly three. The equilibrium result is that three equals infinity.
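For concreteness, here is what the analysis would look like in code, with fabricated subject records standing in for real experimental data:

```python
from collections import defaultdict

# Fabricated records: (x = examples listed in the first minute,
#                      y = total examples after the extra 15 minutes)
data = [(1, 2), (2, 2), (2, 3), (2, 2), (3, 8), (3, 10), (3, 9), (4, 12)]

def conditional_means(data):
    """Compute E(y|x): the average total count y among subjects who
    produced exactly x examples in the first minute."""
    buckets = defaultdict(list)
    for x, y in data:
        buckets[x].append(y)
    return {x: sum(ys) / len(ys) for x, ys in sorted(buckets.items())}

means = conditional_means(data)
print(means)  # the prediction is a jump upward between E(y|2) and E(y|3)
```

With real data, the test of the prediction is simply whether the gap between E(y|2) and E(y|3) is anomalously large relative to the gaps at other adjacent values of x.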
While I think my work sometimes serves important purposes, and that (sadly) I am probably better at blogging and running regressions than I am at more direct forms of assistance, perhaps some deeper reflection is in order.
Of course, what I actually do is say to myself, “well, at least I don’t work in finance.”
The brief moment of concern passes, and I turn back to dispassionately regressing death and destruction.
This looks like a big deal.
Prestigious US academic institution Princeton University will prevent researchers from giving the copyright of scholarly articles to journal publishers, except in certain cases where a waiver may be granted.
The new rule is part of an Open Access policy aimed at broadening the reach of their scholarly work and encouraging publishers to adjust standard contracts that commonly require exclusive copyright as a condition of publication.
I would guess that the waiver would be given routinely, but this is a step in the right direction. via Andrea Ortolani.
Now look, I am cool with “we” that means “one”, to celebrate the fact that the validity of mathematical statements is independent of the person who happens to claim them, as in “Dividing by , we get that the game admits an equilibrium”. But sometimes the automatic replacement of ‘I’ with ‘we’ garbles the meaning of the sentence. When I write “I call such a sequence of variables a random play”, the singular pronoun implies that this is not a universally recognized definition, but one that I have invented for the current paper. Change this “I” to “we”, as the journal did, and the implication is lost. And sometimes ‘we’ for ‘one’ is just ridiculous, as in “We review Martin’s Theorem in the appendix”. It is one thing to say that every intelligent creature recognizes that the game admits an equilibrium, and another thing to say that every intelligent creature reviews Martin’s Theorem in the appendix.
The New York Times paywall has gone up. Many people I know are disgusted by the idea of paying for something that they’ve gotten used to getting for free. Does the paywall make economic sense for the NYT?
A newspaper makes money both from paying customers who buy the paper (print or online) and from advertising revenue. There is a tension between the two: If the newspaper charges customers, this reduces readership and hence advertising revenue. It may make sense to give away the newspaper for free, maximize readership and extract profits from advertising. In this scenario, the paywall might be a mistake, driving away readers and hence advertisers.
Online dissemination of news has other ramifications. Many HuffPo “articles” are simply links to the NYT with some salacious or provocative headline pasted on. People clicking through from HuffPo generate yet more readers and hence advertising revenue. This gives the NYT extra incentive to produce interesting news stories to generate more links and profits. But HuffPo also gets more readers and revenue because people know they can go there to get aggregated information from lots of sources. HuffPo does not have to hire John Burns or David Sanger to go to dangerous places and do actual reporting. They are free-riding off the work done by NYT reporters. The NYT does not internalize the positive externality it exerts on HuffPo and other sites. This effect leads to underinvestment in journalism by the NYT.
Should the NYT charge HuffPo to link to its stories? If the extra readership and advertising revenue compensates the NYT for the positive externality it exerts on HuffPo, there is no issue. But if not, a payment from HuffPo to the NYT can increase profits for both firms by encouraging jointly optimal story production. It is hard to tell whether anything like this is part of the plan, but it seems not.
We are entering a new world and we will see if it all collapses or changes the equilibrium.
This leads us to the remarkable story of Imperial College’s self-effacing head librarian, pitted in a battle of nerves against the publisher of titles like the Lancet. She is leading Research Libraries UK (RLUK), which represents the libraries of Russell Group universities, in a public campaign to pressure big publishers to end up-front payments, to allow them to pay in sterling and to reduce their subscription fees by 15%. The stakes are high: library staff and services are at risk, and if an agreement or an alternative delivery plan is not in place by January 2nd next year, researchers at Imperial and elsewhere will lose access to thousands of journals. But Deborah Shorley is determined to take it to the edge if necessary: “I will not blink.”
The article is here. Part of what’s at stake is the so-called “Big Deal” in which Elsevier bundles all of its academic journals and refuses to sell subscriptions to individual journals (or sells them only at exorbitant prices). Edlin and Rubinfeld provide a good overview of the law and economics of the Big Deals.
Boater Bow: Not Exactly Rocket Science.
By asking a hand-picked team of 3 or 4 experts in the field (the “peers”), journals hope to accept the good stuff, filter out the rubbish, and improve the not-quite-good-enough papers.
…Overall, they found a reliability coefficient (r^2) of 0.23, or 0.34 under a different statistical model. This is pretty low, given that 0 is random chance, while a perfect correlation would be 1.0. Using another measure of IRR, Cohen’s kappa, they found a reliability of 0.17. That means that peer reviewers only agreed on 17% more manuscripts than they would by chance alone.
That’s from Neuroskeptic, writing about an article that studies the peer-review process. I couldn’t tell you what Cohen’s kappa means, but let’s just take the results at face value: referees disagree a lot. Is that bad news for peer-review?
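For what it’s worth, the statistic itself is easy to compute: Cohen’s kappa is observed agreement net of the agreement two raters would reach by chance. A minimal sketch, with invented accept/reject verdicts:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same set of items:
    (observed agreement - chance agreement) / (1 - chance agreement).
    Chance agreement comes from each rater's marginal label frequencies.
    """
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    chance = sum(
        (rater1.count(label) / n) * (rater2.count(label) / n)
        for label in labels
    )
    return (observed - chance) / (1 - chance)

# Two hypothetical referees on ten manuscripts (A = accept, R = reject)
referee1 = list("AARRARRRAR")
referee2 = list("ARRRAARRRR")
print(round(cohens_kappa(referee1, referee2), 2))
```

A kappa of 0 means the referees agree no more often than their individual accept rates would predict by chance; 1 means perfect agreement. So 0.17 is indeed very low.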
Suppose that you are thinking about whether to go to a movie and you have three friends who have already seen it. You must choose in advance one or two of them to ask for a recommendation. Then after hearing their recommendation you will decide whether to see the movie.
You might decide to ask just one friend. If you do it will certainly be the case that sometimes she says thumbs-up and sometimes she says thumbs-down. But let’s be clear why. I am not assuming that your friends are unpredictable in their opinions. Indeed you may know their tastes very well. What I am saying is rather that, if you decide to ask this friend for her opinion, it must be because you don’t know it already. That is, prior to asking you cannot predict whether or not she will recommend this particular movie. Otherwise, what is the point of asking?
Now you might ask two friends for their opinions. If you do, then it must be the case that the second friend will often disagree with the first friend. Again, I am not assuming that your friends are inherently opposed in their views of movies. They may very well have similar tastes. After all they are both your friends. But, you would not bother soliciting the second opinion if you knew in advance that it was very likely to agree or disagree with the first on this particular movie. Because if you knew that then all you would have to do is ask the first friend and use her answer to infer what the second opinion would have been.
If the two referees you consult are likely to agree one way or the other, you get more information by instead dropping one of them and bringing in your third friend, assuming he is less likely to agree.
This is all to say that disagreement is not evidence that peer-review is broken. Exactly the opposite: it is a sign that editors are doing a good job picking referees and thereby making the best use of the peer-review process.
It would be very interesting to formalize this model, derive some testable implications, and bring it to data. Good data are surely easily accessible.
If a tree falls in a forest and no one is around to hear it, does it make a sound?
This old philosophical conundrum can be mapped into the dilemma facing the aging academic:
If I publish a paper and nobody reads it, teaches it or cites it, can it ever be a truly great paper?
As with all questions with no Platonic certitude, economists say: Let the market speak and tell us the answer.
Glenn Ellison has studied a more serious version of my question in his paper “How Does the Market Use Citation Data? The Hirsch Index in Economics.” The Hirsch index for an author is the highest number h such that the author has h papers with at least h citations. So, an index of 5 means you have five papers with at least five citations and that you do not have six papers with at least six citations etc.
Glenn points out that the Hirsch index doesn’t do a great job at ranking economists. Nobel prize winner Roger Myerson’s Hirsch index is a mere 32. But he has a few papers with over a thousand citations. Seminal papers in economics tend to get a huge number of citations but most only get a few. So, the plain vanilla Hirsch index needs to be re-evaluated.
Glenn turns to the market to guide his measure. He studies a generalized index: h is the highest number such that the author has at least h papers with at least a × h^b citations. The plain vanilla Hirsch index sets a = b = 1. Glenn estimates a and b in various ways. In one method, he looks at the NRC department rankings and finds the parameters a and b that best predict the NRC rank of a (young) economist’s department. To cut a long story short, a = 5 and b = 2 come out as the best predictors. With this estimation in hand, we can perform various comparisons – Which fields are highly cited? Which economists are highly cited? Etc.
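The generalized index is straightforward to compute. A sketch with an invented citation record; a = b = 1 recovers the plain vanilla Hirsch index, and a = 5, b = 2 gives the market-estimated version:

```python
def generalized_hirsch(citations, a=1.0, b=1.0):
    """Largest h such that the author has at least h papers with at
    least a * h**b citations each. The defaults a = b = 1 give the
    plain Hirsch index."""
    ranked = sorted(citations, reverse=True)
    h = 0
    while h < len(ranked) and ranked[h] >= a * (h + 1) ** b:
        h += 1
    return h

# Invented citation counts for an imaginary author
papers = [1200, 450, 300, 90, 40, 12, 8, 3, 1]
print(generalized_hirsch(papers))            # plain Hirsch index
print(generalized_hirsch(papers, a=5, b=2))  # the a=5, b=2 version
```

Notice how the a = 5, b = 2 version rewards the heavily-cited papers at the top of the list, which is exactly the adjustment that rescues authors whose citations are concentrated in a few seminal papers.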
Here are some tasty morsels of information. International finance, trade and behavioral economics are highly cited fields (Table 6). Micro theory and cross-sectional econometrics are the worst and IO does not do too well either. This means Yale and NU, which are strong in these last three areas, are under-cited economics departments. But basically one gets the picture that an economist’s citations are closely connected to the rank of the university where s/he is employed.
Ranking young economists, it is pretty obvious who is going to come out on top: Daron Acemoglu with an index of 7.84 (Table 7). This means Daron has 7.84 papers with roughly 300 citations. Ed Glaeser and Chad Jones are close behind. Once you adjust by field, more theorists start to rank highly: Glenn, Ilya Segal, Stephen Morris and Susan Athey pop up. Also, my friend Aviv Nevo gets a shout out as an underplaced guy.
A few comments:
Most of these people are tenured well before their citations go crazy. Expert opinion, not data-mining, leads to their tenure. This tells you how well expert opinion predicts citations. Also, to the extent that citations take time, expert opinion will always play a role in tenure decisions. There is a difference between external opinion and internal opinion. The same few people always get asked to write letters and they will do a good job. But internal opinions may be noisier and depend on the quality of the department. In that case, Glenn’s field-adjusted citation measure gives you some idea of a candidate’s quality and might be a valuable input into the tenure decision.
Finally, there are citations and citations. A paper getting regular cites in top journals is better than a paper getting cites in lower tier journals. This can be dealt with by improving the citation index.
At another extreme, some papers may be journalistic, not academic, and then their citations mean less. For example, Malcolm Gladwell gets high citations for The Tipping Point but he did not do any of the original scientific research on which his book is based. Of course he writes wonderfully and comes up with amazing examples and he is clearly an intellectual. I bet Harvard would love to have him as an adjunct professor but they will not give him a tenured professorship.
Despite these caveats, the generalized Hirsch index is an interesting input for academic decision-making.
Twenty years ago, David Kaplan of the Case Western Reserve University had a manuscript rejected, and with it came what he calls a “ridiculous” comment. “The comment was essentially that I should do an x-ray crystallography of the molecule before my study could be published,” he recalls, but the study was not about structure. The x-ray crystallography results, therefore, “had nothing to do with that,” he says. To him, the reviewer was making a completely unreasonable request to find an excuse to reject the paper.
The article surveys the usual problems with peer review and is worth a read. I recognize all of the problems and they are real but I am less bothered than most. We don’t really need journals for dissemination anymore so the only function they serve is peer-review. The slowness of the process is not so big a deal anymore. (Unless you are up for tenure this year of course.)
Also, it’s true that reviewers are often just looking for excuses to reject a paper. But that is mainly because they feel obliged to give some reason to justify their decision. In many cases bad papers are like pornography. You know them when you see them. Few referees are willing to write “I reject this paper because it’s not a good paper,” so they have to write something else. To the extent that this is a failure it’s because the effort in reading through the paper looking for an excuse could be more productively spent elsewhere.
Hard Hat Heave: Lance Fortnow.
I don’t want an iPad because I don’t want to carry around a big device just to read. I want to read on my iPhone. With one hand. (Settle down now. I need the other hand to hold a glass of wine.) But the iPhone has a small screen. Sure I can zoom, but that requires finger gestures and also scrolling to pan around. Tradeoff? Maybe not so much:
Imagine a box. Lying on the bottom of the box is a piece of paper which you want to read. The box is closed, but there is an iPhone sized opening on the top of the box. So if you look through the opening you can see part of the paper. (There is light inside the box, don't get picky on me here.)
Now imagine that you can slide the opening around the top of the box so that even if you can only see an iPhone sized subset of the paper, you could move that “window” around and see any part of the paper. You could start at the top left of the box and move left to right and then back to the left side and read.
Suppose you can raise and lower the lid of the box so you have two dimensions of control. You can zoom in and out, and you can pan the iPhone-sized-opening around.
Now, forget about the box. The iPhone has an accelerometer. It can sense when you move it around. With software it can perfectly simulate that experience. I can read anything on my iPhone with text as large as I wish, without scrolling, by just moving the phone around. With one hand.
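The box metaphor boils down to a simple mapping from device position to a window onto the page. Here is a toy sketch of that mapping (not actual iPhone code, and the accelerometer integration is elided): (x, y) is where the opening has slid to across the lid, and zoom grows as you raise the device.

```python
def viewport(page_w, page_h, screen_w, screen_h, x, y, zoom):
    """Map device position to a window onto the page (the 'box' above).

    (x, y): where the opening has slid to across the lid of the box.
    zoom >= 1: raising the device shows more of the page at once.
    Returns the (left, top, width, height) region of the page to render.
    """
    w = min(screen_w * zoom, page_w)   # zoomed out: window covers more page
    h = min(screen_h * zoom, page_h)
    left = max(0.0, min(x, page_w - w))   # clamp so the window stays on the page
    top = max(0.0, min(y, page_h - h))
    return left, top, w, h

# Slide the window to the middle of a tall page, no zoom.
print(viewport(1000, 2000, 320, 480, 500, 100, 1.0))
```

The clamping is what keeps "reading off the edge of the paper" from happening; everything else is just panning the rectangle.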
This should be the main UI metaphor for the whole iPhone OS.
From a worthwhile article in the NY Times surveying a number of facts about e-book and tree-book sales:
Another reason publishers want to avoid lower e-book prices is that print booksellers like Barnes & Noble, Borders and independents across the country would be unable to compete. As more consumers buy electronic readers and become comfortable with reading digitally, if the e-books are priced much lower than the print editions, no one but the aficionados and collectors will want to buy paper books.
Which, translated, reads: publishers don’t want low e-book prices because then people would buy them. Note that according to the article, profit margins are larger for e-books than for pulp. (Confused? Marginal revenue accounts for cross-platform cannibalization, and is still set equal to marginal cost.)
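The cannibalization logic can be seen in a toy linear-demand example (all numbers below are hypothetical, chosen only to illustrate the direction of the effect): if each e-book sale displaces a print sale with some probability, the lost print margin acts like an extra marginal cost, which pushes the optimal e-book price up.

```python
# Toy example with hypothetical numbers: e-book demand p = A - B*q;
# each e-book sale cannibalizes a print sale with probability c,
# costing the publisher the print margin m_p on that lost sale.
A, B = 20.0, 0.5      # demand intercept and slope
mc = 1.0              # marginal cost of an e-book (near zero in practice)
c, m_p = 0.4, 10.0    # cannibalization rate, print margin per lost sale

# MR net of cannibalization is A - 2*B*q - c*m_p; set it equal to mc.
q_star = (A - mc - c * m_p) / (2 * B)
p_star = A - B * q_star

# For comparison: ignore cannibalization entirely.
q_naive = (A - mc) / (2 * B)
p_naive = A - B * q_naive

print(p_star, p_naive)  # cannibalization raises the optimal e-book price
```

With these numbers the e-book price rises from 10.5 to 12.5 once the publisher internalizes lost print sales, which is exactly the quoted reluctance toward low e-book prices.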
One case in which dropping copy protection improved sales.
It’s been 18 months since O’Reilly, the world’s largest publisher of tech books, stopped using DRM on its ebooks. In the intervening time, O’Reilly’s ebook sales have increased by 104 percent. Now, when you talk about ebooks and DRM, there’s always someone who’ll say, “But what about [textbooks|technical books|RPG manuals]? Their target audience is so wired and online, why wouldn’t they just copy the books without paying? They’ve all got the technical know-how.”

So much for that theory.
Addendum: see the comments below for good reason to dismiss this particular datum.
You are late with a report and it’s not ready. Do you wrap it up and submit it, or keep working until it’s ready? The longer you take, the higher the standard it will be judged by. Because if you needed the extra time, it must be because it’s going to be extra good.
For some people, the speed at which they come up with good ideas outpaces these rising expectations. Others are too slow. But it’s the fast ones who tend to be late. Because although expectations will be raised, they will exceed them. The slow ones have to be early, otherwise the wedge between expectations and their performance will explode and they will never find a good time to stop.
Compare Apple and Sony. Sony comes out with a new product every day. And they are never expected to be a big deal. Every single Apple release is a big deal. And highly anticipated. We knew Apple was working on a phone more than a year before the iPhone. It was known that tablet designs had been considered for years before the iPad. With every leak and every rumor that Steve Jobs was not yet happy, expectations were raised for whatever would eventually make it through that filter.
Dear TE referees. Nobody is paying attention to how late you are.
The Econometric Society, which publishes Econometrica, one of the top four academic journals in Economics, has taken under its wing the fledgling journal Theoretical Economics, and the first issue under the ES umbrella has just been published. TE has rapidly become one of the top specialized journals for economic theory, and it stands out in one very important respect: all of its content is, and always will be, freely available and publicly licensed.
Bootstrapping a reputation for a new journal in a crowded field is by itself almost impossible. TE has managed to do this without charging for access, on a minimal budget supported essentially by donations plus modest submission fees, and with the help of a top-notch board of editors who embraced our mission. There is no doubt that the community rallied around our goal of changing the world of academic publishing and it worked.
This is just a start. Already the ES is launching a new open-access field journal with an empirical orientation, Quantitative Economics. Open Access is here to stay.
One of the least enjoyable tasks of a journal editor is to nag referees to send reports. Many things have been tried to induce timeliness and responsiveness. We give deadlines. We allow referees to specify their own deadlines. We use automated nag-mails. We even allow referees to opt in to automated nag-mails (they do, and then still ignore them).
When time has dragged on and a referee is not responding it is typical to send a message saying something like “please let me know if you still plan to provide a report, otherwise I will try to do without it.” These are usually ignored.
A few years ago I tried something new and every time since then it has gotten an almost immediate response, even from referees who have ignored multiple previous nudges. I have suggested it to other editors I know and it works for them too. I have an intuition for why it works (and that’s why I tried it in the first place) but I can’t quite articulate it, perhaps you have ideas. Here is the clinching message:
I would like to respond soon to the authors but it would help me a lot if I could have your report. I realize that you are very busy, so if you think you will be able to send me a report within the next week, then please let me know. If you don’t think you will be able to send a report, then there is no need to respond to this message.
It’s one of the many novel ideas from David K. Levine: the non-journal. You write your papers and you put them on your web site. Congratulations, you just published! Ah, but you want peer review. The editors of NAJ just might read your self-published paper and review it. We supply the peer review, you supply the publication. Peer review + publication = peer-reviewed publication. That was easy.
(NAJ is an acronym that stands for NAJ Ain’t a Journal.)
It’s been around for a few years with pretty much the same set of editors. It’s gone through some very active phases and some slow periods. David is trying to breathe some new life into NAJ by rotating in some new editors. So far so good. Arthur Robson is a new editor and he just reviewed a very cool paper by Emir Kamenica and Matthew Gentzkow called “Bayesian Persuasion.”
The paper tells you how a prosecutor manages to convict the innocent. Suppose that a judge will convict a defendant if he is more than 50% likely to be guilty and suppose that only 30% of all defendants brought to trial are actually guilty. A prosecutor can selectively search for evidence but cannot manufacture evidence and must disclose all the evidence he collects. The judge interprets the evidence as a fully rational Bayesian. What is the maximum conviction rate he can achieve?
The answer is 60%. This is accomplished with an investigation strategy that has two possible outcomes. One outcome is a conclusive signal that the defendant is innocent. Since the judge is Bayesian, the innocent signal occurs with probability zero when the defendant is actually guilty. The other outcome is a partially informative signal. If the prosecutor designs his investigation so that this signal occurs with probability 3/7 when the defendant is innocent (and with probability 1 when guilty) then
- conditional on this signal, the defendant is 50% likely to be guilty (we can make it strictly higher than 50% if you like by changing the numbers slightly)
- 3/7 of the innocent and all of the guilty will get this signal. (3/7 times 70%) + 30% = 60%.
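The arithmetic in the example can be checked directly with exact fractions:

```python
from fractions import Fraction

prior = Fraction(3, 10)            # 30% of defendants are actually guilty
p_sig_innocent = Fraction(3, 7)    # incriminating signal rate when innocent
p_sig_guilty = Fraction(1)         # signal always arrives when guilty

# Total probability the incriminating signal arrives: the conviction rate.
p_signal = prior * p_sig_guilty + (1 - prior) * p_sig_innocent

# Bayesian posterior that the defendant is guilty, given the signal.
posterior = prior * p_sig_guilty / p_signal

print(p_signal)   # 3/5  -> a 60% conviction rate
print(posterior)  # 1/2  -> the judge is exactly at the 50% threshold
```

The prosecutor's design puts the judge right at the conviction threshold whenever the incriminating signal arrives, and that signal arrives 60% of the time.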
The paper studies the optimal investigation scheme in a general model and uses it in a few applications.
Blame it on the binding constraint.
Let me explain. Has it ever struck you how peculiar it is that the price of so much writing these days is zero? No, I don’t mean that it is surprising that blogs don’t charge a price. There is so much supply that competition drives the price down to zero.
What I mean is: why are so many blogs priced at exactly zero? It would be a complete fluke for the optimal price of every blog in the world to land at exactly the same number, zero.
And indeed the optimal price is not zero, in fact the optimal price is negative. Bloggers have such a strong incentive to have their writings read that they would really like to pay their readers. But for various reasons they can’t and so the best they can do is set the price as low as possible. That is, as it often happens, the explanation for the unlikely bunching of prices at the same point is that we are all banging up against a binding constraint.
(Why can’t we set negative prices? First of all, we cannot verify that you actually read the article. Instead we would have people clicking on links, pretending to read, and collecting money. And even if we could verify that you read the article, most bloggers wouldn’t want to pay just anybody to read. A blogger is usually interested in a certain type of audience. A non-negative price helps to screen for readers who are really interested in the blog, usually a signal that the reader is the type that the blogger is after.)
Now, typically when incentives are blunted by a binding constraint, they find expression via other means, distortionary means. And a binding price of zero is no different. Since a blogger cannot lower his price to attract more readers, he looks for another instrument, in this case the quality of the writing.
Absent any constraint, the optimum would be normal-quality writing, negative price. (“Normal quality” of course is blogger-specific.) When the constraint prevents price from going negative, the response is to rely more heavily on the quality variable to attract more readers. Thus quality is increased above its unconstrained optimal point.
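Here is a toy version of this argument, with purely hypothetical functional forms: a blogger values each reader at v, readership is a − b·p + d·q, and producing quality q costs k·q²/2. A grid search finds the optimum with and without the zero-price floor.

```python
# Toy model (all functional forms and numbers hypothetical): the blogger
# values each reader at v, readership is a - b*p + d*q, and quality q
# costs k * q**2 / 2 to produce.
v, a, b, d, k = 2.0, 0.5, 1.0, 1.0, 2.0

def payoff(p, q):
    readers = a - b * p + d * q
    return (v + p) * readers - k * q * q / 2

prices = [i / 100 for i in range(-300, 301)]    # negative prices allowed
qualities = [i / 100 for i in range(0, 601)]

def best(price_grid):
    # Grid-search the payoff; returns (payoff, price, quality) at the max.
    return max((payoff(p, q), p, q) for p in price_grid for q in qualities)

_, p_free, q_free = best(prices)                       # unconstrained
_, p_con, q_con = best([p for p in prices if p >= 0])  # price floor at zero

print(p_free, q_free)  # negative price, "normal" quality
print(p_con, q_con)    # price stuck at zero, quality pushed above q_free
```

With these numbers the unconstrained optimum is a negative price; imposing the zero floor moves the blogger to the corner p = 0 and pushes quality strictly above its unconstrained level, which is the distortion described above.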
So, the next time you are about to complain that the blogs you read are too interesting (at the margin), remember this, grin and bear it.