You are currently browsing Roger Myerson’s articles.

Charles Krauthammer has accused President Obama of not being seriously concerned about Israel’s security because the President seems disinclined to make hard military threats against Iran.  It is surprising that Mr. Krauthammer could make such accusations when he is calling for aggressive military brinksmanship that would expose Israel to the gravest strategic risks.

Game theory teaches us the importance of looking at any potential conflict from the perspectives of all the parties involved.  If Israel’s security depends on Iranian decisions, then anyone who really cares about Israel must try to look at the international situation from Iran’s perspective as well.

There are good reasons why Iranians should prefer not to have nuclear weapons.  First, as Thomas Friedman has noted, Iran’s possession of nuclear weapons could provoke other neighboring countries to get their own nuclear weapons, to preserve the regional balance of power. The resulting regional proliferation of nuclear weapons would make everyone in the region less safe.  Second, if Iran had nuclear weapons then Iranians would face risks of nuclear retaliation to a terrorist nuclear attack against Israel, even if the terrorists might have gotten their nuclear weapon from somewhere else.  These are two very significant reasons why Iranians could become less secure by acquiring nuclear weapons.  So what could be the advantages of nuclear weapons for Iran?

One potential advantage is that nuclear weapons might open some opportunities for profitable expansionism, perhaps taking control of some weak oil-rich neighbor in a moment of political instability.  If Saddam Hussein had had nuclear weapons when he invaded Kuwait in 1990, he might have been able to hold Kuwait by threatening that any counter-attack would escalate into a nuclear war.  Such potential for opportunistic expansionism would diminish, however, as other countries in the region acquired their own nuclear capabilities to defend the status quo.

The more important advantage is that nuclear weapons could make Iran immune to foreign invasion.  This is a serious concern that needs to be recognized.  In the past decade, the United States has invaded two countries that border Iran.  American politicians and public opinion leaders have regularly insisted that the possibility of military action against Iran should be “on the table.”  Keeping it “on the table” means making it something that Iranians have to worry about.  And as long as they have to worry about even a small chance of an American invasion which could have been deterred by nuclear weapons, the people of Iran, even opponents of the current regime, have at least one very significant reason to want their country to acquire nuclear weapons. (Or at least to create some ambiguity about their nuclear capability.)

So when prominent critics of President Obama, such as Mr. Krauthammer, call for America to threaten military action against Iran, they are actually reinforcing Iran’s political determination to get its own nuclear arsenal.  If they were truly concerned about security for Israel or anyone else in the region, they would not be so eager to make such dangerously destabilizing threats.

Once we recognize the potential motivations for a country like Iran to acquire nuclear weapons, we can begin to look seriously for deterrent policies that address these motivations.  A more effective way for America to deter Iran from getting nuclear weapons would be to (1) announce that America would offer broad military security agreements to Iran’s neighbors if Iran acquired nuclear weapons and (2) offer real American friendship to Iran if it complies with international standards of nuclear nonproliferation.  A rapprochement between America and Iran would open up the possibility of cooperation for shared interests in stabilizing Afghanistan and Iraq, and it would eliminate Iran’s only real reason to make trouble for America’s ally Israel.

Although they have no common border, the security of Israel and the security of Iran have come to depend on each other.  Efforts to assure the security of both nations deserve bipartisan support in America.

Now at the end of 2011, as Tea-Party forces in the House of Representatives finally bow to conventional economic wisdom, it may be time for economics professors to toss out some new radical ideas about public finance.  OK, let me try.

We have heard a lot of radical arguments for reducing taxation and reducing government debt, as the alleged keys to strong economic growth.  Actually, however, taxes tend to be a greater percentage of GDP in richer nations.  A reasonable hypothesis is that governments in poor nations may tax less and spend less because they have less fiscal capacity to control waste and corruption in the management of public funds.  Such nations that cannot manage public funds effectively then suffer from a lack of public infrastructure, which in turn may be a basic cause of their poverty.   That is, the greater wealth of rich nations may depend on their governments’ fiscal capacity to provide essential public goods with less waste than poor nations.  A good reliable system for managing public finances is one of the essential pillars of prosperity, as described in a new book by Timothy Besley and Torsten Persson.

The ultimate basis for all controls on public spending is of course political.  The strength of public financial management in America is ultimately based on democratic accountability of our political leaders for what they do with our taxes.  Democracy may not be perfect, but it is better than any other system for deterring corrupt misuse of public funds.

From this perspective, we can argue instead that the key to reviving long-term economic growth may be found in reforms to further improve political controls on public finance in our nation.

When voters are swayed by promises of lower taxes and lower deficits without loss of public services, it is clear that our system of democratic fiscal oversight has some room for improvement.  As the prosperity of our nation depends on the good judgment of a majority of voters, so voters depend on each other’s ability to understand questions of public finance.  That is, voters’ comprehension of public budgets is itself a public good.  We need a system to make sure that voters can understand our public budgets, and it is worth investing public money to develop such understanding.

So my radical proposal has two parts.  First, our federal, state, and local governments should publish their annual budgets online in a form that any high-school graduate can understand.  Second, our public high schools should be required to teach students how to read government budgets.

I know that this may sound impractically idealistic.  But I can recommend at least one good textbook: Dall Forsythe’s Memos to the Governor.  Written by a former budget director of New York state, this short book offers a good introduction to the standard tricks that have been used to make public spending more obscure.

Public officials, however well-intentioned, are under constant pressure to provide more public services for less money and so may feel regularly tempted to simulate such superior productivity by incurring new public debts that are not fully reported in the current year.  Devices for concealing billions of dollars of new debt cannot be fully secret, however, and the readers of Forsythe’s book will be well prepared to watch for them.  A better informed citizenry could prompt greater creativity in inventing new devices for concealment, of course.  But a greater force for clarity and transparency may be unleashed when millions of families have high-school students who are asking why the public-finance materials have to be so confusing.

So this is my radical proposal:  Before demanding lower taxes or lower deficits, we voters should demand to be better informed about our public financial system.  Then our ability to demand better use of public funds can become a stronger pillar for future growth and prosperity.

When do rational people seek militant leadership for their nation?  By militant here, I mean bellicose or having an affinity for violent conflict.  I began thinking about this question while writing a paper on the rise of Nazism in Germany after World War 1.  I would suggest that this question may be one of the most important for political theory.  As a practical matter, we certainly do not want neighboring nations to choose militant leaders against us, and so we should avoid putting them in conditions that might cause them to do so.  Thus, we need to understand what might cause normal rational citizens to support militant candidates for leadership of their nation.

People normally have very good reasons to not want militant national leaders.  We are all at risk when our leader would not hesitate to send our loved ones and ourselves off to die in battle.  To preserve the blessings of peace, we should normally prefer to have leaders of the nonmilitant sort, who have a healthy aversion to war.

But of course militant leaders can also have a positive deterrent effect.  When we have a militant national leader, other nations might be less inclined to provoke any kind of trouble for us.  So a perceived threat of deep invasion can create an incentive for us to seek a militant leader who can deter it.  But we must also worry that a leader who has an affinity for war may seize any opportunity to start one for us.  This potential cost of militancy is reduced, however, when the serious risks of war seem remote from our borders.  Thus, the incentive to seek militant leadership may be strongest when we fear a long-term or low-probability threat of a deeply destructive invasion but otherwise the immediate risks of conflict seem small.

This recipe was fulfilled in Germany around 1930.  The post-WW1 reparations involved a persistent implicit Allied threat to invade Germany if it did not pay, but the immediate risks of militancy became remote after Allied troops withdrew from the Rhineland under the Young Plan of 1929.

Such conditions also existed in America after the attack of September 11, 2001.  We felt profoundly vulnerable to deep invasion, but the immediate risks of our own militant posturing seemed remote.  And indeed, a demonstrated willingness to use military force became a positive asset in American presidential politics for several years in the aftermath of the attack.  We should understand that, even in America, politics could become more militant under such conditions.

You might have thought it obvious that the stock market would go down after S&P downgraded US government debt.  The bad news about US debt made investors worry, and worried investors are usually less enthusiastic about holding stocks.

But there is something wrong with this view.  Ask yourself, when fearful investors sell their stocks, what do they buy?  They sell their stocks for cash, of course; and then, being fearful, they typically want to keep the proceeds in the nearest thing to cash that pays interest: US government debt.  Thus, as investors’ demand for stocks goes down, their demand for dollars and other US liabilities goes up.  Such a surge in demand for US government debt would cause the price of US bonds to go up, which means that the interest rate on US debt would go down.  Doesn’t it seem paradoxical, that a downgrading of US government debt could cause demand for this debt to increase?

But sure enough, the New York Times reported with some surprise that the United States Treasury was actually a beneficiary of the market shifts today (Aug 8), despite the downgrade of its debt, as 10-year yields fell to 2.32 percent from 2.56 percent, and the yield on the two-year Treasury note hit a record low.  Those who are worried about inflation should also notice that the decline in the stock market means that any given amount of dollars can actually buy more shares of the Dow Industrials.
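The inverse relation between bond yields and bond prices in that report can be checked with simple arithmetic.  Here is a stylized sketch in Python, using a zero-coupon approximation of my own (actual Treasury notes pay semiannual coupons, so the precise numbers would differ):

```python
# Zero-coupon approximation: the price today of a promise of 100 dollars
# in T years, at annual yield y, is 100 / (1 + y)**T.
def zero_coupon_price(y, T=10):
    return 100 / (1 + y) ** T

price_before = zero_coupon_price(0.0256)  # at the 2.56% yield before the shift
price_after = zero_coupon_price(0.0232)   # at the 2.32% yield after the shift
# price_after exceeds price_before: the fall in yield is a rise in price,
# reflecting increased demand for the downgraded debt.
```

The same promised payments at a lower discount rate are simply worth more today, which is why a surge in demand for Treasury debt shows up as falling yields.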

So it is very difficult to see investors’ behavior today as a reaction to fears about inflationary deficits or a default on US government debt.  If serious investors were actually worried about the real value of US government liabilities then they should tend to move out of bond markets and into real investments like the stock market, which should drive stock prices up.  Such a move would help get real investment started, which would help get people back to work; but that is not what we saw today.

To understand what is really happening, we need to think more carefully about the risks that S&P was assessing.  Of course the US government as a whole cannot be incapable of paying the dollars that it owes, because the US government has the ability to print dollars itself.  So how can the S&P bond raters have any legitimate concerns about a possibility of the US government defaulting on its debt obligations?  The answer is that a default could happen only if one part of the US government prevented other parts from paying the bills.

The bills of the US government are paid by the Treasury Department, but the ability to print dollars is vested in the Federal Reserve.  The Treasury can get new dollars by issuing bonds that are purchased by the Federal Reserve.  But, although the bonds in such a transaction would be held by the government itself, they would still be counted in the aggregate debt that is restricted by Congress’s debt limit.  So when the US Congress refused to raise the debt limit, it was threatening to prevent the Treasury and Federal Reserve from working together to pay for the government’s budgeted expenses and debt obligations.  In a situation like that, the President would indeed have to choose between cutting budgeted government expenses or asking the bond holders to wait.

We have seen, however, that the investors’ movement from stocks to bonds today is very hard to reconcile with fears of default on these bonds.  So to explain the stock market decline, we have to look at the other side of the story, the very real possibility that a politically constrained US government might have to cut expenses for essential government services.  Broad fears of a crippled US government that is unable to enforce laws or invest in infrastructure could do very serious damage to investment and economic growth in America.  Indeed the possibility of such government paralysis could be far more economically damaging than any marginal increase in taxes.

Thus, there is every reason to believe that investors are reacting, not to fears of too much government debt, but to fears of too little government spending where it is needed.  Investors are expressing fears that the US government may become unable to do its essential part in maintaining the strength of this country.  Pundits and congressmen should take note.

Yuliya Tymoschenko, a former prime minister of Ukraine, wrote in Al Jazeera that Egyptians should learn from the disappointments that followed Ukraine’s 2004 Orange Revolution. She advises that democracy is not made by elections alone, and that a strong civil society is needed to protect a new democracy from being hijacked by elements of the old regime who only pretend to embrace democratic norms. But she also describes civil society as an intricate and mysterious entity that evolves over decades, if not centuries.

This advice is good, but it is also a call for social scientists to develop a deeper and more precise understanding of what is needed from civil society, if we are to understand the essential factors for successful democratic development. It is not enough to advise Egyptians or Ukrainians to go back and have a century or two like England around 1600 before they attempt to build a democracy. We must try to identify the essence of what is needed from civil society and how it can be most directly developed where it is lacking.

It may be instructive to consider an example from Maye Kassem’s discussion of civil society in her insightful book on Egyptian Politics. Kassem describes how opposition groups, in their efforts to acquire some autonomy from state domination under the Mubarak regime, cooperated to gain positions of leadership in professional organizations. This process was tolerated by the government until the Cairo earthquake of 1992. But then the Doctors’ and Engineers’ Syndicates, under leadership from the Muslim Brotherhood, provided conspicuously better assistance to earthquake victims than the official government relief agencies. In response, new laws were decreed to give the government greater control over such professional syndicates.

This story puts the focus on one essential factor that institutions of civil society can provide: an independent supply of people who have good reputations for providing public services. From the perspective of economic theory, we easily see how the supply of such reputations can be vital for democratic competition. Voters are unlikely to hold corrupt leaders accountable in democratic elections if they believe that alternative candidates would be as bad or worse. More generally, economists understand that market competition may fail to reduce suppliers’ profits when there are high barriers against new competitive entrants, and this insight can be applied to political competition (where political profit-taking is called corruption). Autonomous public-service organizations allow new leaders to develop the reputations that they need to compete for the voters’ trust, thus facilitating new competitive entry into the political arena.

So the key question to ask is: what is the best way for a newly democratic nation to increase its vital supply of individuals who have good reputations for using resources responsibly in public service? The obvious place to look is in subnational governments for provinces and cities, where independently elected political leaders could demonstrate their qualifications to compete for offices at higher levels. An elected mayor or governor who provides better public services than the established national leaders can become a strong competitive candidate for president or prime minister in the future.

Unfortunately Ukraine’s Constitution does not allow such opportunities for independent local leaders to prove themselves below the national level. In the governments of Ukraine’s provinces (oblasts) and major cities, executive authority is exercised by governors and mayors who are appointed and dismissed by the national President. Such centrally appointed officials have no incentive to serve the public better than the leader on whom their jobs depend. If Ms. Tymoschenko truly wants to build a stronger system of democratic competition in Ukraine, she might start by proposing a constitutional amendment to allow locally elected councils to choose their own governors and mayors.

If the new leadership in Egypt wants to maintain a political system in which national leaders are protected from serious competition from below, then they may be expected to craft a similar constitution that allows a President or Prime Minister to control state power at all levels. But even under the old regime, Egypt’s Constitution promised (in Article 162) that popularly elected local councils would be gradually formed and given local governmental authority. If the leaders of the new regime are serious about promoting competitive democracy in Egypt, they could do well by fulfilling this promise and writing a new constitution that devolves a substantial share of power to separately elected provincial governments. Supporters of democracy in Egypt should watch carefully what they do on this dimension.

This year, Germany finally paid off its old bonds for World War 1 reparations, as Margaret MacMillan has noted in the New York Times.  MacMillan asserts that “John Maynard Keynes, a member of the British delegation in Paris, rightly argued that the Allies should have forgotten about reparations altogether.” Actually, the truth is more complicated.  A fuller understanding of Keynes’s role in the 1919 Paris peace conference after World War 1 may also offer a useful perspective on his contributions to economics.

Keynes became the most famous economist of his time, not for his 1936 General Theory, but for his Economic Consequences of the Peace (1920) and A Revision of the Treaty (1922). These were brilliant polemics against the 1919 peace conference, exposing the folly of imposing on Germany a reparation debt worth more than 3 times its prewar annual GDP, which was to be repaid over a period of decades.

Germans saw the reparations as unjust extortion, and efforts to accommodate the Allies’ demands undermined the government’s legitimacy, leading to the rise of Nazism and the coming of a second world war. Keynes seemed to foresee the whole disaster. In his 1922 book, he posed the crucial question: “Who believes that the Allies will, over a period of one or two generations, exert adequate force over the German government to extract continuing fruits on a vast scale from forced labor?”

But what Keynes actually recommended in 1922 was that Germany should be asked to pay in reparations about 3% of its prewar GDP annually for 30 years. The 1929 Young Plan offered Germany similar terms and withdrew Allied occupation forces from the German Rhineland, but the Nazis’ rise to national power began after that.

In his 1938 memoirs, Lloyd George tells us that, during World War 1, Germany also had plans to seize valuable assets and property if they won WW1, “but they had not hit on the idea of levying a tribute for 30 to 40 years on the profits and earnings of the Allied peoples.  Mr. Keynes is the sole patentee and promoter of that method of extraction.”

How did Keynes get it so wrong on reparations? In 1871, after the Franco-Prussian War, Germany demanded payments from France, on a less vast scale (only a fraction of France’s annual GDP), while occupying northern France. To hasten the withdrawal of German troops, France made the payments well ahead of the required 3-year schedule, mainly by selling bonds to its own citizens. But the large capital inflow destabilized Germany’s financial system, which then led to a recession in Germany. Before 1914, some argued that such adverse consequences of indemnity payments for a victor’s economy would eliminate incentives for war and assure world peace. In response to such naive arguments, Keynes suggested in 1916 that postwar reparation payments could be extended over decades to avoid macroeconomic shock from large short-term capital flows and imports from Germany.

Nobody had ever tried to extract payments over decades from a defeated nation without occupying it, but that is what the Allies attempted after World War 1, following Keynes’s suggestion. Keynes argued about the payments’ size but not their duration.

Today economists regularly analyze the limits on a sovereign nation’s incentive to pay external debts. In our modern analytical framework, we can argue that the scenario of long-term reparation payments was not a sequential equilibrium. But such analysis uses game-theoretic models that were unknown to Keynes. As a brilliant observer, he certainly recognized the political problems of motivating long-term reparation payments over 30 years or more, but these incentive problems did not fit into the analytical framework that guided him in formulating his policy recommendations. So while condemning the Allies’ demands for Germany to make long-term reparation payments of over 7% of its GDP, Keynes considered long-term payments of 3% of GDP to be economically feasible for Germany, regardless of how politically poisonous such payments might be for its government. Considerations of macroeconomic stability could crowd out strategic incentive analysis for Keynes, given the limits of economic analysis in his time.

Reviewing this history today, we should be impressed both by Keynes’s skill as a critical observer of great policy decisions and by the severe limits of his analytical framework for suggesting better policies. Advances in economic theory have greatly expanded the scope of economic analysis since Keynes’s day and have given us a better framework for policy analysis than what Keynes ever had.

It has been suggested that Keynesian economics remains the best framework that we have for making sense of recessions, but that macroeconomic theory also needs to do a better job of incorporating the realities of finance. There may be a fundamental contradiction between these two suggestions.

In their book Microeconomics of Banking, Xavier Freixas and Jean-Charles Rochet noted that there was no microeconomic theory of banking before the 1970s.  Banks and other financial intermediaries earn their profits by knowing more than depositors about the quality of borrowers’ investments.  So an economic theory of banking requires an ability to analyze transactions among agents who have different information.  Economists first developed such agency theories only around 1970, building on previous advances in game theory.

So John Maynard Keynes’s 1936 General Theory and other classic theories of macroeconomics were developed when there was no real economic theory of banking.  Inevitably this limited the scope of their analysis.  For example, if the banking regulatory reforms of the 1933 Glass–Steagall Act were essential for halting America’s catastrophic slide into the Great Depression, there would be no way to incorporate that fact into the analysis without an economic theory of banking.

An economic theorist who rereads the General Theory today may be struck by the absence of any serious analysis of how massive bank failures could have been involved in causing the Great Depression.  In chapter 11, Keynes briefly discussed moral hazard in lending, but he had no analytical framework to use these insights, and they tended to get lost in the discussion.

But Keynes was a brilliant observer, even when he could not fit his observations into his theories.  For a contrasting view on the role of banks, look at Keynes’s previous book, his 1930 Treatise on Money.  Near the end of that book, in chapter 37, Keynes made the following observation:

“The relaxation or contraction of credit by the Banking System does not operate merely through a change in the rate charged to borrowers; it also functions through a change in the abundance of credit.  If the supply of credit were distributed in an absolutely free competitive market, these two conditions, quantity and price, would be uniquely correlated with one another and we should not need to consider them separately.  But in practice, the conditions of a free competitive market for bank-loans are imperfectly fulfilled.  There is an habitual system of rationing in the attitude of banks to borrowers — the amount lent to any individual being governed not solely by the security and rate of interest offered, but also by reference to the borrower’s purposes and his standing with the bank as a valuable or influential client.  Thus, there is normally a fringe of unsatisfied borrowers who are not considered to have the first claims on a bank’s favours, but to whom the bank would be quite ready to lend if it were to find itself in a position to lend more.  The existence of this unsatisfied fringe allows the Banking System a means of influencing the rate of investment supplementary to the mere changes in the short-term rate of interest.”

There is an interesting suggestion here that even short-term loans might implicitly depend on long-term relationships between investors and financial intermediaries. Such an idea could be the basis for a theory of macroeconomic fluctuations in which bank failures could affect investment.

In 1936, however, Keynes could not build a theory in which monetary policy could affect aggregate investment other than through its effect on the interest rate.  His 1930 observation got lost in his subsequent analysis because it did not fit into his analytical framework.  He had no way to answer the obvious question: If so many eager qualified borrowers are unable to get loans at the current interest rate, why don’t banks offer to lend to them at a higher interest rate?  Today economists understand how such credit rationing can be derived from considerations of adverse selection or moral hazard in borrowing.  The classic introduction to the subject is by Joseph Stiglitz and Andrew Weiss in 1981.
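The adverse-selection logic can be illustrated with a toy model in the Stiglitz–Weiss spirit.  The two borrower types and all the numbers below are illustrative assumptions of mine, not figures from their paper:

```python
# Adverse-selection sketch: each borrower type seeks a loan of 1 and repays
# (1 + r) only if its project succeeds (limited liability on failure).
# A type is a pair (p, R): success probability p and project return R.
def expected_repayment(r, types):
    # A type applies for a loan only if its expected profit p * (R - (1 + r))
    # is non-negative, i.e. only if the project return covers the repayment.
    active = [(p, R) for (p, R) in types if R >= 1 + r]
    if not active:
        return 0.0
    # Bank's expected repayment per loan, averaged over the applicants.
    return sum(p * (1 + r) for (p, R) in active) / len(active)

types = [(0.95, 1.15),  # "safe": high success probability, modest return
         (0.50, 2.00)]  # "risky": low success probability, high return

both_apply = expected_repayment(0.10, types)   # both types still apply
only_risky = expected_repayment(0.20, types)   # safe type has dropped out
```

Raising the rate from 10% to 20% drives the safe borrowers out of the applicant pool, so the bank's expected repayment per loan falls even though the promised rate is higher.  This is why a bank may rationally hold its rate down and ration credit rather than raise the rate to clear the market.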

“Sometimes it seems that they only understand the language of force. We have learned from history that the appeasement of an aggressive adversary can be a disastrous mistake, with our concessions only encouraging further attacks on our communities. To deter them from attacking us, we need armed strength, and our leaders must demonstrate the resolve to use it when necessary.” What should we say to someone who describes the Israeli-Palestinian conflict in these terms?

A game theorist is trained to look at conflict problems from both sides, assuming that people on both sides are rational and intelligent. I have tried to write the above quote as one that many Israelis and Palestinians might consider a fair description of their situation, symmetrically identifying themselves as “we” and the other side as “they”, but the symmetry of this view is probably not common knowledge. In particular, many may not understand the other side’s fear of appeasing an aggressive adversary. Such misunderstanding can undermine hopes for peace.

In the above quote, the response to our armed strength that “we” seek from “them” is, in a word, appeasement. We want them to appease us. But why should they not fear that concessions to us could encourage our greater ambitions, inviting further invasion of their communities? And if the demand for armed vigilance on each side is matched by a fear of appeasement on the other side, how can the two sides ever escape from the long war of attrition?

We must think more carefully about the logic of deterrent strategies. Our strategy to deter potential adversaries must have two parts: a threat that we will fight them if they attack us, and a promise that we will be good restrained neighbors if they accommodate us. The difference between our threat and our promise is what encourages them toward accommodation. For our deterrent strategy to be effective, our potential adversaries must understand and believe both the threat and the promise.

Failing to credibly communicate the threat is naive appeasement. Our potential adversaries must not think that we are the weak type of people, who lack resolve to respond forcefully against aggression. To prove that we are not weak may require costly signals of our resolve, many of which have become too familiar: sending out young men on deadly missions against the other side.

But deterrence can fail also if we do not credibly communicate a promise that differs from the threat. If they believe that we are an aggressive type, who cannot restrain ourselves from invading their communities further at any feasible opportunity, then they will feel driven to seek militant leaders against us, and then we will be locked in conflict with them.
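This two-part logic can be sketched as a toy decision model.  The payoff numbers and the single credibility parameter are my illustrative assumptions, a drastic simplification of the game-theoretic analysis:

```python
# The adversary compares fighting against accommodating.  With probability
# (1 - promise_credibility) the adversary expects to be attacked even after
# accommodating, in which case it earns only the fight payoff.
def adversary_choice(payoff_if_fight, payoff_if_accommodate, promise_credibility):
    expected_accommodate = (promise_credibility * payoff_if_accommodate
                            + (1 - promise_credibility) * payoff_if_fight)
    return "accommodate" if expected_accommodate > payoff_if_fight else "fight"
```

With a fight payoff of 0 and an accommodation payoff of 2, the adversary accommodates when it believes the promise (credibility 0.9) but fights when it does not (credibility 0): however fearsome our threat, deterrence fails whenever the promise of restraint carries no credibility.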

How can we demonstrate to them that we are not such an aggressive type? This is a very serious question, because everyone knows that aggressors may try to mask their intentions with honeyed words of peace. The point is not to convince ourselves of our own moral purity; the goal is to convince our adversaries that they can safely make peace with us.

We can effectively signal our restraint by articulating clear strategic limits that verifiably constrain our actions in the conflict, and by showing real understanding and respect for justice as our adversaries see it. Credibly communicating our promise of restraint to a suspicious adversary can be a long and difficult process, but it is an essential part of effective communication in the language of force.

That is the theory. Today we learned that Israel has resisted intense American pressure to freeze expansions of its settlements in the West Bank. It is hard to see this decision as a signal of restraint. Indeed, it seems just the opposite. In rejecting its strongest ally’s interpretation of the legal limits on its expansion, Israel seems to have given a costly signal of an inability to restrain expansionist forces in its political system. Nobody can enter into a treaty without confidence that the other side will accept a mutually agreed interpretation of its limits under the treaty. As a costly signal that reduces the other side’s willingness to make peace, this decision may be less stark than missiles from Gaza, but it is only a matter of degree.

Paul Krugman has attacked Senator Evan Bayh’s suggestion that the Obama administration “overreached by focusing on health care rather than job creation during a severe recession.” Krugman expresses great difficulty in seeing what this statement could mean.

Are people who say that Mr. Obama should have focused on the economy saying that he should have pursued a bigger stimulus package? Are they saying that he should have taken a tougher line with the banks? If not, what are they saying? That he should have walked around with furrowed brow muttering, “I’m focused, I’m focused”?

To answer Paul Krugman’s question, let me suggest that the severity of the current recession could have been reduced if the Obama administration had made financial regulatory reform its top legislative priority in 2009.

When President Obama was inaugurated in January 2009, the American economy was sliding into recession because of a catastrophic loss of confidence in our financial system. In the previous decade, global investors’ confidence in American financial institutions had brought vast capital inflows into the American economy. This confidence had been based on a perception that America’s legal and political system provided safeguards for investors that were second to none in the world. This confidence was shattered in 2008 with the collapse of Lehman Brothers.

A meaningful and effective financial regulatory reform in 2009 could have restored investors’ confidence, reviving investment flows and stemming the loss of jobs. Imagine how different the economic environment might have been on election day in 2010 if President Obama could have announced by September 2009 that, after an intensive review of the financial regulatory system by both Congress and the White House, he was signing into law some carefully designed and well-focused reforms that could restore investors’ and taxpayers’ confidence in American financial institutions.

The focus on health care reform made it impossible to achieve meaningful financial regulatory reform for more than a year after President Obama took office. Health care reform and financial regulatory reform are both extremely complex issues and both have been fiercely resisted by powerful vested interests. Neither reform could be accomplished without strong political leadership at the highest level. The Obama administration could only address one at a time, and only one could be the central focus in the crucial first year when the new President’s political capital was greatest. The Obama administration chose in 2009 to focus on health care reform.

It may be surprising that, when a catastrophic macroeconomic decline is clearly being caused by a loss of confidence in the basic regulatory controls of our financial system, many leading economists would not see financial regulatory reform as an urgently needed remedy. The reasons may also be found in the history of economic theory, on which I may comment later.

[Thanks to Jeff and Sandeep for letting me guest-blog here.]
