Fed Watch: More Confirmation of Steady Monetary Policy

Tim Duy sees, among other things, the possibility of another bubble:

More Confirmation of Steady Monetary Policy, by Tim Duy: Green shoots - or, as President Obama says, the beginning of the end of the recession - aside, the Fed will not be ready to reverse its accommodative policy stance anytime soon. New York Federal Reserve President William Dudley said as much in a speech today:

If the recovery does, in fact, turn out to be lackluster, the unemployment rate is likely to remain elevated and capacity utilization rates unusually low for some time to come. This suggests that inflation will be quiescent. For all these reasons, concern about “when” the Fed will exit from its current accommodative monetary policy stance is, in my view, very premature.

The Fed continues to expect that low levels of resource utilization will keep a lid on inflation. While some might object that emerging market economies can have both weak growth and high inflation, those economies still have an important transmission mechanism between higher prices and higher wages that appears to be missing in the US. Indeed, while the press focused on the old news "recession is ending" angle of the Beige Book, the money quote for policymakers was:

The weakness of labor markets has virtually eliminated upward wage pressure, and wages and compensation are steady or falling in most Districts; however, Boston cited some manufacturing and business services firms raising pay selectively, and Minneapolis said wage increases were moderate. Boston, Cleveland, Richmond, Chicago, Dallas, and San Francisco cited a range of methods firms are using to limit compensation, including cutting or freezing wages or benefit contributions, deferral of future salary increases, trimming bonuses and travel allowances, reducing hours, temporary shutdowns, periodic furloughs, and unpaid vacations.

Until economic growth is sufficient to propel wages upward, any residual price pressures are likely to be snuffed out by deteriorating real wage growth. Will the job market improve anytime soon? We get a fresh look at initial unemployment claims tomorrow morning, but the July consumer confidence report from the Conference Board indicates that households see a deteriorating jobs picture:

The share of consumers who said jobs are plentiful dropped to 3.6 percent, the lowest level since February 1983. The proportion of people who said jobs are hard to get climbed to 48.1 percent from 44.8 percent.

Lacking a story that leads to strong wage growth in the near - or even medium - term, the Fed is almost certainly on hold at least through this year and likely well into 2010, allowing the size of the balance sheet to adjust according to the needs of the financial markets while keeping interest rates at rock-bottom levels. That doesn't mean all that easy money will not show up somewhere - technical analysts are looking for US equities to explode on the basis of recent market action. But will the Fed lean against such an explosion without clear and convincing evidence that the labor market is poised for strong, sustainable improvement? I doubt it - and for those looking for it, therein lie the ingredients for making the next big bubble.

Sluggish Wages and Employment

Following up briefly on part of Tim's post: once the economy turns the corner, two things must happen before wages begin to increase. First, the slack among currently employed workers must be absorbed. Output can expand through longer hours, reversed temporary shutdowns, eliminated forced furloughs, the end of unpaid vacations, and the like. These changes bring hours and other working conditions back to normal and hence place little, if any, upward pressure on wages. There is a lot of slack in hours alone that can be taken up before the existing workforce is fully utilized, and restoring hours that have been taken away does not require an increase in wages. (In some cases the wage rate was cut instead of hours, and in some cases both were cut, but because the proportion of firms that cut wages is relatively small, reversing those cuts would not have much effect on the overall wage rate, and it would be a one-time change in wages in any case, not continuous upward wage pressure.)

Second, even if the existing workforce reaches normal (full) employment conditions, there are still a lot of workers who are unemployed, and they can be hired at the existing wage rate. It is not until the existing workforce returns to normal and the unemployed find new jobs that wages come under pressure. When the economy is at full employment, expanding the number of workers at a particular firm requires bidding them away from other opportunities, and that pushes wages up. But when there is unemployment, new workers can be hired without being bid away from other jobs, and hence there is little upward pressure on wage rates.

Finally, note that when there is slack in the existing labor force due to a decline in hours worked, etc., there will be a delay between the time the economy turns around and the time when employment begins increasing. This isn't the only reason there is a delay in the response of employment, but it contributes to it.

“Some Thoughts on Wages and Competitiveness”

Another follow-up to Tim's post gleaned from a post by Karl Whelan at The Irish Economy:

Some Thoughts on Wages and Competitiveness, by Karl Whelan: There’s a lively debate going on about ... competitiveness and recovery...

Despite what seems to me to be an exceptionally strong attitude in this country [Ireland] of calling on the government to solve every possible problem, we are largely a market economy and wage rates are set in a relatively decentralised fashion compared with other European countries.  And despite the faith of many that unregulated labour markets should always clear to produce full employment, we have plenty of macroeconomic evidence that this is not the case.

The reality is that, in all economies, negative macroeconomic shocks tend to raise unemployment because wages never adjust quickly enough to get the labour market back to full employment.  This has been a mainstream theme in macroeconomics since, at least, the General Theory. 

In more recent decades, New Keynesian macroeconomic theorists have put forward a plethora of models to explain why the labour market does not operate in the simple market-clearing fashion (efficiency wages, implicit contract theory, bargaining models based on “holdups”).  More recently, behavioural economists have documented the importance of “money illusion”, which makes workers particularly resistant to cuts in nominal wages. The result is a significant amount of empirical evidence demonstrating the existence of nominal and real wage rigidity.

This is not to argue that wages are completely rigid or that the labour market does not have mechanisms to bring unemployment down after a negative shock.  Macroeconomic data generally show good fits for Phillips Curve relationships such that wage growth is low when unemployment is high.  But governments will generally not want to rely only on this mechanism to restore macroeconomic equilibrium because the pace of recovery will be too slow. Instead, they prefer, where possible, to use countercyclical fiscal and monetary policy. ...

I should note that the argument in the full post gives more credence to wage cuts as a recession-fighting strategy than I would. Here's Paul Krugman on the topic:

[W]e may be facing the paradox of wages: workers at any one company can help save their jobs by accepting lower wages, but when employers across the economy cut wages at the same time, the result is higher unemployment.

Here’s how the paradox works. Suppose that workers at the XYZ Corporation accept a pay cut. That lets XYZ management cut prices, making its products more competitive. Sales rise, and more workers can keep their jobs. So you might think that wage cuts raise employment — which they do at the level of the individual employer.

But if everyone takes a pay cut, nobody gains a competitive advantage. So there’s no benefit to the economy from lower wages. Meanwhile, the fall in wages can worsen the economy’s problems on other fronts.

In particular, falling wages, and hence falling incomes, worsen the problem of excessive debt: your monthly mortgage payments don’t go down with your paycheck. America came into this crisis with household debt as a percentage of income at its highest level since the 1930s. Families are trying to work that debt down by saving more than they have in a decade — but as wages fall, they’re chasing a moving target. And the rising burden of debt will put downward pressure on consumer spending, keeping the economy depressed.

Things get even worse if businesses and consumers expect wages to fall further in the future. John Maynard Keynes put it clearly, more than 70 years ago: “The effect of an expectation that wages are going to sag by, say, 2 percent in the coming year will be roughly equivalent to the effect of a rise of 2 percent in the amount of interest payable for the same period.” And a rise in the effective interest rate is the last thing this economy needs.

Concern about falling wages isn’t just theory. Japan — where private-sector wages fell an average of more than 1 percent a year from 1997 to 2003 — is an object lesson in how wage deflation can contribute to economic stagnation.

“Surprising Comparative Properties of Monetary Models”

I need to read this paper:

Surprising Comparative Properties of Monetary Models: Results from a New Data Base, by John B. Taylor and Volker Wieland, May 2009 [open link]: Abstract: In this paper we investigate the comparative properties of empirically-estimated monetary models of the U.S. economy. We make use of a new data base of models designed for such investigations. We focus on three representative models: the Christiano, Eichenbaum, Evans (2005) model, the Smets and Wouters (2007) model, and the Taylor (1993a) model. Although the three models differ in terms of structure, estimation method, sample period, and data vintage, we find surprisingly similar economic impacts of unanticipated changes in the federal funds rate. However, the optimal monetary policy responses to other sources of economic fluctuations are widely different in the different models. We show that simple optimal policy rules that respond to the growth rate of output and smooth the interest rate are not robust. In contrast, policy rules with no interest rate smoothing and no response to the growth rate, as distinct from the level, of output are more robust. Robustness can be improved further by optimizing rules with respect to the average loss across the three models.

links for 2009-07-30

“How Wars, Plagues, and Urban Disease Propelled Europe’s Rise to Riches”

[Note: Travel day today, so I am letting things that are posted do most of the talking. I'll add what I can along the way.]

"This column explains why Europe’s rise to riches in the early modern period owed much to exceptionally bellicose international politics, urban overcrowding, and frequent epidemics."

Cruel windfall: How wars, plagues, and urban disease propelled Europe’s rise to riches, by Nico Voigtländer and Hans-Joachim Voth, Vox EU: In a pre-modern economy, incomes typically stagnate in the long run. Malthusian regimes are characterised by strongly declining marginal returns to labour. One-off improvements in technology can temporarily raise output per head. The additional income is spent on more (surviving) children, and population grows. As a result, output per head declines, and eventually labour productivity returns to its previous level. That is why, in HG Wells' phrase, earlier generations "spent the great gifts of science as rapidly as it got them in a mere insensate multiplication of the common life" (Wells, 1905).

How could an economy ever escape from this trap? To learn more about this question, we should look more closely at the continent that managed to overcome stagnation first. Long before growth accelerated for good in most countries, a first divergence occurred. European incomes by 1700 exceeded those in the rest of the world by a large margin. We explain the emergence of this income gap by a number of uniquely European features – an unusually high frequency of war, particularly unhealthy cities, and numerous deadly disease outbreaks.

The puzzle: The first divergence in worldwide incomes

European incomes by 1700 were markedly higher than they had been in 1500. According to the figures compiled by Angus Maddison (2001), all European countries including Mediterranean ones saw income growth of 35% to 180%. Within Europe, the northwest did markedly better than the rest. English and Dutch real wages surged during the early modern period.

How exceptional was this performance? Pomeranz (2000) claimed that the Yangtze Delta in China was just as productive as England. Detailed work on output statistics suggests that his claims must be rejected. While real wages in terms of grain were some 15-170% higher in England, English silver wages exceeded those of China by 120% to 550%. Since grain was effectively an untraded good internationally before 1800, the proper standard of comparison is the silver wage. Estimates for India suggest a similar gap vis-à-vis Europe (Broadberry and Gupta, 2006).

Urbanisation figures support this conclusion. They serve as a good proxy since people in towns need to be fed by farmers in the countryside. This requires a surplus of food production, which implies high labour productivity. Since agriculture is the largest single sector in all pre-modern economies, a productive agricultural sector is equivalent to high per capita output overall. Figure 1 compares European and Chinese urbanisation rates after the year 1000 AD. Independent of the series used, European rates increase rapidly during the early modern period. Our preferred measure – the DeVries series – increases from 5% to nearly 10% between 1500 and 1800. The contrast with China is striking. There, urbanisation stagnated near the 3% mark.

Figure 1. Europe versus China urbanisation rates, 1000-1800


In a Malthusian world, a divergence in living standards should be puzzling. Income gains from one-off inventions should have been temporary. Even ongoing productivity gains cannot account for the “first divergence” – TFP growth probably did not exceed 0.2%, and cannot explain the marked rise in output per capita.

The answer: Rising death rates and lower fertility

In a Malthusian world, incomes can increase if birth rates fall or death rates increase (Clark, 2007). Figure 2 illustrates the basic logic. Incomes are pinned down by the intersection of birth and death schedules (denoted b and d). The initial equilibrium is E0. If death rates shift out, to d’, incomes rise to the new equilibrium Ed1. Similarly, lower birth rates at any given level of income will lead to higher per capita incomes. In combination, shifts of the birth and death schedules to b’ and d’ will move the economy to equilibrium point E2.

Figure 2. Birth and death rates, and equilibrium per capita income

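As a rough formalization of the logic behind Figure 2, here is a minimal sketch under standard Malthusian assumptions (the notation is mine, not the authors'):

```latex
% Birth and death rates depend on per capita income y: the birth schedule b(y)
% is increasing in y, the death schedule d(y) is decreasing in y.
% Population is stationary, and income is pinned down, where the schedules cross:
\[
  b\!\left(y^{*}\right) = d\!\left(y^{*}\right).
\]
% An outward shift of the death schedule to d', with d'(y) > d(y) at every income
% level, moves the intersection to a higher equilibrium income:
\[
  b\!\left(y^{*}_{d'}\right) = d'\!\left(y^{*}_{d'}\right), \qquad y^{*}_{d'} > y^{*}.
\]
% A downward shift of the birth schedule raises equilibrium income in the same way,
% and the two shifts together give the equilibrium E2 described in the text.
```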

We argue that there were three factors – which we call the “Three Horsemen of Riches” – that shifted Europe’s death schedule outwards: wars, epidemics, and urban disease. Wars were unusually frequent. Epidemics were common, with devastating consequences. Finally, cities were particularly unhealthy, with death rates there exceeding birth rates by a large margin – without in-migration, European cities before 1850 would have disappeared.

Figure 3 shows the percentage of the European population affected by wars (defined as those living in areas where wars were fought). It rises from a little over 10% to 60% by the late seventeenth century. Tilly (1992) estimated that, on average, there was a war being fought somewhere in nine out of every ten years in Europe in the early modern period.

Political fragmentation combined with religious strife after 1500 to form a potent mix that produced almost constant military conflict. While the fighting itself killed relatively few people, armies marching across Europe spread diseases. It has been estimated that a single army of 6,000 men, dispatched from La Rochelle to fight in the Mantuan war, killed up to a million people by spreading the plague (Landers, 2003).

Figure 3. Share of European population in war zones


European cities were much unhealthier than their Far Eastern counterparts. They probably had death rates that exceeded rural ones by 50%. In China, the rates were broadly the same in urban and rural areas. The reason has to do with differences in diets, urban densities, and sanitation:

  • Europeans ate more meat, and hence kept more animals in close proximity,
  • European cities were protected by walls because of the frequent wars; the walls could not be moved outward without major expense, so growing populations were packed into a confined space, and
  • Europeans dumped their chamber pots out of their windows, while human refuse was collected in Chinese cities and used as fertiliser in the countryside.

Epidemics were also frequent. The plague did not disappear from Europe after 1348. Indeed, plague outbreaks continued until the 1720s, peaking at over 700 per decade in the early 17th century. In addition to wars, epidemics were spread by trade. The last outbreak of the plague in Western Europe occurred in Marseille in 1720; a merchant vessel from the Levant spread the disease, causing 100,000 men and women to perish. Since Europe has much greater variety in terms of geography and climate than China, disease pools remained largely separate. When they became increasingly connected as a result of more trade and wars, mortality spiked.

Triggering European “exceptionalism”

In combination, the “Three Horsemen” – war, urbanisation, and trade-driven disease – probably raised death rates by one percentage point by 1700. Once death rates were higher, incomes could remain at an elevated level even in a Malthusian world. The crucial question then becomes why Europe developed such a particular set of factors driving up mortality.

We argue that the Great Plague of 1348-50 was the key. Between one third and one half of Europeans died. With land-labour ratios now higher, per capita output and wages surged. Since population losses were massive, they could not be compensated quickly. For a few generations, the old continent experienced a “golden age of labour”. British real wages only recovered their 1450s peak in the age of Queen Victoria (Phelps-Brown and Hopkins, 1981).

Temporarily higher wages changed the nature of demand. Despite having more children, people had more income than necessary for mere subsistence – population losses were too large to be absorbed entirely by the demographic response. Some of the surplus income was spent on manufactured goods. These goods were mainly produced in cities. Thus, urban centres grew in size. Higher incomes also generated more trade. Finally, the increasing number and wealth of cities expanded the size of the monetised sector of the economy. The wealth of cities could be taxed or seized by rulers. Resources available for fighting wars increased – war was effectively a superior good for early modern princes. Therefore, as per capita incomes increased, death rates rose in parallel. This generates a potential for multiple equilibria. Figure 4 illustrates the mechanism. The death rate increases over some part of the income range, which maps into urbanisation rates. Starting at E0, a sufficiently large shock will move the economy to point EH, where population is again stable.

Figure 4. Equilibria with “Horsemen effect”

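In the same notation as the earlier sketch (again mine, not the authors'), the multiple-equilibria logic behind Figure 4 can be stated roughly as follows:

```latex
% The "Horsemen" make mortality rise with income over part of its range, because
% higher income brings more urbanisation, trade, and war. The death schedule is
% therefore no longer monotonically decreasing, and the stationarity condition
% can hold at more than one income level:
\[
  b\!\left(y_{0}\right) = d\!\left(y_{0}\right)
  \quad\text{and}\quad
  b\!\left(y_{H}\right) = d\!\left(y_{H}\right),
  \qquad y_{H} > y_{0},
\]
% corresponding to E0 and EH in the figure. A sufficiently large one-off shock,
% such as the Black Death, can move the economy from the low-income equilibrium
% to the high-income one, where population is again stable.
```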

In the discussion paper, we calibrate our model. The effect of higher mortality on living standards is large. We find that we can account for more than half of Europe’s precocious rise in per capita incomes until 1700.

Conclusions

To raise incomes in a Malthusian setting, death rates have to rise or fertility rates have to decline. We argue that a number of uniquely European characteristics – the fragmented nature of politics, unhealthy cities, and a geographically heterogeneous terrain – interacted with the shock of the 1348 plague to create exceptionally high mortality rates. These underpinned a high level of per capita income, but the riches were bought at a high cost in terms of human lives.

At the same time, there are good reasons to think that it is not entirely accidental that the countries (and regions) that were ahead in per capita income terms in 1700 were also the first to industrialise. How the world could escape the Malthusian trap at all has become a matter of intense interest to economists in recent years (Galor and Weil, 2000, Jones, 2001, Hansen and Prescott, 2002). In a related paper, we calibrate a simple growth model to show why high per capita income at an early stage may have been key for Europe’s rise after 1800 (Voigtländer and Voth, 2006).

In the “Three Horsemen of Riches”, we ask how Europe got to be rich in the first place. Our answer is best summarised by the smuggler Harry Lime, played by Orson Welles in the 1949 classic “The Third Man”:

"In Italy, for thirty years under the Borgias, they had warfare, terror, murder, bloodshed, but they produced Michelangelo, Leonardo da Vinci and the Renaissance. In Switzerland, they had brotherly love; they had 500 years of democracy and peace – and what did that produce? The cuckoo clock."

We argue that a similar logic held in economic terms before the Industrial Revolution. Europe’s exceptional rise to early riches owed much to forces of destruction – war, aided by frequent disease outbreaks and deadly cities.

References

Bairoch, P., J. Batou, and P. Chèvre (1988). La Population des villes Europeennes de 800 à 1850: Banque de Données et Analyse Sommaire des Résultats. Geneva: Centre d’histoire economique Internationale de l’Université de Genève, Libraire Droz.

Broadberry, S. and B. Gupta (2006). “The Early Modern Great Divergence: Wages, Prices and Economic Development in Europe and Asia, 1500-1800”. Economic History Review 59, 2–31.

Chow, G. C. and A. Lin (1971). “Best Linear Unbiased Interpolation, Distribution, and Extrapolation of Time Series by Related Series”. Review of Economics and Statistics 53(4), 372–375.

Clark, G. (2007). A Farewell to Alms: A Brief Economic History of the World. Princeton: Princeton University Press.

de Vries, J. (1984). European Urbanization 1500-1800. London: Methuen.

Galor, O. and D. N. Weil (2000). “Population, Technology and Growth: From the Malthusian Regime to the Demographic Transition and Beyond”. American Economic Review 90(4), 806–828.

Hansen, G. and E. Prescott (2002). “Malthus to Solow”. American Economic Review 92(4), 1205–1217.

Jones, C. I. (2001). “Was an Industrial Revolution Inevitable? Economic Growth Over the Very Long Run”. Advances in Macroeconomics 1(2). Article 1.

Landers, J. (2003). The Field and the Forge: Population, Production, and Power in the Pre-Industrial West. New York: Oxford University Press.

Maddison, A. (2001). The World Economy. A Millennial Perspective. Paris: OECD.

McEvedy, C. and R. Jones (1978). Atlas of World Population History. New York: Facts on File.

Pomeranz, K. (2000). The Great Divergence: China, Europe, and the Making of the Modern World Economy. Princeton, N.J.: Princeton University Press.

Phelps-Brown, H. and S. V. Hopkins (1981). A Perspective of Wages and Prices. London and New York: Methuen.

Tilly, C. (1992). Coercion, Capital, and European States, AD 990-1992. Oxford: Blackwells.

Voigtländer, N. and H.-J. Voth (2008). “The Three Horsemen of Growth: Plague, War and Urbanization in Early Modern Europe”. CEPR discussion paper 7275.

Voigtländer, N. and H.-J. Voth (2006). “Why England? Demographic Factors, Structural Change and Physical Capital Accumulation during the Industrial Revolution”. Journal of Economic Growth 11, 319–361.

Wells, H. G. (1905). A Modern Utopia.

“Why had Nobody Noticed that the Credit Crunch Was on its Way?”

A letter to the Queen attempting to explain why economists missed the financial crisis:

Her Majesty The Queen
Buckingham Palace
London
SW1A 1AA

MADAM,

When Your Majesty visited the London School of Economics last November, you quite rightly asked: why had nobody noticed that the credit crunch was on its way? The British Academy convened a forum on 17 June 2009 to debate your question... This letter summarises the views of the participants ... and we hope that it offers an answer to your question.

Many people did foresee the crisis. However, the exact form that it would take and the timing of its onset and ferocity were foreseen by nobody. ...

There were many warnings about imbalances in financial markets... But the difficulty was seeing the risk to the system as a whole rather than to any specific financial instrument or loan. Risk calculations were most often confined to slices of financial activity, using some of the best mathematical minds in our country and abroad. But they frequently lost sight of the bigger picture.

Many were also concerned about imbalances in the global economy ... known as the ‘global savings glut’. ... This ... fuelled the increase in house prices both here and in the USA. There were many who warned of the dangers of this.

But against those who warned, most were convinced that ... the financial wizards had found new and clever ways of managing risks. Indeed, some claimed to have so dispersed them through an array of novel financial instruments that they had virtually removed them. It is difficult to recall a greater example of wishful thinking combined with hubris. There was a firm belief, too, that financial markets had changed. ... A generation of bankers and financiers deceived themselves and those who thought that they were the pace-making engineers of advanced economies.

All this exposed the difficulties of slowing the progression of such developments in the presence of a general ‘feel-good’ factor. Households benefited from low unemployment, cheap consumer goods and ready credit. Businesses benefited from lower borrowing costs. Bankers were earning bumper bonuses... The government benefited from high tax revenues... This was bound to create a psychology of denial. It was a cycle fuelled, in significant measure, ... by delusion.

Among the authorities charged with managing these risks, there were difficulties too. ... General pressure was for more lax regulation – a light touch. ...

There was a broad consensus that it was better to deal with the aftermath of bubbles ... than to try to head them off in advance. Credence was given to this view by the experience, especially in the USA ... when a recession was more or less avoided after the ‘dot com’ bubble burst. This fuelled the view that we could bail out the economy after the event.

Inflation remained low and created no warning sign of an economy that was overheating. ... But this meant that interest rates were low by historical standards. And some said that policy was therefore not sufficiently geared towards heading off ... risks. ... But on the whole, the prevailing view was that monetary policy was best used to prevent inflation and not to control wider imbalances in the economy.

So where was the problem? Everyone seemed to be doing their own job properly... And according to standard measures of success, they were often doing it well. The failure was to see how collectively this added up to a series of interconnected imbalances over which no single authority had jurisdiction. This, combined with the psychology of herding and the mantra of financial and policy gurus, led to a dangerous recipe. Individual risks may rightly have been viewed as small, but the risk to the system as a whole was vast.

So in summary, Your Majesty, the failure..., while it had many causes, was principally a failure of the collective imagination of many bright people, both in this country and internationally, to understand the risks to the system as a whole. ...

We have the honour to remain, Madam,
Your Majesty’s most humble and obedient servants

Professor Tim Besley, FBA
Professor Peter Hennessy, FBA

[See also At your own risk and Economists were beholden to the long boom.]

Exchequer Tallies

The "first experiment with derivative financial instruments":

Theory of Games and Economic Misbehavior, by George Dyson, Edge: ...There are numerous precedents for [the derivatives now haunting us].

As early as the twelfth century it was realized that money ... can be made to exist in more than one place at a single time. An early embodiment of this principle, preceding the Bank of England by more than five hundred years, was the Exchequer tally — a notched wooden stick issued as a receipt for money deposited with the Exchequer for the use of the king. "As a financial instrument and evidence it was at once adaptable, light in weight and small in size, easy to understand and practically incapable of fraud," wrote Hilary Jenkinson in 1911. ...

A precise description was given by Alfred Smee... "The tally-sticks were made of hazel, willow, or alder wood, differing in length according to the sum required to be expressed upon them. They were roughly squared, and one end was pointed; and on two sides of that extremity, the proper notches, showing the sum for which the tally was a receipt, were cut across the wood."

On the other two sides of the tally were written, in ink and in duplicate, the name of the party paying the money, the account for which it was paid, and the date of payment. The tally was then split in two, with each half retaining the notched information as well as one copy of the inscription. "One piece was then given to the party who had paid the money, for which it was a sufficient discharge," Smee continues, "and the other was preserved in the Exchequer. Rude and simple as was this very ancient method of keeping accounts, it appears to have been completely effectual in preventing both fraud and forgery for a space of seven hundred years. No two sticks could be found so exactly similar ... when split in the coarse manner of cutting tallies; and certainly no alteration of the ... notches and inscription could remain undiscovered when the two parts were again brought together. ..."

Exchequer tallies were ordered replaced in 1782 by an "indented cheque receipt," but the Act of Parliament (23 Geo. 3, c. 82) thereby abolishing "several useless, expensive and unnecessary offices" was to take effect only on the death of the incumbent who, being "vigorous," continued to cut tallies until 1826. "After the further statute of 4 and 5 William IV the destruction of the official collection of old tallies was ordered," noted Hilary Jenkinson. "The imprudent zeal with which this order was carried out caused the fire which destroyed the Houses of Parliament in 1834."

The notches were of various sizes and shapes corresponding to the tallied amount: a 1.5-inch notch for £1000, a 1-inch notch for £100, a half-inch notch for £20, with smaller notches indicating pounds, shillings, and pence, down to a halfpenny, indicated by a pierced dot. The code was similar to bar-coding... And the self-authentication achieved by distributing the information across two halves of a unique piece of wood is analogous to the way large numbers, split into two prime factors, are used to authenticate digital financial instruments today. Money was being duplicated: the King gathered real gold and silver into the treasury through the Exchequer, yet the tally given in return allowed the holder to enter into trade, manufacturing, or other ventures, producing real wealth with nothing more than a wooden stick.

Until the Restoration tallies did not bear interest, but in 1660, on the accession of Charles II, interest-bearing tallies were introduced. They were accompanied by written orders of loan which, being made assignable by endorsement, became the first negotiable interest-bearing securities in the English-speaking world. Under pressure of spiraling government expenditures the order of loan was soon joined by an instrument called an order of the Exchequer, drawn not against actual holdings but against future revenue and sold at a discount to the private goldsmith bankers whose hard currency was needed to prop things up. In January 1672, unable to meet its obligations, Charles II declared a stop on the Exchequer. At the expense of the private bankers, this first experiment with derivative financial instruments came to an end. ...

Wealth Inequality

Daniel Little on wealth inequality:

Wealth inequality, by Daniel Little: When we talk about inequality in the United States, we usually have a couple of different things in mind. We think immediately of income inequality. Inequalities of important life outcomes come to mind (health, housing, education), and, of course, we think of the inequalities of opportunity that are created by a group's social location (race, urban poverty, gender). But a fundamental form of inequality in our society is a factor that influences each of these: inequalities of wealth across social groups. Wealth refers to the ownership of property, tangible and intangible: for example, real estate, stocks and bonds, savings accounts, businesses, factories, mines, forests, and natural resources. Two facts are particularly important when it comes to wealth: first, that wealth is in general very unevenly distributed in the United States, and second, that there are very striking inequalities when we look at the average wealth of major social groups.

Edward Wolff has written quite a bit about the facts and causes of wealth inequality in the United States. A recent book, Top Heavy: The Increasing Inequality of Wealth in America and What Can Be Done About It, Second Edition, is particularly timely; also of interest is Assets for the Poor: The Benefits of Spreading Asset Ownership. Wolff summarizes his conclusion in these stark terms:

The gap between haves and have-nots is greater now--at the start of the twenty-first century--than at any time since 1929. The sharp increase in inequality since the late 1970s has made wealth distribution in the United States more unequal than it is in what used to be perceived as the class-ridden societies of northwestern Europe. ... The number of households worth $1,000,000 or more grew by almost 60 percent; the number worth $10,000,000 or more almost quadrupled. (2-3)

The international comparison of wealth inequality is particularly interesting. Wolff provides a chart of the share of marketable wealth held by the top percentile in the UK, Sweden, and the US, from 1920 to 1992. The graph is striking. Sweden starts off in 1920 with 40% of wealth in the hands of the top one percent, and falls fairly steadily to just under 20% in 1992. The UK starts at a staggering 60% (!) in the hands of the top 1 percent in 1920, and again falls steadily, to a 1992 level of just over 20%. The US shows a different pattern. It starts at 35% in 1920 (the lowest of the three countries), then rises and falls slowly around the 30% level. The US share begins a downward trend in the mid-1960s, falling to a low of 20% in the 1970s; then, during the Reagan years and after, the share held by the top 1 percent rises back to roughly 35%. So, by this measure, we are roughly back to where we were in 1920 when it comes to wealth inequality in the United States.

Why does this kind of inequality matter?

Partly because significant inequalities of wealth have important implications for such things as the relative political power of various groups; the opportunities that groups have within and across generations; and the relative security that various individuals and groups have when faced with economic adversity. People who own little or nothing have little to fall back on when they lose a job, face a serious illness, or move into retirement. People who have a lot of wealth, by contrast, are able to exercise a disproportionate amount of political influence; they are able to ensure that their children are well educated and well prepared for careers; and they have substantial buffers when times are hard.

Wolff offers a good summary of the empirical data about wealth inequalities in the United States. But we'd also like to know something about the mechanisms through which this concentration of wealth occurs. Several mechanisms come readily to mind. People who have wealth have an advantage in gathering the information necessary to increase their wealth; they have networks of other wealth holders who can improve their access to opportunities for wealth acquisition; they have advantages in gaining advanced professional and graduate training that increase their likelihood of assuming high positions in wealth-creating enterprises; and they can afford to include high-risk, high-gain strategies in their investment portfolios. So there is a fairly obvious sense in which wealth begets wealth.

But part of this system of inequality of wealth ownership in the United States has to do with something else: the workings of race. The National Urban League publishes an annual report on "The State of Black America." One of the measures that it tracks is the "wealth gap" -- the differential in home ownership between black and white adults. This gap continues to persist, and many leaders in the effort towards achieving equality of opportunity across racial groups point to this structural inequality as a key factor. Here is a very good study on home ownership trends for black and white adults done by George Masnick at the Joint Center for Housing Studies at Harvard (2001). The gap in the 1990s fluctuated around 28% -- so, for example, in 1988-1998 about 52% of blacks between 45 and 54 were home owners, whereas about 80% of non-Hispanic whites in this age group were homeowners (figure 5). Historical practices of mortgage discrimination against specific neighborhoods influence home ownership rates, as do other business practices associated with the workings of residential segregation. Some of these mechanisms are illustrated in Kevin Kruse and Thomas Sugrue's The New Suburban History, and Kevin Boyle's Arc of Justice: A Saga of Race, Civil Rights, and Murder in the Jazz Age provides an absorbing account of how challenging "home ownership" was for professional black families in Detroit in the 1920s.

So what are the remedies for the very high level of wealth inequality that is found in the United States? Wolff focuses on tax remedies, and certainly these need to be a part of the story. But remedying the social obstacles that exist for disadvantaged families to gain property -- most fundamentally, disadvantages that derive from the educational opportunities that are offered to children and young people in inner-city neighborhoods -- is crucial as well. It seems axiomatic that the greatest enhancement that can be offered to a young person is a good education; and this is true in the question of wealth acquisition no less than the acquisition of other socially desirable things.

links for 2009-07-29

Where are the Technocratic Institutions?

Brad DeLong wonders why the response to the financial crisis hasn't included technocratic institutions to limit executive power:

Conservative Interventionism, by J. Bradford DeLong, Commentary, Project Syndicate: At this stage in the worldwide fight against depression, it is useful to stop and consider just how conservative the policies implemented by the world’s central banks, treasuries, and government budget offices have been. Almost everything that they have done – spending increases, tax cuts, bank recapitalisation, purchases of risky assets,... and ... money-supply expansions – has followed a policy path that is nearly 200 years old...

The place to start is 1825, when panicked investors wanted their money invested in safe cash rather than risky enterprises. Robert Banks Jenkinson, Second Earl of Liverpool and First Lord of the Treasury for King George IV, begged Cornelius Buller, Governor of the Bank of England, to act to prevent financial-asset prices from collapsing. “We believe in a market economy,” Lord Liverpool’s reasoning went, “but not when the prices a market economy produces lead to mass unemployment on the streets of London, Bristol, Liverpool, and Manchester.”

The Bank of England acted: it intervened in the market and bought bonds for cash, pushing up the prices of financial assets and expanding the money supply. It loaned on little collateral to shaky banks. It announced its intention to stabilize the market – and that bearish speculators should beware.

Ever since, whenever governments largely ... let financial markets work their way out of a panic by themselves – 1873 and 1929 in the United States come to mind – things turned out badly. But whenever government stepped in or deputized a private investment bank to support the market, things appear to have gone far less badly. ... [F]ew modern governments are now willing to let financial markets heal themselves. To do so would be a truly radical step indeed. The Obama administration and other central bankers and fiscal authorities around the globe are thus, in a sense, acting very conservatively... I ... am somewhat reluctant to second-guess them.  ...

Nevertheless, I do have one big question. The US government especially, but other governments as well, have gotten themselves deeply involved in industrial and financial policy during this crisis. They have done this without constructing technocratic institutions like the 1930’s Reconstruction Finance Corporation and the 1990’s RTC, which played major roles in allowing earlier episodes of extraordinary government intervention into the industrial and financial ... economy ... without an overwhelming degree of corruption and rent seeking. The discretionary power of executives, in past crises, was curbed by new interventionist institutions constructed on the fly by legislative action.

That is how America’s founders ... envisioned that things would work. They were suspicious of executive power, and thought that the president should have rather less discretionary power than the various King Georges of the time. ...

So I wonder: why didn’t the US Congress follow the RFC/RTC model when authorising George W. Bush’s and Barack Obama’s industrial and financial policies? Why haven’t the technocratic institutions that we do have, like the IMF, been given a broader role in this crisis? And what can we do to rebuild international financial-management institutions on the fly to make them the best possible?

Equity and Efficiency in Health Care Markets

This is an attempt to clarify a few of the remarks I've made over the last several days regarding the need for government intervention in health care markets.

There are two separate reasons to intervene: market failure and equity. Taking market failure first, there are a variety of failures in health care and insurance markets such as asymmetric information, market power, and principal-agent problems. These can be solved by the private sector in some cases, but in others government intervention is required.

But even if the private sector or the government can solve the market failure problems adequately, there's no guarantee that the resulting distribution of health care services will be equitable. We don't expect the private sector to make sure that everyone who wants one can live on the coast with an ocean view; we use market prices to ration those goods. But we may want to make sure that everyone can get health care when they have a serious illness. So equity considerations may prompt the government to intervene and bring about a different distribution of health care services than would occur with an efficient market.

I believe that economists have something to offer in both cases. In the first, economic theory offers solutions to market failures, and though not every market failure can be completely overcome, the solutions can guide effective policy responses. I prefer market-based regulation to command-and-control solutions whenever possible, i.e. I prefer that government create the conditions for markets to function rather than intervene directly. But sometimes the only solution is to intervene directly and forcefully.

In the second case, the idea is a bit different. Here, equity is the issue, so society must first designate the outcome it is trying to produce before economists can help to achieve it. Right now, my perception is that the majority of people want to expand coverage to universal or near-universal levels if we can do so without breaking the bank, and without reducing the care they are used to. If we can find a way to do that, the majority will come on board. If that's the case, if that's what we have collectively decided we want, then the job of economists is to find the best possible way of achieving that outcome (or whatever outcome is desired) given whatever constraints bind the process (whether political realities should be part of the set of constraints is a point of contention, so I'll stay silent on that).

So if we are only concerned about efficiency, we do our best to resolve the market failures and leave it at that. We make sure, for example, that people have the information they need to make informed decisions about their care, that there aren't incentives that cause doctors to order too much or too little of some type of care or test, that monopoly power is checked, etc., etc. There's no guarantee that everyone will receive care, or that the distribution of care among those who do receive care will be as desired.

But if we are concerned with equity too - and most of us aren't comfortable watching people suffer when we know that help is readily available (perhaps nature imposes this externality upon us purposefully), and our sense of fairness and equity won't let us leave people to die on the street or suffer needlessly - then we will want to intervene to achieve broad-based coverage in the least costly and fairest manner we can find (and there may be other equity issues that are important too).

Both reasons, equity and efficiency, can justify government intervention into health care markets. I think equity is of paramount importance when it comes to health care, so for me that is enough to justify government intervention, and the existence of market failure simply adds to the case that government intervention is needed.

So those opposed to government involvement in health care markets have to first argue that there is no market failure significant enough to justify intervention, a tough argument in and of itself, and also argue that people who, for example, go without insurance or cannot afford the basic care they need deserve no compassion whatsoever from society more generally. That's an argument I could never make even for those who could have paid for insurance but chose to take a chance they wouldn't need care, let alone for those who cannot afford it under any circumstances. I want everyone to be covered as efficiently as possible, and to be required to pay their fair share of the bill, whatever that might be, for the care that's made available to them.

What’s the Matter with the Blue Dogs?

Jacob Hacker wonders why the Blue Dogs oppose health care reform that could provide significant help to their constituents:

Health Care for the Blue Dogs, by Jacob S. Hacker, Commentary, Washington Post: The fate of health-care reform ... hinges on ... the ... "Blue Dogs" -- who are threatening to jump ship.

The main worry expressed by the Blue Dogs is that the ... leading bills ... won't bring down medical inflation. The irony is that the Blue Dogs' argument -- that a new public insurance plan designed to compete with private insurers should be smaller and less powerful, and that Medicare and this new plan should pay more generous rates to rural providers -- would make reform more expensive, not less. The further irony is that the federal premium assistance that the Blue Dogs worry is too costly ... would make health-care affordable for a large share of their constituents. ...

Increasing what doctors and hospitals are paid by the new public plan, as the Blue Dogs desire, would only raise premiums and health costs for their constituents. It would also fail to address excessive payments to hospitals and specialists...

Many Blue Dogs fret that a new public health insurance plan will become too large... Their concern should be that a public plan will be too weak. A public health plan will be particularly vital for Americans in the rural areas that many Blue Dogs represent. ...

Yet the Blue Dogs have mostly ignored the huge benefits of a new public plan for their districts. ... Right now, large swaths of farmers, ranchers and self-employed workers can barely afford a policy ... or are uninsured. They will benefit greatly from the premium assistance in the House legislation..., from additional subsidies for small businesses to cover their workers, and from a new national purchasing pool, or "exchange," giving those employers access to low-cost group health insurance that's now out of reach.

And given that Blue Dogs are worried about the ... cost of reform, they should applaud the House bill's requirement that all but the smallest of employers make a meaningful contribution to the cost of coverage. This will not just raise much-needed revenue..., it will also reduce the incentive for employers to drop coverage and let their workers go into the pool, increasing the size of the exchange and the public plan.

Blue Dogs have the future of health-care reform in their hands. If they hold firm to their principles of fiscal responsibility and effective relief for workers and employers in their districts, what's good for Blue Dogs will also be good for America.

Maybe their most important constituents aren't the voters in their districts?

links for 2009-07-28

Interconnectedness and the Distribution of Default Risk

I was asked what went wrong that caused economists to miss the financial crisis. For me, a key part of it was the belief in the risk distribution model. Let me give a simple example of how risk distribution works:

There are 100 people, each has $1,000 saved, and those balances are sitting idle, they have not been loaned out.

There are 100 different people who have loan projects that promise to pay more than simply putting the money in the bank (for simplicity, assume bank deposits earn no interest, but if they do, that won't change any of the conclusions drawn below). However, the default rate on these loans is 10%.

Suppose that the individuals with the accumulated savings are very risk averse. In particular, suppose that they only have this money temporarily, they will have their own bills to pay in the future (e.g. they will need to repay other types of loans), and they are just looking to put the money to work safely in the interim. If they lose any principal, they will go into default on the loans they need to repay in the future, and that's not a risk they are willing to take.

But this means no loans will be made. With a default rate of 10%, 10 of the 100 lenders will, in fact, lose everything, and that would mean going into default themselves. Thus, without some means of sharing risk, none of them are willing to risk losing all of their savings, at least not at an interest rate anyone would be willing to pay, and the market will not exist.

Now suppose that there are financial market intermediaries who come up with the following innovation to distribute risk. They will accept the deposits and pay 3.5% on them, and they will make loans at 15% (I'm assuming that the demand for these loans exists to avoid complicating things unnecessarily).

Let's see what happens if the savers take them up on their 3.5% offer, and then the deposits are lent at 15%. First, let's look at the original principal. There are 100 loans of $1,000 for a total value of (100)*($1,000) = $100,000. But not all of it is paid back. Subtract off the 10% of loans that default, i.e. subtract $10,000 leaving a payback of $90,000. So the original principal falls from $100,000 to $90,000 due to defaults (assuming a zero scrap value).

But the 15% interest rate is more than sufficient to cover the $10,000 loss so that nobody actually loses anything. To see this, the next step is to add interest to the $90,000 in good loans. Since 90 people pay back $150 in interest each, the interest return is $13,500, more than the $10,000 loss. Thus, the total amount paid back, with interest, is $103,500. Now divide this among the lenders, i.e. divide this by 100 to get $1,035 returned to each person who made a loan. Thus, with the risks distributed across all the lenders, instead of 10 people losing everything, everyone makes 3.5% (I didn't build bank profit into the example, but that's easy to do).

So in this example, rather than 10% of the lenders losing everything, a risk they won't take, they all make 3.5% on their investment. So long as the 10% default rate is accurate, this is a fairly certain return and they will be willing to enter the market.

(Note, however, that if the default rate turns out to be, say, 25% instead of 10%, then the lenders will lose principal: at a 25% default rate only 75 loans are repaid, so each lender gets back just $862.50, a $137.50 shortfall. This could cause them to default on their own loan payments, and that could in turn bring about more defaults in a spreading, domino-style collapse.)
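Here is a minimal sketch in Python of the arithmetic above. The code and function name are mine, just to make the numbers easy to check; the figures (100 savers, $1,000 each, a 15% loan rate, and default rates of 10% and 25%) come from the example:

```python
# A minimal sketch of the risk-pooling arithmetic in the example above.
# All figures come from the text; defaulted loans are assumed to repay nothing.

def pooled_payout(n_loans=100, principal=1_000.0, loan_rate=0.15, default_rate=0.10):
    """Total repaid to the pool and the resulting payout per saver."""
    good_loans = n_loans * (1 - default_rate)           # loans repaid in full
    repaid = good_loans * principal * (1 + loan_rate)   # principal plus interest
    return repaid, repaid / n_loans

total, per_saver = pooled_payout()
print(total, per_saver)   # 103500.0 1035.0 -> every saver earns 3.5%

# If the default rate turns out to be 25% instead of 10%, the pool falls short:
total, per_saver = pooled_payout(default_rate=0.25)
print(total, per_saver)   # 86250.0 862.5 -> each saver is $137.50 short of principal
```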

Before moving on to what I missed - I'm in no hurry to point that out - note one thing about this example. Risk distribution does not reduce risks overall. It does reduce the size of the risk that an individual faces - nobody loses everything unless every single loan defaults (with zero repayment in every case) - but overall the losses are still $10,000 whether individuals or intermediaries make the loans. There are ways in which financial intermediation can reduce overall risk, e.g. the expertise of intermediaries at assessing risk is supposed to reduce the 10% default rate, and generally it would; I just didn't build that into the example. But the point is that risk distribution does what it says: it distributes risk, it does not reduce it. Many people misunderstood this.

O.K., here's where I went wrong, or one place anyway. I thought that default in the mortgage market would be like the default of these loans. The defaults would be distributed through complex financial products not just among U.S. lenders, but throughout the world, and that meant nobody would lose very much, certainly not enough to cause big problems. If problems developed, everyone would lose a little bit just like above. This belief was widespread among economists. The financial innovation driven by fancy mathematical models was supposed to assure that risk was widely distributed, and the insiders in these markets repeatedly reassured everyone that if problems did develop, they would be so widely dispersed that there was nothing to worry about.

But that's not what happened. Why? One reason is simple. The default rate was higher than expected, and that brought about unexpected losses. For example, in the illustration above a 25% default rate means a loss of $137.50 on each $1,000 deposit, leaving lenders short just when the money is needed to repay other loans. But those losses still should have been widely dispersed, widely enough to avoid big problems.

But there's something else that explains how these losses spread to create such a big problem. The degree to which the people making the loans and the people taking out the loans were interconnected was misunderstood (that is, risks were more concentrated than we thought). The borrowers and lenders had far more financial interconnections than we noticed or knew about - there was a lot of borrowing and lending among them that was hidden or ignored - and when a higher than expected number of borrowers defaulted, some of the people expecting payments from the lenders were forced into default as well. In the example above, remember that the lenders only had the money short-term; they would need it later to repay their own debts and were just trying to make something on the accumulated balances in the intervening period. But with a loss of $137.50 each rather than the anticipated $35 gain, they are short on funds and hence must sell assets, call in loans, reduce consumption, and so on to try to accumulate enough cash to pay what they owe. Not everyone will be able to come up with the money they need, especially as asset prices fall when assets are put up for sale, loans dry up, etc., and that will cause more defaults and the problems will spread. Thus, as lenders and everyone else try to rebuild what was lost so they can pay their own bills, that causes even more difficulty, and the result is more defaults on loans - a process that feeds on itself in a downward spiral of defaults and further problems.
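To illustrate (and only illustrate) that downward spiral, here is a toy sketch. The parameters - each lender starting $137.50 short, holding $500 of other assets, and a linear price-impact coefficient - are my assumptions, not numbers from the post:

```python
# A stylized fire-sale loop: selling assets to cover a cash shortfall pushes the
# asset price down, which marks down the remaining holdings and widens the gap
# for the next round of selling. Purely illustrative; all parameters are assumed.

def fire_sale_spiral(shortfall=137.50, holdings=500.0, price=1.0,
                     impact=0.0005, rounds=5):
    for r in range(1, rounds + 1):
        units_sold = shortfall / price                 # units dumped to raise the cash
        new_price = max(0.05, price - impact * units_sold)
        shortfall += holdings * (price - new_price)    # mark-to-market hit on what's left
        price = new_price
        print(f"round {r}: price {price:.3f}, shortfall {shortfall:.2f}")

fire_sale_spiral()   # the price ratchets down and the shortfall grows each round
```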

So a key thing I missed was the degree to which these markets are interconnected, and that may explain why I've emphasized finding better measures of interconnectedness, and then insulating markets against it, as part of the reform process (and leverage is a key factor driving those interconnections).

Update to “A Breakthrough in the Fight against Hunger”

The post "A Breakthrough in the Fight against Hunger" summarizes Jeff Sachs' favorable view of the G-8’s $20bn initiative on smallholder agriculture (e.g. to provide assistance buying seed and fertilizer), and also gives Murat Iyigun's view of the type of developmental assistance advocated by many economists. Since Iyigun mentions Bill Easterly explicitly, and since Easterly and Sachs have an ongoing debate on this (and many other) issues, I promised an update if Bill Easterly responded. I just received this email:

Sachs mentions the lessons of history, but doesn't acknowledge the nearly universal agreement that past efforts at African Green Revolutions (with the same list of interventions that Sachs lists) have failed (see the documentation in my recent JEL article -- ungated version here). That doesn't mean giving up, but it does mean learning from history, trying to figure out why it failed in the past and correcting it -- why does Sachs find this idea so threatening?

On Iyigun's blog, I'm so happy to finally find somebody who gets it, that you shouldn't invade countries based on economists' crappy econometrics, that I have nothing else to add. I have had a lot more difficulty convincing people of this than I expected.

Adverse Selection

With health care reform in the news, there's been quite a bit of talk about adverse selection and the degree to which it is actually a problem in health care and health insurance markets. Some people have even gone so far as to question whether significant adverse selection effects exist at all outside of textbooks, since when they look at the marketplace they have a hard time finding them.

But the thing is, if you go looking for it in the marketplace, you aren't likely to find it. Unless the problem has been largely overcome, either through government intervention or through private sector institutions constructed to fix it (generally intermediaries who can solve the information problem that generates the market failure), the market will fail to exist at all. So you will either observe a fairly well-functioning market that has overcome the problem, or you won't see a market at all.

So if you want evidence of adverse selection, you should look for the institutions designed to overcome the problem - used car dealers acting as intermediaries, with the expertise needed to overcome the one-sided information problem about car quality, who then issue quality guarantees (or develop a reputation for quality), that sort of thing - and those types of intermediaries are easy to find. Evidence of the institutions needed to overcome adverse selection - and hence evidence that the problem exists - isn't hard to find. Furthermore, very often government intervention isn't needed; the market can solve this on its own.
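For the curious, a minimal "market for lemons" sketch makes the point concrete. The parameters are illustrative assumptions on my part (quality uniform on [0, 1], buyers valuing a car at 1.5 times its quality), not anything from the discussion above:

```python
# A minimal "market for lemons" sketch of adverse selection: with one-sided
# information the market simply unravels, while an intermediary that can
# certify quality restores trade. All parameters are hypothetical.

import random
random.seed(0)

cars = [random.uniform(0, 1) for _ in range(10_000)]   # quality, known only to the seller
BUYER_PREMIUM = 1.5   # assumed: buyers value a car of quality q at 1.5 * q

def pooled_market_survives(price: float) -> bool:
    """At a single pooled price, sellers only offer cars worth no more than that price."""
    offered = [q for q in cars if q <= price]          # the better cars are withheld
    if not offered:
        return False
    avg_quality = sum(offered) / len(offered)
    return BUYER_PREMIUM * avg_quality >= price        # buyers must at least break even

# Without an intermediary, no pooled price works: the average quality of what
# is actually offered is always too low to justify the price, so the market
# you would go looking for simply isn't there.
for price in (1.0, 0.75, 0.5, 0.25, 0.1):
    print(f"pooled price {price:.2f}: market survives? {pooled_market_survives(price)}")

# With an intermediary that certifies quality, a car of quality q is worth
# 1.5 * q to the buyer and q to the seller, so every car has gains from trade
# and can sell at a quality-specific price - the institution is what you observe.
print("cars that can trade once quality is certified:", len(cars))
```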

And the market will solve it on its own in the case of health care, but we may not like the solution the market comes up with. First, it violates our sense of equity, since the solution will be to prevent people likely to have high health costs from getting insurance (or to price insurance so high that they are effectively excluded). But we will still have to provide for them; we can't just abandon them to suffer when help can be provided. It's one thing if someone cannot sell their car due to market failure; it's quite another if they cannot get the medicine or care they need to maintain their health. So the private sector solution may not be morally acceptable. Second, because we have to provide for the sick in any case, the resources devoted to excluding people are wasted; all that happens is that the problem is shunted off to a generally more expensive option.
So it's not that the private sector cannot solve this problem at all - that's not why we need government to intervene - it's that the solution the market imposes violates our moral sensibilities and wastes resources that could be used more productively.

[On the run today and writing this sitting in my car in a parking lot. Mobility is getting better.]

One Dog, Two Dog, Red Dog, Blue Dog

Robert Waldmann explains Basic Football Terminology:

To Red Dog (alternative phrase for to blitz):

Linebacker crosses line of scrimmage attempting to sack opposing quarterback.

Often works, sometimes risky. Shows that player (and/or defensive coordinator) has guts.

To Blue Dog (alternative phrase for To Benedict Arnold):

Linebacker crosses own goal line and spikes own helmet.

Shows that player forgot which team he is on.

Paul Krugman: An Incoherent Truth

Paul Krugman rubs Blue Dog noses in the pile of incoherence they left in the House:

An Incoherent Truth, by Paul Krugman, Commentary, NY Times: Right now the fate of health care reform seems to rest in the hands of relatively conservative Democrats — mainly members of the Blue Dog Coalition, created in 1995. And you might be tempted to say that President Obama needs to give those Democrats what they want. But he can’t — because the Blue Dogs aren’t making sense. ...

Reform, if it happens, will rest on four main pillars: regulation, mandates, subsidies and competition. ... The subsidy portion of health reform would cost around a trillion dollars over the next decade..., this expense would be offset with a combination of cost savings elsewhere and additional taxes, so that there would be no overall effect on the federal deficit.

So what are the objections of the Blue Dogs? Well, they talk a lot about fiscal responsibility, which basically boils down to worrying about the cost of those subsidies. And it’s tempting to stop right there, and cry foul. After all, where were those concerns about fiscal responsibility back in 2001, when most conservative Democrats voted enthusiastically for that year’s big Bush tax cut — a tax cut that added $1.35 trillion to the deficit?

But it’s actually much worse than that — because even as they complain about the plan’s cost, the Blue Dogs are making demands that would greatly increase that cost.

There has been a lot of publicity about Blue Dog opposition to the public option, and rightly so: a plan without a public option ... would cost taxpayers more...

But Blue Dogs have also been complaining about the employer mandate, which is even more at odds with their supposed concern about spending. The Congressional Budget Office has already weighed in on this issue: without an employer mandate, health care reform would be undermined as many companies dropped their existing insurance plans, forcing workers to seek federal aid — and causing the cost of subsidies to balloon. It makes no sense at all to complain about the cost of subsidies and at the same time oppose an employer mandate.

So what do the Blue Dogs want?

Maybe they’re just being complete hypocrites. It’s worth remembering the history of one of the Blue Dog Coalition’s founders: former Representative Billy Tauzin of Louisiana. Mr. Tauzin switched to the Republicans soon after the group’s creation; eight years later he pushed through the 2003 Medicare Modernization Act, a deeply irresponsible bill that included huge giveaways to drug and insurance companies. And then he left Congress to become, yes, the lavishly paid president of PhRMA, the pharmaceutical industry lobby.

One interpretation, then, is that the Blue Dogs are basically following in Mr. Tauzin’s footsteps: if their position is incoherent, it’s because they’re nothing but corporate tools, defending special interests. And as the Center for Responsive Politics pointed out in a recent report, drug and insurance companies have lately been pouring money into Blue Dog coffers.

But I guess I’m not quite that cynical. After all, today’s Blue Dogs are politicians who didn’t ... switch parties even when the G.O.P. seemed to hold all the cards and pundits were declaring the Republican majority permanent. So these are Democrats who, despite their relative conservatism, have shown some commitment to their party and its values.

Now, however, they face their moment of truth. For they can’t extract major concessions on the shape of health care reform without dooming the whole project: knock away any of the four main pillars of reform, and the whole thing will collapse — and probably take the Obama presidency down with it.

Is that what the Blue Dogs really want to see happen? We’ll soon find out.