Today’s Economist: Uwe E. Reinhardt: What Hospitals Charge the Uninsured

Uwe E. Reinhardt is an economics professor at Princeton. He has some financial interests in the health care field.

Steven Brill’s exposé on hospital pricing in Time magazine predictably provoked from the American Hospital Association a statement seeking to correct the impression left by Mr. Brill that the United States hospital industry is hugely profitable.

In this regard, the association can cite not only its own regularly published data, but also data from the independent and authoritative Medicare Payment Advisory Commission, or Medpac, established by Congress to advise it on paying the providers of health care for treating Medicare patients.

As shown by Chart 6-19 of Medpac’s report from June 2012, “Health Care Spending and the Medicare Program,” the average profit margin (defined as net profit divided by total revenue) for the hospital industry over all is not extraordinarily high, although for a largely nonprofit sector I would rate it more than adequate.

Source: Medicare Payment Advisory Commission

But in each year there is a large variance about that year’s average shown in Chart 6-19, with about 25 to 30 percent of hospitals reportedly operating in the red and many others earning margins below the averages.

The hospital association also correctly points out that under the pervasive price discrimination that is the hallmark of American health care, the profit margin a hospital earns is the product of a complicated financial juggling act among its mix of payers.

Payers with market muscle — for example, the federal Medicare and state Medicaid programs — can get away with paying prices below what it costs to treat patients (see, for example, Figure 3-5 and Table 3-4 in Chapter 3 of Medpac’s March 2012 report).

With few exceptions, private insurers tend to be relatively weak when bargaining with hospitals, so that hospitals can extract from them prices substantially in excess of the full cost of treating privately insured patients, with profit margins sometimes in excess of 20 percent.

Finally, uninsured patients — also called “self-pay” patients — have effectively no market power at all vis-à-vis hospitals, especially when they are seriously ill and in acute need of care. Therefore, in principle, they can be charged the highly inflated list prices in the hospitals’ chargemasters, an industry term for the large list of all charges for services and materials. These prices tend to be more than twice as high as those paid by private insurers.

To be sure, if uninsured patients are poor in income and assets, they usually are granted steep discounts off the list prices in the chargemaster. On the other hand, if uninsured patients are suspected of having good incomes and assets, then some hospitals bill them the full list prices in the chargemaster and hound them for these prices, often through bill collectors and even the courts.

It is noteworthy that in its critique of Mr. Brill’s work, the association statement is completely silent on this central issue of his report. A fair question one may ask leaders of the industry is this:

Even if one grants that American hospitals must juggle their financing in the midst of a sea of price discrimination, should uninsured, sick, middle-class Americans serve as the proper tax base from which to recoup the negative margins imposed on hospitals by some payers, notably by public payers?

My answer is “No,” and I am proud to say that when luck put in my way an opportunity to act on that view, I did.

In the fall of 2007, Gov. Jon Corzine of New Jersey appointed me as chairman of his New Jersey Commission on Rationalizing Health Care Resources. On a ride to the airport at that time I learned that the driver and his family did not have health insurance. The driver’s 3-year-old boy had had pus coming out of a swollen eye the week before, and the bill for one test and the prescription of a cream at the emergency room of the local hospital came to more than $1,000.

By circuitous routes I managed to get that bill reduced to $80; but I did not leave it at that. As chairman of the commission, I put hospital pricing for the uninsured on the commission’s agenda.

After some deliberation, the commission recommended initially that the New Jersey government limit the maximum prices that hospitals can charge an uninsured state resident to what private insurers pay for the services in question. But because the price of any given service paid hospitals or doctors by a private insurer in New Jersey can vary by a factor of three or more across the state (see Chapter 6 of the commission’s final report), the commission eventually recommended as a more practical approach to peg the maximum allowable prices charged uninsured state residents to what Medicare pays (see Chapter 11 of the report).

Five months after the commission filed its final report, Governor Corzine introduced and New Jersey’s State Assembly passed Assembly Bill No. 2609. It limits the maximum allowable price that can be charged to uninsured New Jersey residents with incomes up to 500 percent of the federal poverty level to what Medicare pays plus 15 percent, terms the governor’s office had negotiated with New Jersey’s hospital industry.
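As a minimal sketch of the bill’s pricing rule, with the Medicare payment figure below being a hypothetical chosen only for illustration:

# Assembly Bill 2609's cap: what Medicare pays, plus 15 percent, for
# uninsured New Jersey residents with incomes up to 500 percent of the
# federal poverty level. The Medicare rate here is a made-up example.
medicare_payment = 1_000.00
max_allowable_charge = medicare_payment * 1.15
print(f"${max_allowable_charge:,.2f}")  # -> $1,150.00, versus chargemaster list prices that often run far higher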

I wouldn’t be surprised if the New Jersey hospital industry was cross with me and the commission for our role in the passage of Assembly Bill 2609. The commission took the view that it was helping to protect the industry’s image from some of its members’ worst instincts.

In that spirit, I invite the American Hospital Association to join me in urging federal lawmakers to pass a similar law for the nation. Evidently the mere guidelines on hospital pricing that the association published in 2004 have not been enough.

Indeed, in 2009 I had urged the designers of the Affordable Care Act to include such a provision in their bill — alas, to no avail. Courage to impose it on the industry had long been depleted.

Article source: http://economix.blogs.nytimes.com/2013/03/15/what-hospitals-charge-the-uninsured/?partner=rss&emc=rss

Today’s Economist: Simon Johnson: Big Banks Have a Big Problem

Simon Johnson, former chief economist of the International Monetary Fund, is the Ronald A. Kurtz Professor of Entrepreneurship at the M.I.T. Sloan School of Management and co-author of “White House Burning: The Founding Fathers, Our National Debt, and Why It Matters to You.”

The largest banks in the United States face a serious political problem. There has been an outbreak of clear thinking among officials and politicians who increasingly agree that too-big-to-fail is not a good arrangement for the financial sector.

Six banks face the prospect of meaningful constraints on their size: JPMorgan Chase, Bank of America, Citigroup, Wells Fargo, Goldman Sachs and Morgan Stanley. They are fighting back with lobbying dollars in the usual fashion – but in the last electoral cycle they went heavily for Mitt Romney (not elected) and against Elizabeth Warren and Sherrod Brown for the Senate (both elected), so this element of their strategy is hardly prospering.

What the megabanks really need are some arguments that make sense. There are three positions that attract them: the Old Wall Street View, the New View and the New New View. But none of these holds water; the intellectual case for global megabanks at their current scale is crumbling.

The Old Wall Street view is that there is nothing to see – big banks know what they are doing and pose no threat to the economy. This position was in complete ascendancy before 2007 but is seldom heard today. The financial crisis, of course, made this view more than a little hard to defend.

And any attempt to resurrect this position was completely sunk by the “London Whale” losses suffered by JPMorgan Chase last year. We’ll learn more in the hearing on Friday called by Senator Carl Levin of Michigan, although the chief executive, Jamie Dimon, was not asked to testify. Senator Levin’s Permanent Subcommittee on Investigations is also expected to issue a report.

All these details about the London Whale will reinforce the view that even one of our supposedly great risk managers, Mr. Dimon, can lose control of what is happening in his business – on a scale that can matter for overall profits and, potentially, for the economy.

The largest banks have become too complex to manage. And when they fail, the consequences are huge for all of us. This point is completely nailed by Anat Admati and Martin Hellwig’s new book, “The Bankers’ New Clothes.”

The New Wall Street view is that there is no too-big-to-fail subsidy. Or perhaps there is a subsidy but no one can measure it. Or perhaps someone can measure it, but not the people who have done so. If the first view ended in tragedy – the crisis and huge job losses of 2008 – this New View is simple comedy.

My colleagues at Bloomberg View have written a series of devastating editorials explaining for a broad audience the nature and likely scale of subsidies that very large banks receive. You should read the series, starting with the latest contribution this week, which includes useful links to previous salvos on both sides.

The reaction of the industry is running roughly parallel to how church officials originally responded to Galileo’s work. No doubt the bankers in question would like to compel Bloomberg View to renounce its opinions.

Fortunately, we have come a long way since 1633.

And the banks’ lobbyists are making an uncharacteristic mistake by digging in with this extreme and indefensible view. Ask people in the credit markets if they think lenders to the biggest banks have some degree of downside protection afforded by the government (including the Federal Reserve). I have never heard any reasonable investor deny this reality in private.

The big banks are well down the road to acknowledging that there may be a subsidy and the question is how to measure it – and how to assess all the available evidence.

All data are complex. The Federal Reserve changes monetary policy on the basis of numbers that are hard to know precisely. What exactly is “core inflation” or the “natural rate of unemployment” at this moment?

And whenever people try to bedazzle you with econometrics, go back to the simple numbers and see a powerful story: megabanks have a funding advantage, if you think about it properly and compare apples to apples. See, for example, this column by David Reilly in The Wall Street Journal this week. (Mr. Reilly endorses a position similar to one I have advanced with Sheila Bair and other colleagues.)

The New New Wall Street view is that too-big-to-fail exists but that Dodd-Frank will bring it under control. This argument remains the best hope for global megabanks, but even this perspective is now under severe pressure.

This position has some powerful adherents, including Ben Bernanke, chairman of the Federal Reserve, and Jerome Powell, a member of its Board of Governors. (I wrote about Mr. Powell’s views in this space last week and about Mr. Bernanke the week before).

The problem is that Mr. Bernanke clearly articulated, during the Dodd-Frank debate, that the big financial companies would be pressed to become smaller of their own accord.

Three years later, there is no sign of that actually happening.

And now come Richard Fisher and Harvey Rosenblum, knocking hard on the gates of Washington with an op-ed article in The Wall Street Journal on Monday, “How to Shrink the ‘Too-Big-to-Fail’ Banks.”

Mr. Fisher is a successful private-sector investor who now heads the Federal Reserve Bank of Dallas, where Mr. Rosenblum is also a senior official.

From the heart of the Federal Reserve System – and deeply steeped in private-sector experience – comes a clear statement that too-big-to-fail exists and Dodd-Frank did not end it.

Attorney General Eric Holder’s testimony to Congress last week also confirmed the latter point: some banks are so big that the Department of Justice is afraid to bring legal charges against them, for fear of how that would affect the economy. Senator Warren of Massachusetts continues to press this issue relentlessly and very effectively.

You should also listen to this Bloomberg radio interview with Arthur Levitt, who acknowledges “too big to jail” about two-thirds of the way through. Mr. Levitt, a former chairman of the Securities and Exchange Commission, is currently an adviser to Goldman Sachs, so I expect he’ll have to walk this statement back.

Most worrying for the big banks, Mr. Fisher is more broadly on the right of the political spectrum. On Friday, he will address the Conservative Political Action Conference. I’m not sure a senior Fed official has ever done this before.

Mr. Fisher is not only entirely correct. He is also on a completely convergent path with Senator Brown of Ohio. In a fascinating new development on Wednesday, Bloomberg News reported more details on the Fisher-Rosenblum push for a hard size cap on big banks, which would force JPMorgan and Bank of America, for example, to become significantly smaller.

The executives who live well on subsidies at big banks should be very afraid.

The Fed cannot long resist the pressure to measure and assess too-big-to-fail subsidies. The Government Accountability Office is in the process of doing exactly this, at the request of Senators Brown and David Vitter, Republican of Louisiana. As Senator Vitter put it, “Despite the claims made by the paid cheerleaders of the megabanks, Too Big To Fail is alive and well, and the banks receive taxpayer subsidies.”

He went on to say: “Chairman Bernanke knows it, the market knows it, and the taxpayers know it,” adding that he thought the G.A.O. study would “get to the bottom of” the facts.

We’ll get a range of reasonable estimates. And they will all suggest the continuing presence of subsidies for financial companies that are perceived as too big to fail.

And then Mr. Fisher, Senator Brown and other sensible people can help us move toward policies that will impose binding size constraints on our largest financial institutions.

Article source: http://economix.blogs.nytimes.com/2013/03/14/big-banks-have-a-big-problem/?partner=rss&emc=rss

Today’s Economist: Casey B. Mulligan: Hidden Costs of the Minimum Wage

Casey B. Mulligan is an economics professor at the University of Chicago. He is the author of “The Redistribution Recession: How Labor Market Distortions Contracted the Economy.”

The current federal minimum wage of $7.25 an hour is creating increasing economic damage that needs to be weighed against the benefits it might offer the poor.

Democrats are now proposing to increase the federal minimum wage to $9 an hour. News organizations have repeatedly noted that economists do not agree on the employment effects of historical minimum-wage changes (the more recent federal changes in 2007, 2008 and 2009 have not yet been studied enough for us to agree or disagree on results specific to those episodes) and do not agree on whether minimum wage increases confer benefits on the poor.

That doesn’t mean that we economists disagree on every aspect of the minimum wage. We agree that minimum wages do some economic damage, although reasonable economists sometimes believe that the damage can be offset and even outweighed by benefits.

More important, we agree that the extent of that damage increases with the gap between the minimum wage and the market wage that would prevail without the minimum. A $10 minimum wage does less damage in an economy in which market wages would have been $9 than it would in an economy in which market wages would have been $2.

Moreover, elevating the wage $2 above the market does more than twice the damage of elevating the wage $1 above the market. (Employers can more easily adjust to the first dollar by asking employees to take more responsibility or taking steps to reduce turnover, steps that get progressively harder.) That’s why economists who favor small minimum wage increases do not call for, say, a $100 minimum wage, because at that point the damage would far outweigh the benefits.
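To make the convexity concrete, consider a textbook sketch; the linear labor demand curve is my simplifying assumption, not anything asserted in the post. If employment falls in proportion to the gap between the minimum wage and the market wage, the lost surplus is the familiar triangle

\text{damage} \approx \tfrac{1}{2}\, b \,(w_{\min} - w^{*})^{2}

where w_min is the minimum wage, w* the market wage and b the assumed slope of labor demand. A $2 gap therefore does four times, not merely twice, the damage of a $1 gap, consistent with the paragraph above.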

Market wages normally tend to increase over time with inflation and as workers become more productive. As long as the minimum wage is a fixed dollar amount, the tendency for market wages to increase over time means that the economic damage from the minimum wage is shrinking. That’s one reason that economists who see benefits in minimum wages would like to see them indexed to inflation, allowing the minimum wage to increase automatically as the economic damage falls.
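As a minimal sketch of how such indexation works, with made-up consumer price index values rather than actual Bureau of Labor Statistics data:

# Hypothetical CPI indexation of the minimum wage. The index values are
# illustrative assumptions, not actual BLS figures.
def indexed_minimum_wage(base_wage, cpi_at_enactment, cpi_current):
    """Scale the statutory minimum by realized consumer-price growth."""
    return base_wage * (cpi_current / cpi_at_enactment)

new_wage = indexed_minimum_wage(7.25, 232.0, 240.0)
print(f"{new_wage:.2f}")  # -> 7.50, after about 3.4 percent cumulative inflation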

But these are not normal times. The least-skilled workers are seeing their wages fall over time, largely because they are out of work and failing to acquire the skills that come with working. Moreover, the new health care regulations going into effect in January are expected to reduce cash wages: many employers of low-skill workers will be hit with fines of about $3,000 per employee per year; the law mandates new fringe benefits for other employers; and low-skill workers will have to compete for the part-time jobs that are a popular loophole in the new legislation. (The minimum wage law restricts flexibility on cash wages by establishing a floor, but makes no rule on fringe benefits.)

To keep the damage from the federal minimum wage constant, the minimum wage needs not an increase but an automatic reduction over the next couple of years, so that it stays in parallel with market wages.


This post has been revised to reflect the following correction:

Correction: March 13, 2013

An earlier version of this post misstated the current federal minimum wage. It is $7.25 an hour, not $7.55.

Article source: http://economix.blogs.nytimes.com/2013/03/13/hidden-costs-of-the-minimum-wage/?partner=rss&emc=rss

Today’s Economist: Simon Johnson: Sheila Bair Could Fill a Fed Vacuum in Bank Supervision

Simon Johnson, former chief economist of the International Monetary Fund, is the Ronald A. Kurtz Professor of Entrepreneurship at the M.I.T. Sloan School of Management and co-author of “White House Burning: The Founding Fathers, Our National Debt, and Why It Matters to You.”

President Obama should nominate Sheila Bair as vice chairwoman for supervision at the Federal Reserve. This position was created by the Dodd-Frank financial reform legislation (Section 1108 in Title XI) but has gone unfilled for nearly three years – clearly not in line with the intentions of legislators, who even specified that this official should testify before Congress on a semiannual basis.

Ms. Bair is an experienced regulator, the former chairwoman of the Federal Deposit Insurance Corporation and the author of “Bull by the Horns,” an excellent book on what went wrong with the financial system and how to fix it. She currently works as an effective advocate for financial reform as a senior adviser at the Pew Charitable Trusts, and she founded and is chairwoman of the Systemic Risk Council (I am a member of this council).

No one has stronger bipartisan support and a better working relationship with reasonable people across Washington. There is now a potential vacancy on the Fed board, as Elizabeth Duke’s term as governor expired more than a year ago; she continues to serve until a successor is appointed. The reformist wing at the Fed desperately needs serious reinforcement.

And the Federal Reserve itself faces a broader impending crisis of legitimacy if it fails to decisively confront the problem of too big to fail. Anyone who believes in the importance of an independent central bank should work hard to force the Fed to act decisively on big banks.

Since the fall, Daniel Tarullo, an academic expert on banking and currently the lead Fed governor for financial regulation, has given some encouraging speeches, including a discussion of the need to impose effective size caps on the largest banks. He is also known to be in favor of stronger capital requirements than some of his Fed colleagues. And he is much more likely to push for bank holding companies to finance themselves with enough equity and long-term subordinated debt to serve as a loss-absorber in the case of financial failure.

But Mr. Tarullo has not yet made much discernible progress – and there is a real danger that the Fed will slip into a wait-and-see policy that will lead nowhere.

When questioned by Senator Elizabeth Warren, Democrat of Massachusetts, in a recent hearing, Ben Bernanke, the Fed chairman, was vague and unconvincing on the importance of additional measures to reduce the dangers posed by “too big to fail” banks.

And I was further discouraged on Monday by the wording of a speech by Jerome Powell, a relatively new member of the Fed’s Board of Governors. Mr. Powell’s line is that while “success is not assured” for existing measures that aim to end “too big to fail,” we should merely study additional steps (such as size caps; see Pages 12-13 of his speech).

Mr. Powell did not deny the existence of too-big-to-fail subsidies – and this puts important distance between his views and those expressed, for example, by people at JPMorgan Chase. But he did not suggest that the Fed should measure the subsidies these institutions receive – or the dangers that they pose. In effect he and his colleagues are saying, “Trust us, and we’ll continue to provide you with vague answers and promises of future progress.”

There is a more general level of vagueness in Mr. Powell’s speech that is deeply disconcerting (and I felt the same way about Mr. Bernanke’s exchange with Senator Warren). When will the Fed board decide that the “too big to fail” reform project is insufficient and on what basis? It’s already nearly three years since the passage of Dodd-Frank, yet the implicit subsidies and general unfair competitive advantages of megabanks show no signs of receding.

Will the “living will” process – in which banks are supposed to present plans for their own demise in the event of big losses – be used to bring effective pressure to bear on the big banks, as Thomas Hoenig of the Federal Deposit Insurance Corporation proposes? There is no sign of this in Mr. Powell’s view of the world, despite the fact that even William Dudley, the president of the Federal Reserve Bank of New York, views the first round of living wills (submitted last year) as inadequate.

In fact, global megabanks as currently constituted could in no way prepare a credible living will; they are simply too complex for their own management to understand – as demonstrated by JPMorgan Chase’s losses of more than $6 billion in the so-called London Whale case. The materials involved in a full disclosure would fill a small college library, and most of them would prove irrelevant once liquidation started. And any such document can immediately become out of date; a day of intense derivative trading can significantly change the risk profile of a big company, and the nature of the systemic risks that it poses.

Mr. Powell is an experienced financial sector executive and, behind closed doors, most such people are candid about the problems of megabanks. I’m sure that Mr. Powell’s intentions are good, but perhaps inadvertently he is siding with industry people who seek to use delaying tactics at every opportunity.

For example, Mr. Powell referred to the resolution of global megabanks without sufficiently emphasizing the inherent difficulties of cross-border resolution (see Pages 9-10 of his speech). Again, I am confident that Mr. Powell is fully informed about the lack of progress in creating an integrated resolution regime between Britain and the euro zone, as well as the serious difficulties within the euro zone itself.

Citigroup brags about having 200 million customer accounts in 160 countries and processing $3 trillion in transactions every day. How exactly does Mr. Powell or anyone else propose to unwind that when the cross-border issues are profoundly complicated and it’s every regulator for himself? Citigroup, after all, falls into a state of insolvency roughly every decade.

As for the cooperation between the F.D.I.C. and the Bank of England, to which Mr. Powell refers (again, see Pages 9-10), this is a valiant effort. But I have talked in detail with Paul Tucker, the relevant deputy governor at the Bank of England (including at the public meeting of the Systemic Resolution Advisory Committee of the F.D.I.C. in December). The Bank of England has a fundamentally different approach to bank resolution than that of the F.D.I.C. Both sides, by statute and under political pressure, must keep their options open.

In any crisis, national interest comes first, which means the potential for an unseemly scramble by regulators to grab assets. This is exactly how markets become destabilized.

Mr. Powell refers (see Page 6) to a “public simulation of the failure of a large financial institution” under the orderly liquidation authority, an exercise he helped design in October 2011. And he draws encouragement from that experience, which refers to the simulation organized by The Economist at its Buttonwood conference. But according to top-level participants in that exercise, with whom I have spoken, the cross-border piece was designed to be trivial, with the relevant regulators presumed willing and able to cooperate, because the event had to fit within a two-hour time slot. The inherent complexity of modern cross-border banking was simply assumed away in the interest of making for good viewing.

This is not a serious way to think about financial crises.

If Mr. Powell were serious about ending too big to fail, he would express in public his support for efforts to measure the subsidies that these financial institutions continue to receive. The Fed is perfectly capable of calculating and publishing numbers that would be informative regarding whether these subsidies are rising or falling.

As I mentioned here last week, in the early 2000s the Fed staff calculated the federal subsidies to Fannie Mae and Freddie Mac at around 40 basis points (I cited a 2003 paper and a 2004 speech by Alan Greenspan, then the Fed chairman). Why not do analogous calculations for today’s too-big-to-fail financial institutions? I understand that Mr. Powell is not in a position to tell the Fed staff what to do, but calling for more precise measurement and public reporting on subsidies is hardly expressing a radical thought.
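To see what a subsidy measured in basis points implies in dollars, here is a back-of-envelope sketch; the balance-sheet figure is hypothetical, not a number from the Fed staff work cited above:

# One basis point is 0.01 percentage point of funding cost. A 40-basis-point
# advantage on an assumed $2 trillion of funded liabilities:
SUBSIDY_BPS = 40
liabilities = 2_000_000_000_000            # hypothetical, for illustration only
annual_subsidy = liabilities * SUBSIDY_BPS / 10_000
print(f"${annual_subsidy:,.0f} per year")  # -> $8,000,000,000 per year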

Any excuse that such subsidies are hard to measure (which is what I hear from the industry, not from Mr. Powell) is inherently lame. In fact, it would be laughable if that were ever to come from the Fed. Is inflation or the “natural rate” of unemployment easy to measure? The Fed is in the business of filtering out noise and understanding fundamentals.

Of course, the Fed did a very bad job across almost all dimensions of its mandate before the crisis of 2008. That is exactly why its legitimacy is called into question. It is baffling that so many Fed insiders refuse to understand how another financial crisis, centered on unfettered too-big-to-fail financial companies, would undermine the fragile political consensus that currently supports central-bank independence.

Broader political momentum is most definitely shifting toward confronting the power of very large financial institutions. On Tuesday, I was chairman of an event on “too big to fail” at the Peterson Institute for International Economics with Sheila Bair; Senator Sherrod Brown, Democrat of Ohio; and Jon Huntsman, the former governor of Utah and Republican presidential candidate. (You can watch the video on the institute’s Web site.)

Senator Brown made a strong case, from the left, for constraining the size of very large banks. Mr. Huntsman made a convergent case, from a conservative perspective, that measuring and ending the subsidies these financial institutions receive would maximize economic growth. Ms. Bair provided the hands-on, detailed view of how to move our current rules decisively and appropriately in the right direction.

Yet on Wednesday, Attorney General Eric Holder acknowledged that some of the largest financial institutions may have become too big to prosecute. “I am concerned that the size of some of these institutions becomes so large that it does become difficult for us to prosecute them when we are hit with indications that if you do prosecute, if you do bring a criminal charge, it will have a negative impact on the national economy, perhaps even the world economy,” he said in testimony to the Senate Judiciary Committee, reported on thehill.com blog. “And I think that is a function of the fact that some of these institutions have become too large.”

Still, the Obama administration says it wants to promote women to positions of responsibility. Ms. Bair is not just the best woman for the job of supervision at the Fed; she is by far the best-qualified person. Is this White House serious about financial reform, or is it just paying lip service?

Article source: http://economix.blogs.nytimes.com/2013/03/07/filling-a-fed-vacuum-in-bank-supervision/?partner=rss&emc=rss

Today’s Economist: Uwe E. Reinhardt: Measuring the ‘Quality’ of Health Care

Uwe E. Reinhardt is an economics professor at Princeton. He has some financial interests in the health care field.

“In his writings, an Italian sage says that the best is the enemy of the good,” wrote Voltaire. We have updated that to the common adage, “Let not the perfect become the enemy of the good!”

This dictum came to mind as I read the responses of various Doubting Thomases to my previous post on the quality of health care under the two Medicare options: traditional Medicare and Medicare Advantage.

These readers appear to harbor genuine doubt that quality in health care can ever be properly defined and measured. But what is the alternative — just relying on anecdotes and word of mouth, or the assurances from health care providers that they provide the highest quality of health care in the world?

It is, to be sure, challenging to measure the quality of any human-service sector, be it health care, education, the administration of the law or even corporate management. That is why anecdotes and word of mouth remain important signals that attract individuals to, or repel them from, particular products or institutions.

But flight once seemed impossible, too, perhaps even after the Wright brothers’ first flight. “No flying machine will ever fly from New York to Paris,” Orville Wright famously said, because “no known motor can run at the requisite speed for four days without stopping.” Wright also offered the thought that “if we worked on the assumption that what is accepted as true really is true, then there would be little hope for advance.”

The large and growing cadre of clinicians and measurement scientists engaged in measuring quality in health care can find inspiration in aviation. They persist, and they have registered much more progress in recent decades than might be imagined — much more, for example, than has been achieved in other human-services sectors, notably education, not to mention what we call the administration of “justice.”

To appreciate the challenge posed by health care, let us review the huge terrain within which quality in health care can be monitored, an issue I touched upon two years ago in connection with “pay for performance.”

In that post, I presented a map of that terrain, reproduced here in modified form to highlight the three distinct though connected production processes in health care, as we economists put it:

(a) the production of health care (the gray area)
(b) the production of health (the blue area)
(c) the production of human well-being (the pink area)

The ambition of measurement science devoted to quality in health care is to develop reliable and operational measures to monitor each of these production processes. It is a quest that will last decades and, admittedly, has only just begun.

The quality of health care production has naturally attracted most attention.

In the health care production process, quality can be monitored along several facets:

• The characteristics of the purchased inputs used in production of health care — e.g., the training of health personnel, the sophistication of the equipment supporting health professionals or the degree to which the architecture of facilities encourages or hinders patient-centered health care;

• The structure within which health care production takes place — e.g., the degree to which the production of health care is clinically integrated, including the electronic information technology that enhances or hinders that integration;

• The treatment processes for particular medical conditions — e.g., the degree of adherence to known best clinical practices (expressed in practice guidelines and clinical pathways derived from them), processes that prevent hospital-generated infections and avoidable re-admissions, and so on;

• The impact of medical interventions on the patients’ health and well-being in the short and long run, often referred to simply as “outcomes” — e.g., survival rates by time periods, functional status, pain and so on;

• And, very important, satisfaction of patients with the treatment processes they have experienced, measured by means of surveys, ideally not administered by providers themselves.

This particular division of quality metrics goes back to a classic paper on the quality of health care published in 1966 by Dr. Avedis Donabedian, a distinguished physician and a towering figure in the field of quality measurement who died in 2000.

A wise thing to say in casual conversation is that “outcome” is all that matters in measuring the quality of health care. Presumably, “outcome” includes clinical outcome and patient satisfaction. Experts in quality measurement agree in principle. In practice, however, they warn that “outcome” is a complex metric.

First, clinical outcome usually is multidimensional. It may even involve a trade-off between longevity and quality of life.

Second, as is shown in the next chart, which enlarges the health-production process, health care proper is merely an input in the production of health. To measure strictly the impact of a medical intervention on the patient’s health, one has to control statistically for all of these other health-producing inputs, including the patient’s compliance with, say, prescribed drug therapy, a perennial problem in health care.

Health care proper contributes two kinds of inputs to the production of health. There may be intervention in the patient’s physiology – e.g., surgery, drug therapy, physical therapy or other direct interventions. But high-quality primary care also includes management-consulting services designed to help or persuade patients to manage their own health better — e.g., counseling on controlling blood pressure through methods besides drug therapy, nutrition, smoking cessation, weight management and so on. Modern metrics of quality monitoring always include a good number of measures of these consulting services.

Efforts to hold health care providers formally accountable for the quality of their care are rarely one-metric systems. Instead, they resemble a final examination in a college course, with scores on many different questions, each with a relative weight, which are then totaled as a weighted sum to produce the final overall grade.
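In the spirit of that exam analogy, here is a minimal sketch of a weighted composite; the metric names and weights are hypothetical, not drawn from any actual insurer’s scoring system:

# Each metric is scored 0-100; the weights are illustrative and sum to 1.0.
quality_scores = {
    "guideline_adherence": 92,
    "readmission_avoidance": 78,
    "infection_control": 85,
    "patient_satisfaction": 88,
}
weights = {
    "guideline_adherence": 0.35,
    "readmission_avoidance": 0.25,
    "infection_control": 0.20,
    "patient_satisfaction": 0.20,
}
composite = sum(quality_scores[m] * weights[m] for m in quality_scores)
print(f"Weighted composite quality score: {composite:.1f}")  # -> 86.3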

The quality monitoring I have seen from private insurers — e.g., Wellpoint Inc. or Massachusetts Blue Cross Blue Shield, to name but two — usually includes scores on all of the facets of quality enumerated above.

Ideally, it is these weighted sums that should be used in the kind of comparative analyses I mentioned in previous posts, rather than just hospital re-admissions. So far, these weighted aggregate measures have not been readily available to researchers — hence their reliance on single metrics on which data are available. One must hope that better data will soon be made available to researchers.

Article source: http://economix.blogs.nytimes.com/2013/02/01/measuring-the-quality-of-health-care/?partner=rss&emc=rss

Today’s Economist: Casey B. Mulligan: The Health Care Law and Retirement Savings

Casey B. Mulligan is an economics professor at the University of Chicago. He is the author of “The Redistribution Recession: How Labor Market Distortions Contracted the Economy.”

Because of its definition of affordability, beginning next year the Affordable Care Act may affect retirement savings.

Employer contributions to employee pension plans are exempt from payroll and personal income taxes at the time that they are made, because the employer contributions are not officially considered part of the employee’s wages or salary (employer health insurance contributions are treated much the same way). The contributions are taxed when withdrawn (typically when the worker has retired), at a rate determined by the retiree’s personal income tax situation.

Employees are sometimes advised to save for retirement in this way in part because the interest, dividends and capital gains accrue without repeated taxation. In addition, people sometimes expect their tax brackets to be lower when retired than they are when they are working.
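A stylized sketch of those two advantages, comparing a single $1,000 contribution saved inside and outside a pension plan; the tax rates and the 5 percent return are my assumptions, not figures from the post:

# Taxable route: pay income tax up front, then pay tax on returns each year.
# Pension route: returns compound untaxed; tax is due once, on withdrawal.
r, years = 0.05, 20
t_work, t_retire = 0.25, 0.15   # assumed marginal rates while working / retired

taxable = 1_000 * (1 - t_work) * (1 + r * (1 - t_work)) ** years
deferred = 1_000 * ((1 + r) ** years) * (1 - t_retire)

print(f"taxable: ${taxable:,.0f}  tax-deferred: ${deferred:,.0f}")
# -> taxable: $1,566  tax-deferred: $2,255; both effects favor the pension route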

These well-understood tax benefits of pension plans will change a year from now if the act is implemented as planned. Under the act, wages and salaries of people receiving health insurance in the law’s new “insurance exchanges” will be subject to an additional implicit tax, because wages and salaries will determine how much a person has to pay for health insurance.

While much about the Affordable Care Act is still being digested by economists, they have long recognized that high marginal tax rates lead to fringe benefit creation. And the Congressional Budget Office has concluded that the act will raise marginal tax rates.

Were an employer to reduce wages and salaries (or fail to increase them) and compensate employees by introducing an employer-matching pension plan, the employees would likely benefit by receiving additional government assistance with their health-insurance costs. The pension contributions will add to the workers’ incomes during retirement, but the income of elderly people does not determine health-insurance eligibility to the same degree, because the elderly participate in Medicare, most of which is not means-tested.

Take, for example, a person whose four-member household would earn $95,000 a year if his employer were not making contributions to a pension plan or did not offer one. He would be ineligible for any premium assistance under the Affordable Care Act because his family income would be considered to be about 400 percent of the poverty line.

If instead the employer made a $4,000 contribution to a pension plan and reduced the employee’s salary so that household income was $91,000, the employee would save the personal income and payroll tax on the $4,000 and would become eligible for about $2,600 worth of health-insurance premium assistance under the act. (The employer would come out ahead here, too, by reducing its payroll tax obligations).
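A minimal sketch of the arithmetic in this example; the 2013 poverty guideline for a family of four (about $23,550, so that 400 percent of poverty is roughly $94,200) is my assumption, and the flat $2,600 subsidy is simply the figure from the example above:

# $95,000 salary-only versus $91,000 salary plus a $4,000 pension
# contribution, for a four-person household.
POVERTY_LINE = 23_550                      # assumed 2013 guideline, family of four
CUTOFF = 4 * POVERTY_LINE                  # 400 percent of poverty, ~$94,200

def premium_assistance(income):
    """Stylized: no assistance above the cutoff, ~$2,600 just below it."""
    return 0 if income > CUTOFF else 2_600

print(premium_assistance(95_000))          # -> 0
print(premium_assistance(95_000 - 4_000))  # -> 2600, on top of the tax deferral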

Even though the Affordable Care Act is known as a health-insurance law, in effect it could be paying for a large portion of employer contributions to pension plans. This has the potential of changing retirement savings and the relative living standards of older and working-age people.

Article source: http://economix.blogs.nytimes.com/2013/01/30/the-health-care-law-and-retirement-savings/?partner=rss&emc=rss

Today’s Economist: Laura D’Andrea Tyson: Why the Unemployment Rate Is So High

Laura D’Andrea Tyson is a professor at the Haas School of Business at the University of California, Berkeley, and served as chairwoman of the Council of Economic Advisers under President Bill Clinton.

According to the last jobs report for 2012, the United States labor market continues to recover at a steady but modest pace despite a global slowdown, Hurricane Sandy and anxieties about future fiscal policy. Private payrolls increased by two million in 2012, and the unemployment rate fell by 0.7 percentage point to 7.8 percent. Over the last 34 months, the economy has added 5.8 million jobs.

But that leaves a shortfall of four million jobs relative to the 2007 employment peak. And the jobs gap, the number of jobs needed to return to that peak and to cover the growth in the labor force since then, is stuck at around 11 million. The labor market is still far from full recovery, with a tremendous waste of human talent and a personal toll on unemployed workers and their families.
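As a minimal sketch of that decomposition; the seven-million figure for labor-force growth is implied by the 11 million gap and the four million shortfall, not stated directly in the post:

# Jobs-gap arithmetic from the figures above.
shortfall_vs_2007_peak = 4_000_000
labor_force_growth = 7_000_000         # implied: 11 million minus 4 million
jobs_gap = shortfall_vs_2007_peak + labor_force_growth
print(f"{jobs_gap:,}")                 # -> 11,000,000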

This year is likely to be more of the same, as the deal on the fiscal cliff — the American Taxpayer Relief Act — will take about 0.4 to 0.6 percentage point off the economy’s growth rate.

Additional cuts in government spending later this year, above those already emanating from the cap on discretionary spending, would further restrain job creation. Proven policies to increase aggregate spending and near-term job growth, like the continuation of payroll tax relief and infrastructure investment, appear to be off the table. That’s a mistake, because weak demand and slow growth of gross domestic product are the primary factors behind the tepid pace of job creation.

Despite anecdotes about how employers cannot find workers with the skills they need, there is little evidence that the unemployment rate remains elevated because of mismatches between the skill requirements of available jobs and the skills of the unemployed.

When the recession hit in 2008, unemployment rates soared in every industry. As usual during recessions, mismatches between employer needs and worker skills also increased temporarily, reflecting greater churn in the labor market as workers were forced to move across industries and occupations.

But industrial and occupational mismatch measures are now back to their prerecession levels, indicating that the overall unemployment rate is high because unemployment rates remain high across all industries and most skill groups, not because of a growing skills gap relative to the gap that existed before the recession.

Source: Edward P. Lazear and James Spletzer, “The United States Labor Market: Status Quo or a New Normal?” National Bureau of Economic Research, September 2012, with data from the Conference Board.

The unemployment rates for all workers at all education levels jumped during the recession and have not recovered to prerecession levels. Even before the recession, the unemployment rates for workers with a high-school education or less were much higher than those for workers with a college education or higher. And there were high vacancy rates and low unemployment rates for professional occupations, while many service and blue-collar occupations had low vacancy rates and high unemployment rates. These structural differences persist but are no larger than they were before the recession.

Increases in educational attainment levels and effective training programs would ameliorate such differences and the growing wage inequality they have generated. They would also facilitate the movement of workers among industries and occupations, making the labor market work better and reducing the structural unemployment rate from industrial and occupational mismatches.

Alas, state funds for such programs have been slashed and federal funds will probably get an additional haircut later this year, even if the debilitating cuts in the sequester are averted as part of a long-run budget deal.

Another feature of the current recovery is the long duration of unemployment for many workers. At the end of last year, 4.8 million Americans were unemployed for 27 weeks or more, and their share in the total number of unemployed workers fell to 39 percent after peaking at 45.5 percent in March 2011 and exceeding 40 percent for 31 consecutive months. The previous peak was a far lower 26 percent in 1983, at a time when the unemployment rate was about as high as it is now.

Source: Center on Budget and Policy Priorities, using data from the Bureau of Labor Statistics and the National Bureau of Economic Research.

Moreover, the number of workers who are grappling with long-term job loss is probably far larger than the official number of long-term unemployed, as it does not include 1.1 million discouraged workers who want a job but are not currently looking for work, and many of the 1.7 million workers who have joined disability rolls because they cannot find a job.

Why is the long-term unemployment problem so much more severe in this recovery? Part of the answer lies in the fact that the loss of jobs in the 2008-9 recession was more than twice as large as in previous recessions and the pace of gross domestic product growth during the recovery has been less than half the average of previous recoveries.

The relationship between the vacancy rate and the unemployment rate — the so-called Beveridge curve — suggests other forces are at work as well. When the vacancy rate rises, the unemployment rate usually falls along a path that has remained quite stable over long periods of time, including the 2001 recession and subsequent recovery.

But a recent study finds that during the current recovery the normal relationship has broken down for the long-term unemployed — the increase in the vacancy rate has produced a smaller-than-expected decline in the long-term unemployment rate. In contrast, the usual relationship between the vacancy rate and the unemployment rate has held for those unemployed for fewer than 27 weeks.

There are several reasons that the long-term unemployed are not benefiting as much as the short-term unemployed from the increase in job vacancies as the economy recovers. Many long-term unemployed may not have the qualifications required for posted job vacancies, and the longer they are unemployed, the more their skills become obsolete and their actual or perceived employability erodes. To make matters worse, the longer workers are unemployed, the more skeptical employers become about their employability and work habits. Another recent study found that the likelihood that a job applicant receives a call-back for an interview significantly decreases with the duration of his or her unemployment.

In addition, many jobs are filled through contacts and informal networks, and the longer workers are unemployed, the weaker their contacts with potential employers and the less information they have about job opportunities.

Some long-term unemployed may also be searching less intensively or may be less willing to accept job offers during this recovery, in part because good jobs are so much harder to find and because unemployment benefits last longer and are more generous than in previous recoveries.

During his first term, President Obama proposed several initiatives to reduce long-term unemployment, including more flexibility for states to use unemployment funds for training and placement programs, a tax credit to businesses to hire workers out of a job for more than six months, and an $8 billion fund to support training and job placement at community colleges.

These programs failed to win Congressional approval, and they have dropped out of the debate as Washington’s focus has shifted from job creation to debt reduction.

The economic evidence is compelling. The high unemployment rate is the result of weak demand, not structural mismatches. And the longer workers are unemployed, the more their skills, contacts and links to the labor market atrophy, the less likely they are to find a job and the more likely they are to drop out of the labor force.

As a result, what is currently a temporary long-term unemployment problem runs the risk of morphing into a permanent and costly increase in the unemployment rate and a permanent and costly decline in the economy’s potential output. That’s what the Federal Reserve is worried about. It’s too bad that more members of Congress don’t share this concern.

Article source: http://economix.blogs.nytimes.com/2013/01/11/why-the-unemployment-rate-is-so-high/?partner=rss&emc=rss

Today’s Economist: Simon Johnson: Last-Ditch Attempt to Derail Volcker Rule

Simon Johnson is the Ronald A. Kurtz Professor of Entrepreneurship at the M.I.T. Sloan School of Management and co-author of “White House Burning: The Founding Fathers, Our National Debt, and Why It Matters to You.”

In a desperate attempt to prevent implementation of the Volcker Rule, representatives of megabanks are resorting to some last-minute scare tactics. Specifically, they assert that the Volcker Rule, which is designed to reduce the risks that such banks can take, violates the international trade obligations of the United States and would offend other member nations of the Group of 20. This is false and should be brushed aside by the relevant authorities.

The Volcker Rule was adopted as part of the Dodd-Frank financial reform legislation in 2010. The legislative intent was, at the suggestion of Paul A. Volcker (the former chairman of the Federal Reserve Board of Governors), to limit the kinds of risk-taking that very large banks could undertake. In particular, the banks are supposed to be severely limited in terms of the proprietary bets that they can make, to lower the probability they can ruin themselves and inflict great damage on the rest of society. (For a primer and great insights, see this commentary by Alexis Goldstein, a leader of Occupy the S.E.C.)

The Volcker Rule is almost finished winding its way through the regulatory process, and a version should be implemented soon. But in a last-ditch attempt to block it, the United States Chamber of Commerce has sent a letter to the United States Trade Representative asserting:

The Volcker Rule is discriminatory, as foreign sovereign debt is subject to the regulation, while United States Treasury debt instruments are exempt. This creates a discord in G20 and invites foreign governments to retaliate at a time when we need those same regulators in foreign countries to support initiatives to liberalize trade in financial services. Further, U.S.T.R. should conduct a very close examination to ensure the Volcker Rule does not violate any of our trade obligations.

This statement is correct with regard to the point that there are exemptions in the current version of the Volcker Rule for banks’ holdings of United States government debt, i.e., there are fewer restrictions on their holdings of Treasury obligations than on their holdings of foreign government debt.

But the idea that this violates the spirit or letter of our international obligations is flatly wrong. Perhaps that is why the letter doesn’t point to any particular provisions of any specific trade agreements.

As a matter of basic principle, there is no violation, because there is no provision in any trade agreement that says United States banking regulators can’t protect our financial system by engaging in prudent regulation. To the contrary, nations have always been allowed to restrict what their banks can regard as safe assets, and thus effectively to limit their holdings of foreign assets.

Think of it this way. Would we want United States banking regulators to be prohibited from distinguishing between United States debt and that of Greece, Ireland, Spain or Italy?

In practice, this distinction among countries already occurs. For example, the Basel II equity capital requirements allow every country to treat the debt of other governments with some caution (although, without doubt, more caution is needed than was actually used in the past, or even than is encouraged under the new Basel III agreement).

Some Canadian officials, for example, have said that Canadian government debt should receive equal treatment with United States government debt. This is a dangerous proposal. Canada has ridden the recent commodity price boom and, to many observers, its real estate looks pricey. Do Canadian banks have enough loss-absorbing capital to weather whatever storms lie ahead – if China slows down or energy prices fall for some other reason? They had trouble in the 1990s, when commodity prices fell sharply. Why should American regulators allow our banks to take on a huge amount of Canadian risk?

Markets love a country until five seconds before they hate it. Surely we should have learned that by now, including from the European crisis.

We should continue to regard euro-zone debt with great suspicion. The euro-zone sovereign debt crisis may be over, and Greece’s bond rating was upgraded sharply by Standard & Poor’s this week. On the other hand, S.&P. and other ratings agencies have been wrong – and to a spectacular degree – in the not-too-distant past, including being overly optimistic about European sovereign debt and residential mortgages in the United States.

The Volcker Rule, and its international counterparts, like “ring fencing,” are forms of re-regulation, to be sure. Based on harsh recent experiences, countries are backing away from letting their banks and other people’s banks run unfettered around the world, taking on whatever risks they like and getting themselves into complicated legal and financial difficulties.

We need to reduce excessive and irresponsible risk taking throughout our financial system. The Volcker Rule is a significant step in the right direction. It is time for the regulators to finish the job.

Article source: http://economix.blogs.nytimes.com/2012/12/20/last-ditch-attempt-to-derail-volcker-rule/?partner=rss&emc=rss

Today’s Economist: Casey B. Mulligan: A Tale of Two Welfare States

Casey B. Mulligan is an economics professor at the University of Chicago. He is the author of “The Redistribution Recession: How Labor Market Distortions Contracted the Economy.”

In “A Tale of Two Cities,” Dickens wrote, “It was the age of wisdom, it was the age of foolishness.” The governments of the United States and Britain are embarking on different approaches to helping their poor and unemployed, and one of them may regret its policy decisions.

As recently as 2010, Britain had a complex system of antipoverty programs, including housing benefits, job seekers’ allowances and mortgage-interest assistance. With so many benefits available, many people found they could make almost as much from the combined programs as they could from working, even while any one of the benefits might not have been all that significant by itself. As Britain’s Department for Work and Pensions described it, beneficiaries remained “trapped on benefits for many years as a result.”

Beginning next month, Britain will strive to put its welfare system on a different path by unifying many programs under a single “universal credit” system, what the department describes as an “integrated working-age credit that will provide a basic allowance with additional elements for children, disability, housing and caring.” The department forecasts that its “universal credit will improve financial work incentives by ensuring that support is reduced at a consistent and managed rate as people return to work and increase their working hours and earnings.”

In the United States, the welfare system includes dozens of federal programs, enumerated by Robert Rector of the Heritage Foundation as those “providing cash, food, housing, medical care, social services, training and targeted education aid to poor and low-income Americans.” Beginning in 2014, more programs will be added and expanded by the Patient Protection and Affordable Care Act: new health-insurance premium-support programs, new cost-sharing subsidies for out-of-pocket health expenditures, financial hardship relief from the new individual mandate penalties, new subsidies for small businesses employing low-income people and expansion of Medicaid.

The Congressional Budget Office estimates that the Affordable Care Act’s means-tested subsidies and cost-sharing will implicitly add more than 20 percentage points to marginal tax rates on incomes below 400 percent of the poverty line, a range that includes a majority of families (see Page 27 of the C.B.O. report). The addition arises because the assistance phases out as family incomes increase, although a number of families will not receive the subsidies because they already get health insurance from their employer.
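A minimal sketch of how a phase-out acts as an implicit marginal tax; the dollar figures are hypothetical, chosen only to illustrate the order of magnitude:

# If earning $1,000 more costs a family $200 of means-tested assistance,
# the phase-out works like an extra 20-percentage-point marginal tax.
def implicit_marginal_rate(benefits_lost, extra_earnings):
    return benefits_lost / extra_earnings

print(implicit_marginal_rate(200, 1_000))  # -> 0.2, i.e., 20 percentage points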

These marginal tax-rate additions are on top of the marginal tax rates already in place because of personal income taxes, payroll taxes, unemployment insurance, food stamps and other taxes and means-tested government programs. In 2014, some Americans will be able to make almost as much from combined benefits as they would by working, and sometimes more.

In summary, the United States intends to move in the direction of more assistance programs and higher marginal tax rates, while Britain intends to move in the direction of fewer programs and lower marginal tax rates.

Either country, or both, may ultimately fail to carry out the new programs fully, whether by granting waivers and exceptions, refusing to administer them or rewriting the new laws. But if both do follow through, perhaps future empirical research comparing the United States and Britain will reveal which country is living in an age of wisdom and which in an age of foolishness.

Article source: http://economix.blogs.nytimes.com/2012/12/19/a-tale-of-two-welfare-states/?partner=rss&emc=rss