Uwe E. Reinhardt is an economics professor at Princeton. He has some financial interests in the health care field.
My post last Friday explored the quality of care rendered to Medicare beneficiaries under private Medicare Advantage plans and under the traditional, government-run Medicare program.
At the end of the post, I invited readers to apprise me of any study on the subject that my own search of the surprisingly thin literature on this issue might have missed.
One reader was kind enough to alert me to such a paper, although he also had not come upon it by a general Internet search. It is a paper by Bernard Friedman, H. Joanna Jiang, Claudia A. Steiner and John Bott, titled “Likelihood of Hospital Readmission after First Discharge: Medicare Advantage vs. Fee-for-Service Patients” and published in the November 2012 issue of the health policy journal Inquiry.
The authors use a 2006 database maintained by the Agency for Healthcare Research and Quality to estimate the likelihood of a hospital readmission within 30 days of discharge. As I noted in my previous post, such readmissions have come to be regarded as one dimension of “quality,” with higher rates denoting lower quality.
Without adjusting their estimates for the age and health status of Medicare beneficiaries in the two options, the authors find a slightly lower likelihood of readmission under Medicare Advantage plans than under traditional Medicare. They note, however, that Medicare Advantage enrollees tend to be younger and less severely ill. After controlling for age and health status, enrollees in the Medicare Advantage plans are found to have “a substantially higher likelihood of readmission.”
The authors are well known and respected in the research community. Their approach is thoughtful and sophisticated and their findings persuasive. But their conclusion is the opposite of the one reached by the studies cited in my previous post.
So, to paraphrase Alexander Pope, who shall decide, when doctors (here, health services researchers) disagree, and soundest casuists doubt, like you and me?
The answer is that a single study of this sort is rarely conclusive, because it is extraordinarily difficult to tease the truth out of nonexperimental data — that is, data reported by operating entities for purposes other than the narrow purpose of a particular research study.
Medicare beneficiaries self-select into traditional Medicare or Medicare Advantage plans. They may differ systematically in characteristics that could indirectly affect readmission rates. Age and health status are two characteristics that can usually be measured and might be included in the available data set; but there may be others that are not. Researchers try as best they can to make statistical adjustments for differences in the characteristics of self-selecting beneficiaries, as the authors of all of the studies cited in my previous post did. But the adequacy of these adjustments depends on the available data. Typically, researchers acknowledge such limitations of their studies forthrightly in their reports.
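The arithmetic behind such an adjustment can be illustrated with a toy example. The numbers below are invented for illustration, not drawn from the Friedman et al. study: if Medicare Advantage enrollees skew toward less severe cases, the crude readmission rate can favor Medicare Advantage even when, within each severity stratum, its rate is the higher one, and a simple direct standardization to a common severity mix reverses the ranking.

```python
# Toy illustration of confounding by case severity (invented numbers,
# NOT the Friedman et al. data). Each entry is (patients, readmissions)
# by plan type and severity stratum.
data = {
    "MA":  {"low": (900, 90),  "high": (100, 30)},    # MA skews low-severity
    "FFS": {"low": (400, 32),  "high": (600, 150)},   # FFS skews high-severity
}

def crude_rate(plan):
    # Unadjusted readmission rate: total readmissions / total patients.
    n = sum(p for p, _ in data[plan].values())
    r = sum(r for _, r in data[plan].values())
    return r / n

# Direct standardization: weight each stratum-specific rate by the
# severity mix of the combined (MA + FFS) population.
total_by_stratum = {
    s: sum(data[plan][s][0] for plan in data) for s in ("low", "high")
}
total = sum(total_by_stratum.values())

def adjusted_rate(plan):
    return sum(
        (data[plan][s][1] / data[plan][s][0]) * (total_by_stratum[s] / total)
        for s in ("low", "high")
    )

for plan in ("MA", "FFS"):
    print(plan, "crude:", crude_rate(plan), "adjusted:", adjusted_rate(plan))
```

With these invented counts, Medicare Advantage looks better before adjustment (12 percent versus 18.2 percent) and worse after it (17 percent versus roughly 14 percent), which is the qualitative pattern the authors report. Real studies use far richer risk-adjustment models, but the direction-reversing logic is the same.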
A second point, perhaps not obvious to the uninitiated, is that a given data set can reveal different apparent “truths,” depending on the statistical methods used by researchers to tease out the truth. Dispassionate, objective researchers usually explore whether alternative statistical approaches would make a difference in their findings. On the other hand, researchers with an agenda can exploit this phenomenon to tease out of a data set the “truth” they may prefer. For that reason, scientific journals now go to great lengths to report researchers’ potential conflicts of interest, although the reported conflicts cannot and do not cover political ideology.
From which follows a third point, perhaps not obvious to the uninitiated: no one should ever take a single statistical study, or even a few, as a revelation of the truth. As I tell my students: “You can never trust a single statistical study in health policy. On the other hand, you can take the general thrust of many studies as an indication of what is likely to be true. And do check carefully who the authors are.”
All of which raises the overarching question: Given all these limitations of studies emerging from health services research, is the effort worth the money spent on it?
First of all, the total sum the United States spends each year on health services research is trivial compared with total annual health spending. In a pie chart of health spending, health services research would not be a visible slice but just a line. Probably no other sector of the economy performs as little operations research as health care does – and it shows in the sector’s performance.
For the most part, health policy in this country is based on accepted folklore, forged from the legislator’s personal experience or information brought to him or her by acquaintances or lobbyists. One can view health services research of the sort cited in this and the previous post as a sincere attempt to limit the wide and often wild terrain over which the folklore on a particular policy issue would otherwise range.
It is illuminating to compare health services research with another effort to structure information for decision making: financial accounting. Modern economies spend a fortune on it. Throughout the year, the large accounting departments of enterprises assemble data for periodic reports on the financial condition and performance of the enterprise. External auditors are engaged to check the validity of these numbers. In the end, they usually attest that whatever is reported “fairly represents the financial condition of the XYZ Company, in accordance with generally accepted accounting principles.”
Yet, for all that effort and the money spent on it, does anyone believe that a company’s “statement of financial condition” (balance sheet) as of a particular day of the year accurately summarizes its financial condition? For anyone who believes that, I would invite attention to the recent saga of Hewlett-Packard, not to mention the financial reports produced by investment banks in the past decade or so.
In fact, as someone thoroughly familiar with both financial accounting and health services research, I will offer this brash assertion: In terms of worrying over biases in their estimates and the overall sophistication and quality of their work, health services researchers tower over business accountants. Accountants cannot even state that their estimates are unbiased, because they do not even attempt to avoid bias in their estimates.
But that said, it also is surely true that the world is much better off with financial reporting, dubious as it sometimes can be. Would anyone want to rely merely on folklore to assess the performance of businesses?
Article source: http://economix.blogs.nytimes.com/2013/01/25/reader-response-medicare-options-and-quality-of-care/?partner=rss&emc=rss