April 17, 2013

Economix Blog: A History of Oopsies in Economic Studies

CATHERINE RAMPELL

As my colleague Annie Lowrey has been covering, a frenzy has erupted over errors in an influential paper by Carmen Reinhart and Kenneth Rogoff. Probably the most embarrassing was an error in a simple formula in an Excel spreadsheet.

This is hardly the first time that a big, splashy economics paper — one that appeared in an elite, peer-reviewed journal, no less — has had an embarrassing error. After all, everyone makes mistakes, including tenured economics professors. From what I’ve seen, and from my discussions with academics, the mistakes rarely result from deliberate fabrication. Typically the flubs result from human error, with the blame often placed (though rarely on the record) on the poor research assistant who did the grunt work. Which sounds like a terrible cop-out, but is actually credible; economists believe in comparative advantage, and so they often leave tasks like data entry, coding or simple regressions to undergrads or grad students.

There is a middle ground between innocent error and wholesale fraud, of course, including interpreting an ambiguous data point or result in a way that is favorable to your thesis — something not unique to economics or even the academic world.

Here is some of the prominent research whose data was questioned in recent years:

  • Emily Oster wrote her dissertation, published in 2005, about how hepatitis B skewed sex ratios at birth and was therefore responsible for the “missing women” in Asia, contra explanations from development economists like Amartya Sen. Later she published a separate paper, using different data, that rebutted her original thesis. The economist Steven Levitt praised her for publicly admitting that the paper that made her famous was wrong.
  • One of Caroline Hoxby’s most cited papers, in 2000, argued that having more school choice — which in this case basically meant more school districts — improved the quality of schools. The work used a clever proxy for the historical number of school districts: the number of streams in an area, since streams are natural boundaries around which school districts have historically been formed. Later another economist, Jesse Rothstein, wrote a paper saying he could not replicate her results, and attributed the discrepancy to how the original paper categorized what counted as a stream. A quotation from the paper: “Where Hoxby reports five larger streams in Fort Lauderdale, I counted 12, and a research assistant — working independently — counted 15.” The brouhaha over how to define a stream made it to the national press, and involved accusations of racism and sexism (Professor Hoxby is black and female, in a discipline that is still predominantly male and white).

And those are just a few prominent cases in top academic journals. How do questionable if not clearly erroneous findings make it past the gatekeepers, given the rigorous, onerous, ridiculously long peer-review process?

For the most part, economics journals do not ask anonymous peer reviewers, known as referees, to replicate results fully. The referees are there chiefly to weigh in on questions like: Is the question being asked an interesting one, and is it being answered in the smartest way possible? Did the author use the best data, controls and statistical tools available? Is there other research related to this topic that the author should be considering? What additional robustness checks should the author do? Rarely are the referees fact-checking arithmetic and Excel spreadsheet formulas.

That said, in response to controversies over mistakes, coding disputes and academics under siege who are protective of their data, some top journals like The American Economic Review now require authors to submit data and code “sufficient to permit replication” when sending in a paper. Sometimes the underlying numbers and code must be made publicly available. But there are exceptions, as sometimes the data are proprietary or very sensitive. Researchers have to get special permission to use the highly coveted Social Security records data, for example, and have to agree not to share the numbers with unauthorized parties.

Article source: http://economix.blogs.nytimes.com/2013/04/17/a-history-of-oopsies-in-economic-studies/?partner=rss&emc=rss