Research Citation Lifespan – II

In a previous post, I made the following point regarding the downside of arbitrarily placing a five-year expiration date on research citations:

Depending on how you look at it, The Five Year Rule either exacerbates or supports the “publish or perish” death march. If research work has a shelf life of five years before it’s relegated to the citation dustbin, the risk is that a researcher pressed for time will generate research that is less valuable, shallow, or perhaps even based on compromised data.

Shortly after that post, an article in The Chronicle of Higher Education caught my attention: “Fraud Scandal Fuels Debate Over Practices of Social Psychology”

The discovery that the Dutch researcher Diederik A. Stapel made up the data for dozens of research papers has shaken up the field of social psychology, fueling a discussion not just about outright fraud, but also about subtler ways of misusing research data. Such misuse can happen even unintentionally, as researchers try to make a splash with their peers—and a splash, maybe, with the news media, too.

The article goes on to say:

Mr. Stapel’s conduct certainly makes him an outlier, but there’s no doubt he was a talented mainstream player of one part of the academic-psychology game: The now-suspended professor at Tilburg University, in the Netherlands, served up a diet of snappy, contrarian results that reporters lapped up.

An outlier? How do they know that? Mr. Stapel happened to get caught, and perhaps it was the extent to which he pushed the envelope that got him caught. There is no way to know how prevalent this problem is, because those who are doing it and haven’t been caught are… well… unknown. What is remarkable is how much Mr. Stapel got away with for so long. What about the unremarkable transgressions still hidden in the grass, waiting to be discovered? Rooting out the less obvious ones would require a far more dedicated effort at validation and vetting.

The risk I pointed out regarding citation expiration driving “publish or perish” thinking is illustrated by Eric-Jan Wagenmakers, as quoted on statistician Andrew Gelman’s blog:

The field of social psychology has become very competitive, and high-impact publications are only possible for results that are really surprising. Unfortunately, most surprising hypotheses are wrong. That is, unless you test them against data you’ve created yourself.

How frustratingly dull it must be to report on something that’s successful in an unglamorous way.