Research Integrity

The US Office of Research Integrity has recently published its findings in the case of Bengu Sezen, a former graduate student at Columbia University. A few years ago the chemistry blogosphere was buzzing with talk of a major research misconduct case surrounding Sezen. The findings indicate that Sezen fabricated or plagiarised data that appeared in three publications, all since retracted, and in her doctoral thesis. ChemBark and In the Pipeline have good coverage of current and past opinion on this matter.

I’ve seen it stated on both of those blogs that we have to ensure that this does not happen again, and perhaps to achieve that we all need to consider what research integrity is.

Everyone fudges results now and again, right? Fudging results isn't the same as lacking research integrity, right? It's all about scale anyway, right? Wrong, wrong and wrong.

I reviewed a manuscript last year and recommended rejection for a variety of reasons, but I recall one figure standing out. It was a plot meant to show a linear trend, and one data point was way off that trend. The authors had included the point, but their explanation of why it was off trend was limited. A few months ago I found the paper published in a marginally less prestigious but more general journal than the original submission, and I went through it, comparing the version I reviewed with the version that made it into print. It was an interesting exercise, and it highlighted in part the futility of peer review: very few of the reviewers' recommendations had been taken up. I found the plot in question and was surprised to see that it now showed a wonderfully linear trend. Great stuff, I thought; they've clearly redone that experiment and eliminated the error in the method. Nope! They'd simply removed that data point from the graph.

Now, is that fudging the data? Is that an attempt to deliberately mislead, or is it just putting a good spin on things? Is it fine because it is one data point in a large body of work, and no one is going to attempt to reproduce that work anyway?

I didn't like this approach one bit. Yes, there are times when despite our best efforts we cannot get the optimum results: our trendlines are out, our analyses slightly off, our yields that bit lower than we'd like. But that's just research, and it's part of the challenge and beauty of a good non-theoretical experiment. The authors of the paper could easily have repeated the measurement, plotted an average and given error bars (a quick sketch of this follows below); problem solved!

Now consider another scenario: a synthetic researcher works on a five-step synthesis and step 3 is tricky. In fact, so tricky that it only really works about one time in ten. Should they mention that in the publication, or just write up the successful attempt with no mention of the nine failed reactions? Is that fudging the results? People tend not to report failed experiments as a general rule, which means that a wealth of useful reaction data is lost forever in dusty lab books.
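To make that first scenario concrete, here is a minimal sketch of the honest fix. All the numbers are made up for illustration, and I've imagined a simple calibration-style measurement; the point is simply that repeats plus error bars tell the truth that deleting a point hides:

```python
# A minimal sketch (with entirely made-up calibration data) of the honest fix:
# repeat the measurement, plot the mean, and show error bars,
# rather than silently deleting the inconvenient point.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical example: three repeat absorbance readings at each concentration.
concentration = np.array([0.1, 0.2, 0.3, 0.4, 0.5])  # e.g. mol/L
repeats = np.array([
    [0.11, 0.12, 0.10],
    [0.21, 0.20, 0.22],
    [0.35, 0.29, 0.31],  # the "off-trend" point: repeats reveal its spread
    [0.41, 0.40, 0.42],
    [0.50, 0.52, 0.51],
])

mean = repeats.mean(axis=1)
std = repeats.std(axis=1, ddof=1)  # sample standard deviation of the repeats

plt.errorbar(concentration, mean, yerr=std, fmt='o', capsize=4)
plt.xlabel('Concentration / mol L$^{-1}$')
plt.ylabel('Absorbance')
plt.title('Mean of repeat measurements with error bars')
plt.savefig('calibration.png')
```

Readers and reviewers can then judge for themselves whether the scatter on that one point is noise or something interesting; nothing has been hidden.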

We have to publish our research in some format, and we have to present our data in the best possible way to get it into the best possible journal; quite literally, our careers depend on this. That's the system we currently have, for better or for worse, and it seems to encourage a bit of data fudging here and there (or the omission of contradictory results), just to get ahead. The thing is, it just isn't acceptable to do so, even on a small scale. The Sezen case is large scale, and the word might better be fraud than fudge, but consider the cumulative scope of many small acts of data manipulation across the whole literature: it dwarfs the Sezen case entirely.

Research integrity is about having the courage and patience to repeat experiments as required, with excellent lab technique; to store all data in a meaningful, well-documented format so that anyone, at any time, might audit those records; and to present data in an honest manner. It's about good file systems and excellent data curation; it's about publications where anomalous data points are interesting, not abhorrent; it's about a culture of honesty and openness among researchers. Sadly, the current system doesn't promote these values as strongly as it ought to.
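What might "meaningful and well documented" look like in practice? Here's one minimal sketch; the file names, fields and workflow are my own illustrative assumptions, not a standard, but the principle is: raw data is written once and never edited, and the who/when/how lives right next to it so anyone can audit the record later.

```python
# A minimal sketch (file names and fields are illustrative assumptions)
# of storing a measurement alongside the metadata an auditor would need.
import csv
import json
from datetime import datetime, timezone

measurements = [(0.1, 0.11), (0.2, 0.21), (0.3, 0.35), (0.4, 0.41), (0.5, 0.50)]

# Raw data: written once, never edited; anomalies are annotated, not deleted.
with open('run_042_raw.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['concentration_mol_per_L', 'absorbance'])
    writer.writerows(measurements)

# Sidecar metadata: who, when, how, and any caveats, stored next to the data.
metadata = {
    'experiment': 'UV-vis calibration, run 042',
    'operator': 'initials here',
    'recorded_utc': datetime.now(timezone.utc).isoformat(),
    'instrument': 'spectrometer ID and settings here',
    'notes': 'Point at 0.3 mol/L is off trend; cause under investigation. Not removed.',
}
with open('run_042_raw.meta.json', 'w') as f:
    json.dump(metadata, f, indent=2)
```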

One thought on “Research Integrity”

  1. I reckon it's the fact that people just don't want to spend 15+ years in the lab trying to crack a problem, like scientists of the past did, probably because they're lazy. I was conducting an experiment a while ago and a PhD student told me to remove a point from my graph because it was ruining my otherwise linear data. I found that worrying, because then the results wouldn't represent what was actually measured.

    I reckon that the FIRST thing undergrads should be taught is that it's okay to get rubbish results from an experiment, and to be curious about why. It's all part of the learning process, and they'll know that that particular experiment obviously didn't yield decent results. After all, that's how science should be done. Undergrads should be taught how real scientists arrived at their theories, and fudging results should definitely have consequences, which undergrads need to be told about from day one.

    On a side note, I think undergrads should be taught to really think about and apply the key chemical techniques they learn in labs; to really get their creative and analytical juices flowing, essentially. They should be given completely unfamiliar practical chemistry situations, something real scientists would encounter, and be asked what skills they would apply to investigate a given topic. They could even be asked what they would do if something went wrong. Just a few things to really make them use their brains.

Comments please!