It’s rare, but it happens. A very small fraction (about 4 in 10,000) of peer-reviewed manuscripts are retracted – withdrawn from their original publication. Some are retracted because of honest mistakes – an error in a modeling equation, a slip in patient data entry – and some because of misconduct, such as intentional manipulation of data.
When a manuscript is retracted, the publisher removes the paper from the website (presumably there are still print copies in existence, if the journal offers a print format). Until recently, there was no systematic way to find retracted papers or comb the data on retractions.
Now, there is: the Retraction Watch Database, which contains information on over 18,000 retracted manuscripts, including the reasons for retraction.
The folks at Retraction Watch teamed up with colleagues at Science to analyze the retraction data, and they found some interesting trends:
- Relatively few authors (about 500) are responsible for a disproportionate number of retractions.
- The majority of retractions have involved scientific fraud or other kinds of misconduct.
- The rate of retraction due to plagiarism looks to be stabilizing and possibly declining over the last 7 years.
- Retraction due to fake peer review has increased steadily and accounts for about 20% of all retractions (as of 2015 data).
Before you submit your next manuscript for peer review, double-check the Retraction Watch Database to ensure you’re not unknowingly citing a retracted paper.
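The database can be searched by hand, but if you keep your references as DOIs, the check is easy to script. Here is a minimal sketch, assuming you have exported a list of retracted DOIs from the database; the `RETRACTED_DOIS` set below is hypothetical placeholder data, not real retraction records:

```python
# Check a manuscript's reference list against a set of retracted DOIs.
# RETRACTED_DOIS is hypothetical sample data standing in for an export
# from the Retraction Watch Database.

def normalize_doi(doi):
    """Lowercase and strip common DOI URL prefixes so comparisons match."""
    doi = doi.strip().lower()
    for prefix in ("https://doi.org/", "http://dx.doi.org/", "doi:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
    return doi

def flag_retracted(references, retracted_dois):
    """Return the references whose DOIs appear in the retracted set."""
    retracted = {normalize_doi(d) for d in retracted_dois}
    return [ref for ref in references if normalize_doi(ref) in retracted]

# Placeholder DOIs for illustration only.
RETRACTED_DOIS = {"10.1234/fake.2015.001", "10.5678/fake.2012.042"}

my_references = [
    "https://doi.org/10.1234/fake.2015.001",
    "10.9999/fine.2020.007",
]

print(flag_retracted(my_references, RETRACTED_DOIS))
```

The prefix stripping matters because reference managers export DOIs in several forms (bare, `doi:`-prefixed, or as full URLs), and a naive string comparison would miss matches.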
Whether you are a veteran researcher or just beginning your academic career, you are probably familiar with the concept of peer review. In an ideal world, peer reviewers would politely request revisions and offer suggestions that would significantly improve your publications. In reality, peer review can be rude and unproductive. Here are some suggestions to improve both sides of the peer review conversation.
Recent developments in scientific publishing have many folks scrutinizing open-access journals a bit more closely. A journalist with Science concocted a fake manuscript that, in his words, was a “credible but mundane scientific paper, one with such grave errors that a competent peer reviewer should easily identify it as flawed and unpublishable.”
[Update 11/12/13 (wow, what a great date!): Here’s a post-sting interview with the journalist, John Bohannon.]
In total, he submitted minor variations on the same manuscript to 304 open-access journals, of which 154 accepted the paper for publication. (None of the manuscripts was published. After acceptance, Bohannon would admit to uncovering a “serious flaw that invalidates the conclusions” and withdraw the paper.) Of the 154 journals that accepted the flawed manuscript, it appeared that 82 performed no peer-review before accepting.
While these results may seem disheartening for open-access publishing, it’s worth noting that no comparable investigation was performed on “traditional” journals that operate by subscription (see rebuttals on the Scholarly Kitchen, and the Guardian websites). So, there’s no way to say that open-access journals are different or worse than traditional journals in terms of their peer-review process. (Would be interesting to see the results of that study.)
Professional organizations exist, such as the Committee on Publication Ethics (COPE), that provide guidelines for peer reviewers and publishers to follow. COPE admits to having some of the journals uncovered by the sting on its list of vetted publications and vows to reexamine its approval process. A list of possible predatory publishers and journals is maintained by a librarian at the University of Colorado – Denver. From the results of this open-access sting, it appears that this “predatory” list is fairly good at spotting questionable practices, though some journals listed as predatory correctly rejected the flawed paper.
Next time you’re sending off a manuscript for review, pay close attention to the journal you or your colleague has selected. It may be worth gathering a little more background information about the practices and people behind the journal. Do you recognize folks on the editorial board? Have your colleagues published in this journal before? For the long-term health of your career and reputation, it’s better to get 1 or 2 high-quality publications on your CV than 3 times that number at journals that may operate with questionable practices and intentions.
It can be difficult to make sense of all the contradictory medical advice presented in the media. Thankfully, there are some great resources that independently review medical studies and make recommendations that doctors and patients can understand.
Choosing Wisely collects recommendations from professional medical societies in the US and gives a list of 5 things to watch out for in each specialty. This website is a very succinct place to access information for tests and treatments. Here are just a few examples:
- Most of the time, adults don’t need antibiotics for a sinus infection.
- “Most people with lower-back pain feel better in about a month whether they get an [MRI, CT scan, X-ray] or not.”
- Scheduling a baby’s delivery early for the doctor’s or mother’s convenience is usually not a good idea.
The NNT (short for “Number Needed to Treat”) categorizes medical treatments on a stoplight paradigm: Green, Yellow, or Red. Green signifies treatments that show a clear benefit, Yellow represents an “undecided” category, and Red signifies that no clear benefit has been demonstrated. There is another category, Black, which signifies that the harms of the treatment outweigh the benefits. The reviewed therapies cover every major medical specialty. The Green light is given to steroid use for asthma attacks, nicotine replacement therapy to stop smoking, and aspirin for cardiovascular protection, just to name a few. Showing up on the Black list are PSA (prostate-specific antigen) tests for prostate cancer screening and vitamin D to prevent fractures in the general population.
The Independent Drug Information Service takes the promotional information out of drug marketing and, as above, looks at hard data to evaluate a drug’s cost-effectiveness. Information is available for both clinicians and patients and is grounded in reality. Where applicable, lifestyle changes are suggested before taking drugs (e.g., diet and exercise changes to reduce cholesterol). Several drugs are listed for each condition, with recommendations of which to try first.
The Cochrane Collaboration may be the best known independent entity reviewing medical reports. They publish the Cochrane Reviews, which take a critical look at data from many previous studies. The reviews are updated regularly and many of the reviews are available in Spanish. Generally, information provided by the Cochrane Collaboration is geared more towards health-care providers.
It is refreshing to see these independent organizations keeping medical care honest and cost effective. And most of these organizations do so without multi-million dollar budgets to compete with the pharmaceutical and medical device companies. I’d be happy to hear about other organizations that are providing similar services.