“Hey DRR, we did a study and the conclusion is that you are incompetent.”
“While this may be true, can I enquire as to your study design?”
“We did a survey study and 67% of respondents agree that you aren’t fit to be the editor of a journal. We did all the statistics and the P value is < 0.001.”
“I’m curious how you decided who to survey?”
“Well, Bob and I don’t like you and my wife thinks you are okay, mostly because she doesn’t really know you.”
At the BCMJ we review all sorts of submissions for publication, and we appreciate the work that goes into designing and carrying out a scientific study. That said, one thing that drives us a little crazy (particularly the editor) is low-response survey studies. Surveys are handed out, collected, tabulated, and subjected to rigorous statistical analysis, complete with P values, which all looks very impressive. The problem: many of these surveys have response rates of less than 20%, from which no meaningful information can be obtained. The assumption that the more than 80% of people who didn't respond would have completed the survey the same way as the respondents is just that: an assumption. What if that 80% couldn't be bothered to complete the survey because they really disliked something about it?

Good survey studies are easy to spot. The target population is clearly defined, and follow-up contact is made on several occasions to increase the response rate. The authors also discuss the limitations of their survey in the paper. Here at the BCMJ we don't really look at a survey study unless the response rate is well over 50%.
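The arithmetic behind this complaint is easy to make concrete. The sketch below uses hypothetical numbers (a survey sent to 1,000 people, a 20% response rate, 67% agreement among respondents) to show how wide the range of possible true results is once the non-respondents are accounted for:

```python
# Hypothetical illustration: 1,000 surveys sent, 20% response rate,
# 67% of respondents agree with the survey's statement.
sent = 1000
responses = 200                 # 20% response rate
agree = int(0.67 * responses)   # 134 respondents agree

# The headline figure considers only those who answered.
reported = agree / responses    # 0.67

# The true population figure depends on the 800 silent people.
# Best case: every non-respondent also agrees; worst case: none do.
best_case = (agree + (sent - responses)) / sent
worst_case = agree / sent

print(f"Reported among respondents: {reported:.1%}")
print(f"Possible true range: {worst_case:.1%} to {best_case:.1%}")
```

With these numbers, the "67% agree" headline is compatible with anything from 13.4% to 93.4% agreement in the full target population, which is why the respondents-only P value impresses no one.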
Now, I don’t want to discourage prospective authors; I only want to offer advice on how to increase the chance of publication. Handing out program evaluation surveys in a haphazard fashion, without regard to random sampling techniques or the total number of potential respondents, is a waste of everyone’s time and doesn’t lead to conclusions that can be acted upon.
Okay, I’ve said my piece and have ranted enough.