The evidence for evidence-based medicine

Issue: BCMJ, Vol. 60, No. 1, January/February 2018, page 5. Editorial.
Brian Day, MB

Years ago, when the phrase “evidence-based medicine” was first being popularized, my friend and colleague Dr Bob Meek remarked, “What’s new about that?” He was correct; there was nothing new about the idea. In my experience, doctors have always tried to manage patients based on the best available evidence.

David Sackett, credited with popularizing evidence-based medicine, defined it as the “conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.” Who could argue with that?

One group that seeks to impose unproven top-down protocols on clinicians calls itself EvidenceNetwork.ca. They show how one can hijack a theoretically good concept and manipulate the model to create what they believe are valid guidelines pertaining to clinical treatments in which they have little or no expertise. That particular group lists nonclinical self-proclaimed experts who offer “expert guidance” in those areas.

I believe there are two kinds of experts: those who are biased and know it, and those who are biased and don’t know it. Few are capable of distinguishing between their own prejudiced beliefs and factual evidence. I subscribe to the view of the theoretical physicist Richard Feynman, who wrote, “Science is the belief in the ignorance of experts.”

History tells us experts are often wrong but seldom in doubt. Experts at Decca Records rejected the Beatles in 1962, stating, “We don’t like their sound, and guitar music is on the way out.” In 1927, Warner Brothers asked, “Who the hell wants to hear actors talk?” In 1974, Margaret Thatcher pronounced, “It will be years—not in my time—before a woman will become prime minister.”

Some naively believe that their self-generated consensus opinions constitute evidence, yet most, if not all, carry inherent biases. The world literature cannot reassure us about the value of published expert evidence, even when it is peer reviewed.

There are a number of research scientists who know and understand the limitations of their field. I have been involved in randomized, prospective, double-blind studies, and I support their use where appropriate.[1,2] I favor properly designed trials and objective studies when feasible, and I am lucky enough to have worked with some who exhibit scientific objectivity and understand the clinical role in analyzing research.[3]

The BCMJ is a peer-reviewed journal. We do our best to be objective, but it is vital that we recognize the deficiencies that exist in the process. In 1998, the editor of the British Medical Journal sent an article containing eight deliberate mistakes in design and analysis to over 200 peer reviewers. On average, the reviewers picked up fewer than two of the eight errors.

If written descriptions outlining Columbus’s experiences in the Americas and Darwin’s theory of evolution had undergone peer review by experts, both accounts would likely have been rejected as fantasy. Scientists at Amgen, an American drug company, could replicate only 6 of 53 published studies considered landmarks in cancer science, and John Ioannidis from Stanford has declared that most published research findings are probably false.

So what is the basis for our acceptance of evidence? One widely used approach is the concept of the “null hypothesis.” Data are collected and calculations made to determine significance. A P value below 0.05 (1 in 20) implies statistical significance. This test is commonly used, and commonly abused, in experimental studies and peer review. A famous example of harm, in a case involving thousands, occurred in the early 2000s, when many Vioxx users died as a result of excessive faith in the so-called 5% rule of statistical significance.
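The arithmetic behind that 1-in-20 threshold is worth seeing directly. The sketch below (illustrative only, not drawn from the editorial or the Vioxx data) simulates many studies of a treatment with no real effect and counts how often a standard significance test still crosses P < 0.05 by chance alone; the test and sample sizes are arbitrary assumptions for the demonstration.

```python
# Illustrative sketch: if enough null studies are run, roughly 1 in 20
# will cross P < 0.05 by chance alone. All names and sizes here are
# assumptions chosen for the demonstration, not from the editorial.
import math
import random

random.seed(1)

def z_test_p_value(successes, n, p0=0.5):
    """Two-sided normal-approximation test of a proportion against p0."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (p_hat - p0) / se
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))); two-sided p = 2 * (1 - Phi(|z|))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_trials = 1000    # simulated studies, all with NO true treatment effect
n_patients = 200   # patients per study; "response" is a fair coin flip
false_positives = sum(
    z_test_p_value(
        sum(random.random() < 0.5 for _ in range(n_patients)), n_patients
    ) < 0.05
    for _ in range(n_trials)
)
print(false_positives / n_trials)  # hovers near 0.05: "significant" with no effect
```

The point is not that P values are useless, but that a fixed 5% cutoff guarantees a steady supply of spurious “significant” findings whenever many comparisons are made, which is one way the test is abused.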

There is a strong case for decision making based on best practices and evidence-based decision making. But that evidence needs to be validated by clinical outcomes and not be dependent on self-appointed overseers who lack appropriate expertise and knowledge in terms of clinical outcomes.
—BD

References

1.    Taylor TV, Rimmer S, Day B, et al. Ascorbic acid supplementation in the treatment of pressure sores. Lancet 1974;2(7880):544-546.
2.    Chirwa SS, MacLeod BA, Day B. Intraarticular bupivacaine (Marcaine) after arthroscopic meniscectomy: A randomized double-blind controlled study. Arthroscopy 1989;5:33-35.
3.    Chambers GK, Schulzer M, Sobolev B, Day B. Degenerative arthritis, arthroscopy and research. Arthroscopy 2002;18:686-687.
