What is “Evidence based” information?
And why start talking of it now?
OK, there’s a double puzzle. Up until fairly recently (say, up until the start of the 21st century) this was not a phrase that appeared in the literature or, indeed, conversation. Certainly one would be asked “But where’s your evidence?” at which point one would supply one’s data or be drummed out as a charlatan, but here there is a different emphasis. Like trophies on a wall.
An old, traditional herbal remedy such as Arnica for bruises came backed with centuries of satisfied users. It always worked. Even conservative Yorkshire cricketers proclaim its effectiveness. Similarly, apply Greater Celandine sap to cure warts. Try it. It works, and with no known side effects (but DON’T eat it!).
Now what of homeopathy? A friend, a practitioner, has a similar attitude. “I do it because it works.” What further evidence does she need? Being objective, I want more evidence, and certainly I am a bit torn. I cannot see HOW it can work, and I know that a placebo would work well in her hands – she cares, she’s dedicated and, yes, she believes in helping people. And patients go back to her. Perhaps it’s better for the ill to visit someone like that rather than a GP who probably doesn’t know them, has little time for them, makes them wait endlessly in a waiting room, and prescribes drugs recommended by a computer printout and probably supplied on some bonus scheme by the travelling sales rep of a major pharmaceutical company. It has to be prescribed so the practice can attain its quota and show how proactive it is.
Formal “trials” of homeopathy are rarely supportive. Maybe not everyone knows what happened to Professor Benveniste at INSERM in Paris, but they should. It is a salutary tale, and it started with his disbelief, as director of his research unit, in the work of a colleague who had produced some data supportive of homeopathy. Being a fair man, Benveniste had the work repeated and, as it still worked, he drove the work further. Eventually a paper was published in Nature on a proposed “memory” in solutions diluted to effectively pure water. This “memory” was perhaps the “active ingredient” in homeopathic medication.
The reaction of the orthodox establishment was exactly like the recent hysteria over Dr Wakefield’s questioning of the safety of the MMR jab, given its apparent side effect of precipitating autism in a (large) number of recipients. They shouted “Foul!”, demanded a retraction, and then published a dismissal of the paper in Nature. And Benveniste lost his job. And his colleagues disowned him.
So that’s the establishment’s response to evidence. Analogous to Horatio Nelson: “What evidence? I see no evidence!” Except, of course, when the establishment produce the evidence. Then it is incontrovertible. You see it’s been “peer reviewed”. This means that the paper(s) have been read and assessed by someone who thinks in the same way as the authors and makes all the same assumptions. Of course they agree with the conclusions.
If a new chemical cure is derived, there are a number of reasons to question the above review system:
- Were the tests in vivo or in vitro?
- What side reactions were caused?
- How long did testing continue?
- Who paid for the research?
- Who’s making money from it?
- Were valid controls assessed?
- Is there a genuine need for this treatment?
I could go on!
What do medics, such as the Guardian’s pet doctor Ben Goldacre, mean when they talk of “evidence based medicine”? Is he just like my homeopath friend – “See, it works, so it’s OK to go on using it” – as well as, of course, adding the jeering “See, yours don’t work, and I can show that with these statistical tabulations”?
What’s the difference? One obvious difference is the character of the treatment, given the “bull in a china shop” nature of so many allopathic medicines. No gentle subtlety in these products – they’re often really strong. Your hair falls out, your gut flora die, you go dizzy or fall asleep, etc. These chemicals mean business.
But what is the rationale? Are they indeed technicians fitting drugs or other treatments to the patient in front of them? “Evidence”, as defined in Wikipedia, is the sum of recorded data accumulated by doctors and research and development teams and accessed via computer. For example, a particular treatment may be deemed to have been correct for 57% of previous users, with mortality arising in only 17% of cases. While these may not be very good odds, we feel that if we do nothing mortality is a 30% possibility within six months. Thus we recommend the use of this method as the most plausible way forward.
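The population-level reasoning described above can be sketched as a toy comparison of aggregate rates. The function name and figures here are purely illustrative (taken from the paragraph’s hypothetical example), not from any real clinical system:

```python
def recommend(treated_mortality: float, untreated_mortality: float) -> str:
    """Toy decision rule: pick whichever option has the lower aggregate
    mortality rate. It compares population averages only and knows
    nothing about the individual patient in front of the doctor."""
    if treated_mortality < untreated_mortality:
        return "treat"
    return "do nothing"

# The paragraph's illustrative figures: 17% mortality with the treatment
# versus a 30% possibility within six months if nothing is done,
# so the rule recommends treating.
print(recommend(0.17, 0.30))  # prints "treat"
```

The point, of course, is that such a rule is driven entirely by averaged past cases rather than by the specific patient being assessed.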
An apparent assessment of the patient’s individual needs is in fact a comparison with a number of previous cases on a global basis – or as wide as the doctor cares to cast his “net” – going by past general experience rather than by reference to the specific needs of the patient in front of him. It assumes a “norm” applicable to all patients – one size fits all – contrary to the concept of “individual need”. The “medical expert” is now masses of data stored online.
Are these reference cases credible? It puts in better context the use of so many case studies in the MMR saga – e.g. the huge numbers in the infamous “Finnish study”. But every time you need to know the context and timing of subsequent assessments. So the Finnish research did not do long-term follow-up – merely counting onset of autism in the first few days subsequent to a jab. Further, this pooling of cases clears MMR by ignoring the fact that other childhood jabs also seem to precipitate autistic syndromes. ASD rates increase in proportion to “total vaccine load”. They can state that rates of autism are the same in populations which did not receive the MMR because most of the comparator population received these other jabs. If they looked at the never-vaccinated group they’d find no autism and their argument would fall down.
Drug companies bringing new products to market are not going to look for evidence to counter their claims of a drug’s effectiveness, and they can control which questions are asked and at what stage. It’s an issue of information management rather than clinical effectiveness.
Think of the new Gardasil jab. How have they assessed its potential so far? How have they assessed its toxicity? What controls? But, of course, they “don’t need controls”. They will try to accumulate evidence over time to say that, for example, rates of cancer in the total population have fallen since the introduction of the jab. Ignoring data on any behavioural changes or population shifts in the interim period, the bare fact would be regarded as proof that the vaccine was improving general health in the population.
And in the meantime it is used on the basis of in vitro biochemical trials, a very incomplete understanding of the nature of the immune defence system, and the assurance that it is not toxic!
It seems to me that the phrase “evidence based medicine” is very misleading. It implies specific and accurate information used to tailor drug use to a particular patient’s condition, but it is in fact based on averages over a wide range of individuals and on data chosen to demonstrate the chemical’s effectiveness. More like “selective evidence based medicine”, it seems.
[Sent to Magda IP on 14 12 08 – updated 16 2 2013]