Thursday, June 28, 2007

Psychiatry - No Evidence

http://www.clinicalpsychiatrynews.com/article/PIIS0270664407703478/fulltext

Clinical Psychiatry News

Evidence has to be clinically relevant.

DR. MICHELS is professor of medicine and psychiatry at Cornell University, New York.

Our current evidence base in psychiatry is not totally irrelevant to the clinical task of treatment planning and selection. But it comes dangerously close.

What has been offered as evidence is only data. Evidence is data that are useful in making decisions. Much, probably most, of the data available about treatment in psychiatry fails that test in the clinician's office.

The treatments that are studied are only a portion (and a strongly biased portion) of the available array of treatments. The treatments chosen for study by the sponsoring industries or by academics aren't the ones of greatest clinical importance. Consequently, we have evidence neither for nor against many of the most popular treatments in contemporary psychiatry. We have a huge amount of data about treatments that have little public health value.

Most of the studies are sponsored by industries or commercial organizations that have an interest in the outcome of the study. Extensive research--evidence, if you want to call it that--has demonstrated that sponsored studies are consistently biased, and therefore a poor basis for clinical decisions.

Most of the studies in the literature use control groups that are rarely treated with the optimal alternative to the treatment being investigated. Instead, they use as the comparison "treatment as usual" in the community. They neither select the best practitioners in the community nor give them the best resources. Any treatment provided by enthusiastic researchers can beat "treatment as usual."

Patients in most studies are highly selected and often atypical. Studies usually rule out patients with comorbid disorders. Do the patients who enter your offices arrive without comorbidity?

Frequently, the treatment duration studied is inadequate and the follow-up so short that it is almost irrelevant to the clinical problems. The patients we treat in our offices have intermittent and recurring disorders that persist for years. The treatments we use ought to be evaluated on the basis of their long-term impact on the course of the illness, not on their short-term effects on acute symptoms.

Standard outcome measures are much too narrow, usually focusing on the diminution of presenting symptoms rather than on global function. For example, we have reams of data showing the "efficacy" of drugs in reducing the positive symptoms of schizophrenia in acute episodes, but few data about the more devastating problems patients face--their long-term functioning, neurocognitive deficits, and secondary symptoms.

A recent analysis of depression treatment studies suggested that the quality of the therapeutic relationship between the treater and the patient, and the patient's pretreatment personality, were both more important in determining outcome than the specific treatment delivered. If the factors that drive outcome aren't the ones being compared, then a study designed to pit specific therapy A against specific therapy B will yield data at the end, but not evidence.

We need evidence relevant to clinical decision making. We don't have that evidence yet. In the current setting, we don't have much chance of getting it. We need science-based psychiatrists and evidence-based clinical judgment rather than rhetorical slogans about evidence-based treatments.

 

...
