Why Evidence Matters—And How We Can Access Evidence That Matters


Scientific evidence is slippery. One consequence of its evanescent nature is that scientific fads abound, even in highly legitimate fields of medicine and psychology. And even outside faddish interventions, we hear so frequently that the currently reigning therapeutic emperor has no clothes that we may as well accept that he’s a nudist. In part, this is due to the nature of evidence and the mechanics of scientific investigation: no finding is immune to disproof. As we delve into a problem, our understanding of it deepens, and this affects how we approach and treat common concerns like depression, hypertension, and the like. Is cognitive processing therapy the only valid treatment for PTSD, as some in the VA would like us to believe? Does low-dose aspirin for elders do more harm than good? Is estrogen replacement therapy in post-menopausal women riskier than it is worth? These treatments, once accepted as routine, have all been questioned, and with good reason.

Generally, treatment fads are fairly benign, but occasionally they are not. In psychiatry, it can be argued that the field’s wholesale embrace of serotonergic antidepressants was a fad based more on clever drug marketing than on science. In some ways this was a relatively benign fad, because one antidepressant works just about as well as another and because serotonergic antidepressants were generally safer than the medications that preceded them. But in other ways it was more damaging, because providers emphasized treatment with such drugs when non-pharmacological treatment worked just as well for many. Other psychiatric fads, like rapid neuroleptization or the dark practice of prefrontal leucotomy (lobotomy), were more destructive and left trails of permanently broken patients in their wake. But medical fads are certainly not limited to psychiatry. Beyond estrogen replacement therapy and low-dose aspirin, examples abound, such as the formerly routine tonsillectomies and adenoidectomies in children. Students of military medicine may recall another peculiar fad: for a long time it was mandatory for budding submariners to undergo appendectomies, until it became clear that the losses from surgical mishaps far outweighed the risk of a ruptured appendix while submerged.

We in psychology may think that we’re immune to fads, or if not completely immune, that we have some profession-specific antibodies. After all, our treatments are noninvasive and fairly benign: what does it matter whether a patient gets CBT instead of CPT? Well, sometimes it matters a lot, as it did for those who underwent forced sterilization because a psychologist classified them as a “mental defective” on the basis of a culturally inapplicable or poorly administered IQ test, or for those subjected to court-ordered hormone injections because their responses to a Szondi test indicated “homosexual tendencies.” Fads such as encounter groups that forced disclosure of sensitive personal information led in some instances to severe decompensation. Most of us who have been in practice for a while remember the repressed memory craze, which sometimes had very real legal consequences for those identified as malefactors in these “memories.” In an episode that strained my credulity to the utmost, I personally had to deal with a psychologist whose testimony regarding satanic ritual abuse (a variant of the repressed memory phenomenon) led to the imprisonment of innocent people.

But those examples are laughable, you say. Those things happened long ago, before we really understood what we were doing, and besides, not that many people were affected. Think again. The misapplication of psychological science is current, and its consequences are significant. The misuse of intelligence testing in death penalty cases is a very live issue before the US Supreme Court as I write this. More broadly, the misapplication of the psychology of emotional recognition underpins much of what is being touted as artificial intelligence (AI). In spite of emotional recognition being built into numerous platforms that may affect you directly, much of the underlying science seems shaky indeed. This type of AI, based on the research of the eminent social psychologist Paul Ekman, who claimed that all humans exhibit a basic set of facial expressions in response to six common emotions, purports to help big business recognize emotional responses to things like advertisements. But Ekman himself was involved in a far more serious collaboration (from which he later distanced himself) with the Transportation Security Administration that attempted to use emotional recognition to detect suspicious or terrorist activity. The FBI has also used emotional recognition in attempts to identify terrorists, according to a comprehensive study published by the Association for Psychological Science that challenged the underlying accuracy of emotional recognition in AI. To no one’s surprise, emotional recognition algorithms may not accurately determine the emotions of people from non-Western cultures. Other research has shown that facial recognition AI is more accurate for European-Americans than for other ethnicities, and less accurate still for people with darker skin tones, possibly because light-skinned males are overrepresented in the underlying databases.

In short, it is not wise to presume that psychology is immune to either societal or scientific fads, nor is it accurate to assume that our involvement in such fads is inconsequential. An emerging treatment that, to me, has all the markings of a fad is the use of ketamine. I’ve written about this before, so I won’t dwell on it now, but suffice it to say that the evidence base for ketamine is flimsy at best. This is not to deny that ketamine may help a few people. It is also true that its use may represent the unexpected resurrection of a very ancient form of mental health treatment called ecstatic visionary shamanism, though I doubt that the physicians administering ketamine view it in those terms. And there might be some therapeutic benefit to just ‘getting high’ – a point I discussed in last month’s column regarding a compound found in cannabis called cannabidiol, or CBD.

Some fads are harmless, but others that seem so may actually wreak more damage than we think. Last year, an internet celebrity named Kanye West made news by engaging in an unvalidated form of treatment called “scream therapy” (at least I think so – in the absurd realm that makes up most of the internet, it is increasingly difficult to distinguish reality from fantasy). Evidently Mr. West is too young to remember an earlier version, briefly popular in the early 1970s, called “primal screaming,” championed by a legitimate psychologist, Arthur Janov, but rapidly abandoned in the absence of a legitimizing evidence base. So what’s the fuss, we ask? Primal screaming, if nothing else, was good clean therapeutic fun, harmless albeit useless. But that is probably not true. After all, the goal of psychotherapy in most instances is to correct emotional incontinence, not foster it. As clinicians, one of our principal therapeutic endeavors is to apply insight or reasoning to modulate negative perceptions and emotions regarding self or others. Forced or inappropriate disclosure, as was practiced in other unvalidated forms of therapy, like the aforementioned encounter groups and later Critical Incident Stress Debriefing, can and did lead to emotional damage.

So evidence matters. Therefore, how we communicate evidence from the research side to the clinical side of psychology also matters. And the trouble is that for a long time we haven’t been doing a very good job of communicating research to practitioners. Let’s face it: the average clinician doesn’t read much of what his or her research colleagues publish. Most articles in the psychological literature have an extremely limited readership. This isn’t because clinicians are lazy or unethical. It’s because the mechanisms by which we communicate scientific evidence do not meet the needs of clinicians very well.

Randomized clinical trials (RCTs) are the current gold standard for psychological research. Indeed, the RCT, an invention of the mid-20th century, is an extraordinarily powerful heuristic, so there is ample reason why it has become the prevailing investigative rubric of our time. RCTs provide statistical validation for most of the interventions we view as evidence-supported today, but they have also become essentially the only recognized means of scientific communication in psychology. This does not serve the field well.

In mental health, RCTs largely took the place of uncontrolled trials and individual case studies. Whereas RCTs examine differences between highly controlled and tightly selected groups, case studies provided a mechanism for assessing individual differences in rich descriptive terms. In many ways, case studies more closely address the needs of practitioners than RCTs do. Clinicians are more concerned with differences between individuals than with differences between groups. Their work depends on the accuracy of clinical observations, and they do not have the luxury of controlling for comorbidities, pre-existing conditions, or multiple interventions. And while it is true that clinicians should be aware of the methods employed in an RCT, once a basic threshold of soundness has been passed, the nuances of statistical analysis are of little interest to the clinical reader. Yet we persist in focusing on method and results in our publications, and limit our discussions to the hypotheses we set out to disprove. Stripped of applicability and context, RCTs lose much of their utility for clinicians.

The research community is not unaware of this. Since Paul Meehl famously eschewed case conferences many decades ago, a number of investigators have challenged the notion that observational techniques are unreliable and consistently overstate results. I think it is time we re-examined our attitudes toward them. We can, I believe, benefit from both the predictive power of the RCT and the descriptive power of observational inquiry. Readers of the Journal of Health Service Psychology will recognize our effort to combine both strategies in order to provide clinicians with usable insight of direct applicability to practice. All articles in JHSP are research-based, yet they present that research in context and, more importantly, in a way that benefits from the observational insights of experts in a particular disorder or patient population. In January 2020, JHSP will begin a new partnership with Springer, and the journal will be distributed to any library or institution that subscribes to Springer’s medical/clinical science package.

Obviously, this means of communicating scientific data is not completely novel, nor is it a panacea for fads or unsupported clinical interventions. But if others adopt the strategy of combining scientifically supported data with expert clinical observation, I believe we stand a better chance of meeting the needs of the provider in the field.

Copyright © 2019 National Register of Health Service Psychologists. All Rights Reserved.
