It was interesting to watch Sherlock the other night, i.e. the BBC’s ‘updating’ of Conan Doyle’s famous detective stories for the twenty-first century, with Benedict Cumberbatch starring as the sleuth in question. What particularly fascinated me, apart from the liberties that this ‘new’ version was taking with the plots (but that’s nothing new of course), was the portrayal of our twenty-first-century Holmes as someone with incredible powers of observation and deduction. In a glance Holmes can tell a person’s life story (or at least what they were doing the night before). How does he do this? First he sees: he sees what others fail to see, the detail, the minutiae of a person’s appearance. But mere observation is not enough; Holmes, of course, does more: he sees and then he deduces. He deduces from a set of observations something about the person, the criminal. He knows what they are like, what they are likely to do next (apart from his nemesis Moriarty of course).
What has this got to do with psychoanalysis? What, indeed, has this got to do with evidence and science? To answer the second question first: for many people, Sherlock Holmes is the archetypal scientist. Not only does he turn detective work into a science, but in a sense he is a model for the scientific method – or rather, for what many people think the scientific method is. You observe and you draw conclusions. So Holmes, in all his incarnations over the years, is the archetypal empiricist.
The problem is, this is a myth, a fantasy. It has nothing to do with how science actually works. To start with there is a problem of semantics: can you deduce anything from an observation? Deduction is a logical process that reaches a conclusion based on propositions. But these are statements written in some form of language, either natural or mathematical. Of course, you can ‘translate’ empirical observations into logical propositions, which is precisely what the logical positivists aimed to do. But these propositions have very little to do with the original observation.
If anything, Holmes was an inductivist. This is another way of saying that he made certain assumptions based on his observations, which has got nothing to do with deductive logic. As an inductivist Holmes would, at least mentally, be calculating the probability that a particular observation meant something particular about the subject in question. He may even have been carrying out a series of mental hypotheses to test a particular theory about the subject. Of course none of this makes a good detective story or gripping TV.
But if we stay with the idea that Holmes is an inductivist then he is not that far away from what many psychologists do when they are formulating theories. However, psychologists, in an attempt to emulate what they think ‘real’ scientists, i.e. physicists, do, endeavour to be more formalised than Holmes. They try to carry out controlled experiments, i.e. experiments that control a set of variables. They then record the results of such experiments and subject them to statistical analysis.
The problem for psychologists, however, is that it’s often very difficult to conduct controlled experiments with their human subjects – though there are many (in)famous examples of this being attempted over the years. Human beings have a nasty habit of being unpredictable and even exercising free will at the most awkward of moments. When it comes to trying to evaluate how effective a particular form of talking treatment is, things get even more complicated. How can you conduct a controlled experiment of a very intimate relationship? You don’t. You do the next best thing and conduct a randomised controlled trial (RCT). This is a well-established methodology in medicine, used widely, for example, in the testing of new drugs. You have two groups. One is administered the drug and the other, the control group, is administered a placebo. Of course, no one in either group actually knows whether they are receiving the real drug or a placebo. All other variables are carefully controlled.
The results of such a quasi-experiment are carefully monitored and subjected to a range of statistical analyses. This way the person conducting the RCT can gain a pretty good (statistical) sense of whether the drug was effective or not. When it comes to talking treatments things are trickier, because it becomes more difficult to ‘operationalise’ the interventions and to control the ‘variables’ of the therapist’s and client’s behaviour. One way round this is to do a meta-analysis of a range of RCTs, which, as the name suggests, involves an even higher degree of statistical abstraction. We seem to have travelled a long way from the naive empiricism of Sherlock Holmes. In fact we’ve arrived at evidence-based practice and the process by which NICE (the National Institute for Health and Clinical Excellence – no one knows what happened to the ‘H’ in the acronym) decides whether a particular talking treatment is efficacious, i.e. whether it works or not.
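To see just how far this takes us from simple observation, here is a minimal sketch of the statistical core of an RCT comparison. This is a toy illustration only, not NICE’s actual methodology: the outcome scores below are invented, and the comparison shown (Welch’s t statistic, a standard way of comparing two group means) is just one of the many analyses such a trial would involve.

```python
# Toy sketch of the statistical heart of an RCT: compare mean outcome
# scores between a treatment group and a placebo (control) group.
# All numbers are hypothetical, for illustration only.
from statistics import mean, variance
from math import sqrt

treatment = [12.1, 9.8, 11.4, 10.2, 8.9, 11.7, 10.5, 9.3]   # invented symptom scores
placebo = [14.6, 13.2, 15.1, 12.8, 14.0, 13.7, 15.4, 12.9]  # invented symptom scores

def welch_t(a, b):
    """Welch's t statistic: the difference between the two group means,
    scaled by the standard error of that difference."""
    standard_error = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / standard_error

t = welch_t(treatment, placebo)
# A large (absolute) t suggests the difference between groups is unlikely
# to be chance alone; here lower scores mean fewer symptoms, so a negative
# t favours the treatment.
print(f"t = {t:.2f}")
```

The point of the sketch is how abstract even this simplest step already is: what reaches the reader of the trial report is not anyone’s experience of treatment but a single scaled number summarising two columns of figures.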
I think it should be pretty clear by now that the kind of ‘evidence’ that is being constructed (and I use this word deliberately) through the process of RCTs and meta-analyses is anything but a series of observations of the ‘outside’ world or even of a person’s behaviour. Instead it is a complex, statistical construction. A great deal of science is. The question is, what does such a complex statistical construction tell us about how individual human beings experience the therapeutic relationship, or how, indeed, they experience anything at all? Very little, I would argue. And yet such complex statistical constructions are being passed off as the ‘truth’ of the therapeutic encounter. Or rather, they are being used to justify a particular form of therapeutic intervention – one, perhaps not surprisingly, which lends itself particularly well to randomised controlled trials.