I’ve just been re-reading Freud’s An Outline of Psycho-Analysis1, which was his last attempt to give a concise exposition of his ideas and their clinical application. This paper was written in 1938, very near to the end of Freud’s life. I think this in itself is significant because by this time psychoanalysis was becoming established, if not totally accepted, in the discourse of psychiatry and allied ‘mental health’ professions.
Looking back over 75 years it’s sometimes hard to believe this, because today, at least in countries like the UK, psychoanalysis is by no means established. If anything it is looked upon either as a pre-scientific relic, or, if suitably modified, a useful ‘tool’ which, along with other useful ‘tools’, can be effective in some situations.
This would certainly have upset Freud; throughout his life he was very keen to establish psychoanalysis as a science. He often referred to it as a ‘psychology’, which implied he saw psychoanalysis as a form of rational enquiry (logos = reason). This raises the question, though, of the relationship between science and reason. And, of course, it also raises the wider question of what we mean by ‘science’ in the first place.
But does it matter? What if the critics of psychoanalysis are right and psychoanalysis is simply an interesting relic of the late nineteenth and early twentieth century? After all, this doesn’t stop it being an effective form of treatment.
If only things were that simple! In today’s climate, words like ‘science’ and ‘scientific’ have very powerful ideological overtones. ‘Scientific’ has essentially come to mean legitimate. And this can have very real, material consequences for practitioners, because it can mean the difference between earning a living and starving. Perhaps this is a bit of an exaggeration, but the point that I’m trying to make is that this is not just an academic debate.
One of the reasons it matters is because of the inexorable rise of evidence-based practice, which attempts to justify itself by an appeal to science. There are two interesting points to notice here: firstly, the equation of ‘evidence’ with ‘science’; and secondly what ‘evidence’ actually means in this context.
Certainly there is a long-standing tradition which does equate evidence with science. Or, perhaps it might be more accurate to say that science is being equated with evidence. This is the tradition of empiricism and, somewhat paradoxically, I think it could also be linked to a particular interpretation of phenomenology.
There is undoubtedly a certain appeal to empiricism: after all, it’s a question of what you see is what you get – or rather, what you see is what there is. Most scientists would probably struggle with this type of (naive) empiricism, but it appears to be firmly lodged in the public consciousness. After all, isn’t a scientist someone who observes, even if this observation takes place under controlled conditions (often referred to as experiments)?
One of the problems here is that a great deal of science has very little to do with direct observation. Think about particle physics for example. Nobody can directly observe sub-atomic particles; rather they can observe their effects, which are represented as patterns on a computer screen. A psychoanalyst might equally well argue that no-one can directly observe the unconscious; but the effects of the unconscious manifest themselves all around us.
Of course, a particle physicist might argue that a great deal of what they do can be modelled mathematically; in fact the experiments, such as those that take place in giant particle accelerators, are simply to test the model, to show that the equations are correct. But by this point we are already very far away from any form of naive empiricism. We are now arguing that perhaps science is about the testing of theories. Karl Popper put it somewhat differently: science was about the falsification of hypotheses. This idea was taken up by many ‘new’ sciences, including psychology, which was keen to demonstrate its scientific credentials.
The problem here is that the actual process of falsification is a statistical one, not one based on direct observation. In fact, the purpose of observation is simply to gather data, which is effectively a process of abstraction. In other words, it’s not simply a case of observing something, for example how two people interact with one another in a room. Rather, it’s about deciding what counts as useful data in that interaction, which can then be subjected to statistical analysis.
And this brings me onto the question of what counts as ‘evidence’. In the world of evidence-based practice, ‘evidence’ is actually the end result of a complex statistical analysis that is carried out on the results of a series of randomised controlled trials (RCTs). This type of meta-analysis, to give it its proper statistical name, is about as far as you can get from ‘direct’ observation.
And these types of analyses just happen to be the basis of what organisations such as the National Institute for Health and Clinical Excellence (NICE)2 use in order to decide what constitutes efficacy in talking therapies. In other words, NICE use the results of meta-analyses of a series of randomised controlled trials on the efficacy or effectiveness of different types of talking therapy in order to decide which one has the best evidence base.
Bearing in mind that there have been far more meta-analyses of RCT data from studies of cognitive behavioural therapy (CBT) than of any other form of talking therapy, it is hardly surprising that NICE has decided that CBT is the therapy of choice for a whole range of mental health problems. It’s not because other therapies don’t work (although this raises the question of what we mean by ‘work’ in the first place). It’s simply that other therapies have not been subjected to RCTs to anything like the same extent as CBT has.
Not surprisingly perhaps, this has led many practitioners in other traditions, including psychoanalysis, existential therapy, and person-centred therapy, to try to find ways to make their own approaches as ‘evidence-based’ as CBT appears to be. And this means finding ways to devise randomised controlled trials that would work with these different approaches.
Many readers will probably be aware by this point that there are a myriad of philosophical, conceptual and practical difficulties with this definition of ‘evidence’. Apart from anything else, what has it to do with the lived experience of the therapeutic relationship? Furthermore, what has it to do with a lived experience which is different for each person? By definition, such experiences cannot be subjected to statistical analyses.
And this still leaves another question: what is the relationship between evidence and science?