According to the CQC’s own figures, to date (April 2018) they have given ratings to 17,498 residential care services and 9,039 home care services. This means that a total of 26,537 adult social care services have been given ratings since the introduction of the ‘new’ inspection regime in 2014. Of these, a total of 560 were rated as ‘outstanding’, 20,041 were rated ‘good’, 5,111 were rated as ‘requires improvement’ and 857 were rated as ‘inadequate’. This means that although the vast majority of services (approximately 77.5%) were either ‘good’ or ‘outstanding’, a significant minority (approximately 22.5%) were either in serious trouble or were in need of urgent improvement. And in case those providers whose services were rated as ‘good’ are patting themselves on the back, it might be worth mentioning that ‘good’ should really be seen as ‘adequate’, i.e. as what you would expect from this kind of service, rather than being ‘good’ in the normal sense of the word.
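For anyone who wants to check the headline percentages, here is a minimal sketch in Python, using only the four rating counts quoted above (the percentages are computed from the sum of those counts):

```python
# Rating counts quoted in the text (CQC, April 2018)
ratings = {
    "outstanding": 560,
    "good": 20_041,
    "requires improvement": 5_111,
    "inadequate": 857,
}

total = sum(ratings.values())
top = ratings["outstanding"] + ratings["good"]
bottom = ratings["requires improvement"] + ratings["inadequate"]

print(f"good or outstanding:    {100 * top / total:.1f}%")
print(f"needing improvement:    {100 * bottom / total:.1f}%")
```

Running this gives approximately 77.5% and 22.5%, matching the figures in the text.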
The key question, though, is: what do these figures tell us about what’s going on in these services? More specifically, what do they tell us about what’s going wrong? Interpreting such data is never as straightforward as it might seem, but a good place to start would be to look at what the CQC report actually says about the service. As many readers may be aware, the CQC inspects all health and social care services by asking five questions, which equate to five key ‘domains’ of the service. The inspection team looks at whether each service is: safe, effective, caring, responsive, and well-led. Furthermore, each domain is rated individually (as ‘outstanding’, ‘good’, ‘requires improvement’ or ‘inadequate’) before the ratings are aggregated into one overall rating.
Let’s suppose that a report for a particular care home or home care service is rated as ‘inadequate’ in terms of being safe, effective and well-led (which, not surprisingly perhaps, would give it an overall rating of ‘inadequate’). How do we ‘deconstruct’ these ratings to get a better understanding of what they tell us about the culture of the service? To start with, it would be helpful to see what the CQC says about each of these domains, and the specific issues that they highlight as being a problem. So, for the sake of argument, let’s assume that the report states that there were issues around lack of staff training, high staff turnover and a heavy reliance upon agency staff, poor medicine management, a lack of personalised care, and poor systems of monitoring and evaluating the service.
At this point it might be tempting to home in on one particular area and then ‘extrapolate’ it to explain problems in other areas. So in this case we might look at the staff turnover figures and the heavy reliance on agency staff. This might explain the lack of staff training (what provider is going to invest in training agency staff who may not be there in six weeks’ time?), and also poor medicine management (are agency staff going to be properly trained in this area?). It could also explain the lack of personalised care, which may be due to a lack of staff continuity and consequently a lack of opportunity for staff to get to know each client. On the other hand, this does not explain why there is high staff turnover in the first place. Furthermore, it does not explain the poor systems of monitoring and evaluation.
Another place to start might be with the poor systems of monitoring and evaluation themselves. How might this explain the other problems? One way to look at this is to think about the impact of such poor systems; essentially they would mean that the management of the service would not really know what was going on – and more critically, where things were going wrong.
This raises a more fundamental issue, which is often overlooked when thinking about quality assurance and the whole question of monitoring and evaluation: the ‘gap’ between what’s actually going on in a service and what management know about what’s actually going on. It’s also the problem that regulators such as the CQC face: they rely on evidence collected over a few days (at most) of inspection, but can this really tell them what’s happening on a day-to-day level in the service? There is no easy answer to this problem, and it’s one that managers, management consultants, regulators and auditors constantly struggle with.
Going back to our example: if the management of a service does not know that there are issues with staff training, staff turnover, medicines management and personalised care, where does the problem really lie? After all, it could be argued that these problems shouldn’t be arising in the first place, regardless of whether they are being picked up in audits, feedback questionnaires, supervision, etc. However, this seems to portray a rather idealised (and somewhat naive?) picture of what actually happens in residential and home care services. Things are continually ‘going wrong’ in such services, in the sense that, like any organisation, they are in a state of constant flux and change.
In fact, it could be argued that if things weren’t ‘going wrong’, at least to a certain extent, it would mean that the service wasn’t functioning at all, that it was essentially moribund. In my view, the purpose of management is to ‘manage’ this process of constant flux and change in a way that (hopefully) continually drives up standards, or at the very least, prevents them from falling. And in order to do this, managers need to know what’s going on in all aspects of the service, and this comes back to the question of how best to (continually) monitor and evaluate the service.
Having carried out a good number of CQC inspections, I would argue that many of the problems I encountered would not have occurred if there had been effective systems and processes of monitoring and evaluation in place – and, most crucially, if the information that these systems and processes generated had actually been acted upon. So, going back to the original question regarding how to interpret or deconstruct CQC ratings, and what they tell you about the service, my first question would be: how much of this is news to the management, to the provider?
- Or rather, one they ought to be struggling with! Often, I think, it is taken for granted that the data equals reality, rather than a particular, abstracted version of it. [↩]