The CQC’s new ‘insight model’

In May this year the CQC published its new strategy for 2016-21 [1] and although it’s clear that this is being driven primarily by financial pressures, there is one particular aspect that stood out for me.  This is the proposed introduction of their new ‘insight model’, which is the basis for what the CQC describes as an ‘intelligence-driven approach to regulation’:

We will build a new insight model that monitors quality. We will inspect all new services, but then focus our follow-up inspections on areas where our insight suggests risk is greatest or quality is improving. We will update ratings where we find changes. By targeting our inspections, we will recognise improvement, and identify and act on poor care. We will make more use of unannounced inspections and focus on building a shared understanding of the local context and the quality of services between inspectors, providers and partners. When we register new services, we will look at risk levels and be flexible in our approach. (p.9)

A key aspect of this new model will be the CQC’s reliance on providers supplying much more quality information to the regulator, and to facilitate this the CQC is going to move all its transactions (communications) with providers online.  At the same time the CQC will be encouraging providers to develop their own systems of quality assurance (QA) based on the CQC’s five key questions (domains), and to share these with the CQC ‘as part of an ongoing conversation about quality’ (p.11).

This idea of developing an ‘ongoing (and online) conversation about quality’ could, in my view, present some serious challenges to providers, especially (though not exclusively) smaller, non-corporate ones that lack the resources of larger providers to set up and manage these kinds of systems.

This is not to say that smaller providers do not carry out quality assurance on a regular basis; in my experience most do, and often very effectively.  However, a great deal of this tends to be ‘informal’, often because the managers (and sometimes even the owners) of such services are very ‘hands-on’ and therefore aware at all times of how the service is functioning and whether any problems are brewing.  The problem here is that much of this information is ‘in people’s heads’ and is not recorded anywhere.  This matters for two reasons: firstly, if there is a serious incident or problem with the service there is no audit trail; and, secondly, this ‘informal’ approach may work for a small service with a limited number of service users and staff, but once the service starts to expand it is simply no longer feasible.

But perhaps the key challenge is not only for providers to develop effective quality assurance systems that collect and analyse relevant information on a regular basis, but also how to ‘align’ these systems with the CQC’s own methodology, i.e. with the five domains of ‘safe’, ‘effective’, ‘caring’, ‘responsive’ and ‘well-led’.  There are a number of off-the-shelf ‘tools’ for facilitating this process, but these tend not to be customisable to the needs of a specific service.  Furthermore, a number of these ‘tools’ are paper-based, rather than using a spreadsheet or database, which makes information collection, collation, analysis and sharing more difficult.
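By way of illustration, here is a minimal sketch of how QA evidence might be tagged against the five key questions using nothing more than a short script.  The field names, example entries and file name are my own illustrative assumptions, not a format prescribed by the CQC; a spreadsheet with the same columns would achieve exactly the same thing.

import csv
from collections import defaultdict

# The CQC's five key questions (domains).
DOMAINS = {"safe", "effective", "caring", "responsive", "well-led"}

# Illustrative QA log entries; in practice these might come from audits,
# surveys, meeting minutes or complaints records.
qa_log = [
    {"date": "2016-09-01", "domain": "safe", "source": "medication audit",
     "finding": "two recording errors", "action": "refresher training booked"},
    {"date": "2016-09-05", "domain": "well-led", "source": "staff meeting minutes",
     "finding": "supervision sessions overdue", "action": "supervision rota revised"},
]

def collate_by_domain(log):
    """Group QA entries under each of the five key questions."""
    grouped = defaultdict(list)
    for entry in log:
        if entry["domain"] not in DOMAINS:
            raise ValueError("unknown domain: " + entry["domain"])
        grouped[entry["domain"]].append(entry)
    return grouped

def export_csv(log, path="qa_log.csv"):
    """Write the log to a CSV file that could be shared or uploaded."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["date", "domain", "source", "finding", "action"])
        writer.writeheader()
        writer.writerows(log)

if __name__ == "__main__":
    for domain, entries in sorted(collate_by_domain(qa_log).items()):
        print(domain + ": " + str(len(entries)) + " item(s) of evidence")
    export_csv(qa_log)

The point is not the particular script but the principle: once each piece of evidence carries a date, a domain and a source, it can be collated, trended and shared in whatever format the CQC eventually asks for.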

If the CQC is serious about wanting (or encouraging) providers to collect and share quality assurance information on a regular basis, then it would be helpful if it could provide some more specific and detailed guidance on this, especially in terms of the format of such information, bearing in mind that it would (presumably) want this to be uploaded to its servers when requested.

Whilst we are waiting for such guidance, I would suggest there are some practical steps that providers can take to facilitate the process.  The first is to be clear about what information to collect.  This raises the question: what do managers need to know in order to be satisfied that they are running a safe, effective, caring, responsive and well-led service, bearing in mind that they are constantly inundated with all types of information?

Under the ‘old’ CQC inspection framework, QA essentially sat under outcome 16, which looked at how the provider monitored and evaluated the quality of the service.  In the ‘new’ framework there is a specific key line of enquiry (KLOE) regarding QA (in the ‘well-led’ domain), but in reality there is a QA dimension to all five domains.  However, I still think the ‘old’ outcome 16 is a useful starting point for managers and providers when thinking about the types of QA information to collect.

Typically, when I was working as a CQC inspector and wanted to look at this outcome, I would be particularly interested in stakeholder feedback, e.g. service user and staff surveys, minutes of service user, relatives’ and staff meetings, and complaints information.  I would also ask to look at service audits and, if they were available, the results of the service’s monthly or quarterly internal reviews.  Some services, especially the larger ‘corporate’ ones, would (and still do) regularly ‘inspect’ themselves, often using the CQC’s own methodology.

The key point about all this type of information is that it is either based on stakeholder feedback (e.g. service user and staff survey results), or it is trying to construct a ‘bigger picture’ of the service, e.g. through regular ‘self-inspections’.  Even the audits I mentioned above are being used to help construct such a ‘bigger picture’.

Another important question relates to how often QA information needs to be collected.  There is no single answer to this: stakeholder surveys may only be carried out once, or at the most twice, a year, whereas audits may be carried out monthly or even weekly.  Much of this depends on the resources available to, for example, conduct surveys, facilitate meetings, and so on.

This immediately raises another question: how are busy managers and administrators going to find the time to carry out regular audits and surveys, facilitate regular staff and service user meetings, and so on?  Like most things relating to time, it’s really a question of priorities: how much does QA matter to the provider?  Judging by some of the services I’ve visited in the past, not a great deal.  However, if the CQC is going to be requesting regular QA updates in the future, perhaps attitudes to quality assurance are going to have to change.

The third question relates to the format of such information.  I’m thinking here particularly about paper-based versus digital.  In my experience a surprisingly large number of providers still use paper-based systems to store QA information.  This makes it much more difficult to analyse, especially when it comes to quantitative data such as audit or survey results, and it also makes it much more difficult to share with others.  The irony is that virtually all providers already have the tools to hand, in the form of spreadsheet, database and word processing software, that would allow them to collect, collate, analyse and share all the key QA information.  It’s really a question of managers and other relevant personnel taking the time to learn how to use these tools effectively.
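To make this concrete, here is a minimal sketch, assuming survey responses have already been typed into a spreadsheet and saved as a CSV file with one row per respondent and one numeric column (scored 1–5) per question.  The file name and column layout are illustrative assumptions on my part, not a prescribed format.

import csv
from statistics import mean

def summarise_survey(path="survey_results.csv"):
    """Return the average score (1-5) for each survey question.

    Assumes one row per respondent and one numeric column per question,
    e.g. columns such as 'I feel safe' or 'Staff listen to me'.
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return {}
    return {question: round(mean(float(row[question]) for row in rows), 2)
            for question in rows[0]}

if __name__ == "__main__":
    for question, average in summarise_survey().items():
        print(question + ": " + str(average))

Exactly the same result could be achieved with a pivot table or an AVERAGE formula in the spreadsheet itself; the point is simply that once the data is digital, collation, analysis and sharing become a matter of minutes rather than hours.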

Copyright Leslie Chapman and Therapeia Ltd 2016
1. http://www.cqc.org.uk/sites/default/files/20160523_strategy_16-21_strategy_final_web_01.pdf