Purpose of this book
This short book is essentially a consolidation of my knowledge and experience gained over the last few years within the field of residential and home care services. It is primarily aimed at managers and providers of such services, although I hope it will also be of interest to others working within this field, for example consultants, and perhaps even CQC inspectors themselves. The book’s main aim is to provide managers and providers with a set of ideas that will not only help them gain a better understanding of quality assurance (QA) and how this relates to the question of organisational culture, management and leadership, but also how the Care Quality Commission (CQC) has developed its own approach to QA, which it applies in the form of its monitoring and inspection programme. It also looks at how managers and providers can use this knowledge to ‘stay one step ahead’, in terms of being more proactive in their approach to QA and ensuring they have a service that is not only fit for purpose, but is also of the highest possible quality. To put it another way, and to use the CQC’s own terminology, the book aims to help managers and providers to be fully confident at all times that their services are safe, effective, caring, responsive and well-led.
This book is not a ‘how-to’ step-by-step manual, but rather an attempt to demystify and clarify a number of key concepts within the field of quality assurance, culture and management. In my experience, many providers and managers seem to treat QA as if it’s a matter of audits and surveys, the results of which are then quickly filed away somewhere and forgotten. Furthermore, many providers and managers view the CQC’s role with suspicion and resentment, and as far as they are concerned the main thing is to get the inspection out of the way as soon as possible and then get back to the core business of running a care home or home care service. If they get a positive report with positive ratings that’s all well and good, and if they get a poor report and poor ratings then the focus is on addressing the specific issues highlighted in the report and then quickly moving on. What is often noticeably absent in these situations is any clear understanding of why the CQC monitors and inspects the way it does, and how this relates to the broader quality assurance framework, and the wider questions of culture, management and leadership. There is also often little or no time devoted to examining and analysing the inspection reports in any real detail, beyond taking note of specific issues that need to be addressed before the next CQC inspection.
Summary of the chapters
Chapter 2 is an introduction to the concept of quality assurance and how this relates to the wider question of organisational culture. The core argument here is that QA is a way to ‘map-out’ quality across a service or part of a service, in a formalised, structured way. At the same time, though, it is also a way to ‘map-out’ the culture of a service, which is why this chapter also explores the concept of ‘culture’ in some detail and how it can be changed.
Chapter 3 looks at the role of the CQC in providing a framework of monitoring and inspection for registered services, and argues that the CQC is essentially engaged in ‘high-level quality assurance’. It explains how the CQC ‘breaks down’ quality into five domains and 24 key lines of enquiry (KLOEs), and uses this to ‘map-out’ quality across services. It also highlights the importance of finding evidence to back up any claims made about quality in a particular service.
Chapter 4 looks at CQC reports and how these need to be analysed in some detail, rather than simply looking at the ‘headline’ findings and ratings. Although there is often a temptation amongst providers and managers to focus on ‘fixing’ any problems highlighted in a report, I argue that it is important to ask more searching questions about such findings and what they say about the underlying culture of the service. I also look at the question of ratings, and what they mean; and linked to this, the question of challenging a report.
Chapter 5 explores the question of leadership and management, and why this is so important when it comes to building a quality service. It also looks at how the CQC inspects leadership and management; and the role of leadership and management in defining and sustaining a particular service culture.
Chapter 6 discusses the concept of self-assessment and how this can be a useful way for managers and providers to ‘stay one step ahead’, both in terms of preparing for a CQC inspection and, more importantly, being able to constantly monitor quality in their services. It recommends that providers and managers use the CQC’s own Provider Information Collection system, which is currently being rolled out, as the basis for self-assessment; but that they also use some form of self-assessment tool to gather evidence of quality, based on the CQC’s domains and KLOEs, in a systematic way.
I would like to conclude this chapter by saying a little bit about my own background in relation to quality assurance and organisational culture. I have been working in the health and social care sector for over 25 years, initially as a front-line mental health worker, then moving into the field of policy development and stakeholder engagement, and then working as a consultant in and around the NHS. I have also worked as a senior manager for a not-for-profit organisation that provided advocacy services across south-east England, and for two years worked as an inspector for the CQC. It was this latter experience that led me to return to the consultancy field, this time focusing on supporting providers and managers in the adult social care sector.
Throughout my work I have always had an interest in organisational culture, which I see as the ‘heart and soul’ of any organisation or service. And because I see culture as the product of a set of complex human relationships, both within the organisation and between it and the external environment, I take a particular interest in management, leadership and team dynamics.
2 Quality assurance, quality and culture
What QA is (and isn’t)
In the context of health and social care services, quality assurance (QA) is a formalised method of ‘mapping-out’ quality across the whole or part of the service. The purpose of such a ‘map’ or ‘picture’ is to enable providers, managers and any other interested parties (stakeholders) to have a clearer idea of how safe, effective, caring, responsive and well-led the service is.
At this point, alert readers will recognise that I have already started using the language of the CQC, and I will come back to this shortly. And the use of such adjectives as ‘safe’, ‘effective’, etc, highlights another important facet of QA: in organisational and management terms it is not enough simply to talk about a ‘quality service’ or an ‘excellent service’, unless it is first made clear what these terms are actually referring to. In other words, it needs to be made clear what facets or aspects of the service are being looked at in terms of quality before making such judgements. In the next chapter I will discuss how the CQC approaches this question, but for now we just need to be clear that in terms of quality assurance there has to be some way of ‘measuring’ quality, which inevitably means ‘breaking down’ quality into a number of different ‘metrics’. I use quotation marks deliberately here because in reality it is very difficult (if not impossible) to measure quality in any quantifiable sense, but at the same time we still have to find a way to differentiate different aspects of the service in order to start producing meaningful ‘quality maps’.
In my experience as a CQC inspector, and latterly as a consultant working in the adult social care field, many managers and providers still seem to think of QA in terms of audits and surveys. Now, audits and surveys are very important management tools, but I would argue that they are not the basis of any robust QA system. Rather, within the context of quality assurance, the key questions would be whether such audits and surveys are being carried out on a regular basis in the first place, whether they are relevant, and whether the information that they produce is being acted upon.
Another key thing to remember about QA is that there is no ‘right’ or ‘wrong’ way to do it, and no ‘perfect’ system or methodology to help you conduct QA exercises. Rather, the most important point is to ensure that (i) you have identified which aspect or aspects of the service you want to look at in terms of QA and (ii) you are clear what kind of evidence would demonstrate, both to you and any other interested party, that you do indeed have a service that is fit for purpose, i.e. is safe, effective, caring, responsive and well-led. The question of evidence raises another interesting point about quality assurance: I mentioned earlier that it is very difficult, if not impossible, to ‘measure’ quality, whilst at the same time you need to find a way to ‘map’ it out, to create a ‘picture’ of quality across the service. And this is where evidence comes in: although you are not measuring quality in the sense of gathering numerical data (although in principle this could form part of your evidence), you are looking for material that will help you build a picture of the service with regards to it being safe, effective, and so on.
It is important to realise that evidence comes in all shapes and sizes, and this includes oral testimonies, that is, what various stakeholders tell you (or CQC inspectors) about the service. I mention this in particular because there still seems to be a widely held view amongst managers and providers that such testimonies are somehow an ‘inferior’ type of evidence in comparison with written evidence, for example, care plans, policy documents, and so on. This is simply not true, and the CQC takes such oral testimonies very seriously. This also seems to link with another management myth, which is that what cannot be measured cannot be managed, and in the more extreme version of the myth, does not actually exist. Again, this is simply false; all managers know that in reality a great deal of what goes on in their services, especially when it comes to how people relate to one another, cannot be ‘measured’ in any meaningful way, but is an essential part of the service and its culture. And what’s more, it can be managed, if by ‘managed’ we mean such relationships can be guided, influenced, and to a certain extent, controlled.
Understanding organisational culture
Culture is the heart and soul of any organisation. At the same time, however, culture is something that is notoriously difficult to define, let alone ‘measure’. Although there is no one definitive definition of culture, I would suggest that a good working definition would be as follows:
Culture is the visible manifestation of the basic assumptions, values, relationships, thoughts and feelings that underpin any organisation or service.
Before looking at this definition in more detail it is also important to remember that culture is something that is enacted in the day-to-day practices of the service; in other words, it is something concrete and material, rather than something abstract and ethereal. I stress this because one of the main obstacles to changing culture is the commonly held idea that it is something ‘intangible’ and therefore difficult, if not impossible, to change.
Going back to the working definition, the word ‘visible’ implies that you can actually see a culture. Perhaps we need to qualify this by saying that it is for those with eyes to see! In other words, you need to know what you are looking for in the first place, know what signs to look out for. Signs of what, though? Let’s start with this question of basic assumptions; every organisation, every service, contains a number of basic assumptions regarding what the organisation is in business for, how its members ought to behave and work together, where the service fits within the wider health and social care environment, and how managers ought to relate to their staff teams (and vice versa), to name but a few. One of the important things to recognise about such basic assumptions is that they are often so basic that most, if not all, members of the organisation are not even consciously aware of them. This is why they can be so difficult to challenge; and in order to change a culture, such assumptions do have to be brought out into the open and called into question. It is also important to remember that such basic assumptions may not necessarily be shared across the whole service; in other words, some members of the service may have very different basic assumptions from others. For example, one of the provider’s basic assumptions may be that the core aim of the service is to make a profit through providing high-quality care, whereas some staff members may view the service as providing them with an income so they can put food on the table.
When it comes to the question of values, again these may not necessarily be shared across the service. Edgar Schein, who has written a great deal about organisational culture, talked about ‘espoused values’, and by this he meant the values that are adopted and supported (espoused) by the service. Espoused values are essentially the organisation’s stated values and rules of behaviour. In human terms this is about how members of a particular organisation or service behave towards one another and with those outside of the organisation. Such behaviour can tell you a great deal about what matters to the service. Values can also be expressed formally in mission and vision statements, and in key strategy documents. However, it is important to point out that Schein was actually referring to the values that were adopted and supported by the leadership and senior management of the organisation, whereas in fact others in the organisation may have very different values.
Of course, ultimately any organisation, any service, can only exist through the continual interaction of its members, both internally and with the outside world. Therefore, relationships are a key (if not the key) component of culture. At the same time, though, the way that people within the organisation relate to one another is heavily influenced by their underlying assumptions and values regarding the organisation, which can also influence the way they think and feel about the service and the people they work with, which includes their colleagues, managers, service users, relatives and other stakeholders. Conversely, the way people relate to one another within an organisation can in turn impact on the values, assumptions, thoughts and feelings. In other words, organisational culture is best viewed in terms of a dynamic, interactive process, rather than something fixed.
At this point it is also worth mentioning another aspect of culture emphasised by Schein, and this is what he called artefacts. These tend to be very visible indeed, and are usually very concrete and material as well. They include the actual buildings and architecture of the organisation, the way offices are laid out (for example, individual or open plan), but can also include the way people dress, key documents such as mission and vision statements, and even technological artefacts such as IT systems. They would also include company myths and rituals, especially if these are made visible, for example in conversations between staff members. However, precisely because such artefacts are visible, they need to be seen, in my view at least, as an expression of the underlying assumptions, values, relationships, thoughts and feelings; although at the same time, of course, working within a particular kind of physical environment can in itself impact deeply on how one experiences and thinks about the organisation.
Changing a culture
But is it possible to change a culture? More to the point, is it necessary? The answer to the first question is a definite ‘yes’ – as long as you understand what it is that you are trying to change. The answer to the second question is a bit trickier because you need to understand why the culture needs to change in the first place. And this brings me to the central point about culture: if there are any significant problems with a service, especially with regards to quality, these are almost always a reflection of the culture of the organisation. In other words, if there are questions regarding how safe, effective, caring, responsive and well-led a service is, these are ultimately questions about the basic assumptions, values, relationships, thoughts and feelings of the people who make up the service and its culture.
When it comes to the question of how to change a culture, we need to go back to the idea that what is commonly referred to as ‘culture’ is the effect or manifestation of a complex, dynamic process made up of basic assumptions, values, relationships, thoughts and feelings. So in principle, if it is possible to change, or at least influence, one or more aspects of this process it should be possible to change the culture. To give a fairly simple example within the adult social care field: a high turnover of registered managers within a residential care service could well be indicative of deeper underlying problems within the service. Of course, by the very nature of the problem it is hard, or indeed impossible, to find out from previous managers why they left, but if the current manager is experiencing problems this may provide a clue. If it turns out that the current manager experiences their staff team as ‘hostile’ and uncooperative, then this would help confirm that there is something amiss within the culture of the service.
This example also illustrates another important point regarding culture: the existence of ‘sub-cultures’ within the ‘official’ culture of the organisation. Another way to look at this is to use the idea of ‘social worlds’; any organisation is made up of a set of different ‘worlds’, and often those within one ‘world’, for example, senior management, have no real appreciation or understanding of other ‘worlds’, for example the ‘world’ of staff or even the ‘worlds’ of service users and relatives. Going back to the current example, there may well be a ‘sub-culture’ or ‘social world’ comprising a group of staff members whose main aim is to make it impossible for the manager to do their job properly; in other words, to make the service (and themselves) unmanageable. This phenomenon is, unfortunately for those managers and providers on the receiving end, more common than you might expect.
In terms of how you might address these problems, it is clear that unless the underlying problem (in this case the ‘hostile’ staff team) is properly addressed, the service will simply continue to ‘lurch’ from one manager to another, which will continue to sap staff morale. The first thing would be to explore in some detail why there is such a ‘hostile’ staff team, or at least a ‘hostile’ component. This may be linked to the history of the service; for example, at some point in the past there may have been a serious issue with management – or no effective management at all. Or there may be other factors involved. The key point here is that the only way to get to the root of the problem is to spend some time exploring it, rather than trying to find ‘quick fix’ solutions. And one way to begin such an exploration is to give the staff team a chance to tell their side of the story.
Culture and quality assurance
But where does quality assurance fit into all this? I argued earlier that the purpose of QA is to ‘map-out’ quality across the service or part of the service. But at the same time you are also ‘mapping-out’ the culture of the service, in the sense that the evidence that you use to develop your ‘map’, your ‘picture’, of the service is also telling a story about its culture. I will argue in the next chapter that it is in fact possible to ‘read’ the culture of a service from its CQC report, but for the time being it will suffice to say that the evidence gathered for QA purposes is also a reflection of the basic assumptions, values, relationships, thoughts and feelings that make up the culture of the service. In other words, such evidence tells you something (a great deal in many cases) about the culture.
However, there is also another important link between culture and QA: the fact that the process of quality assurance can itself have an impact on the culture of the service. What I mean by this is that the process of ‘mapping-out’ quality across a particular service can actually change the culture of that service, at least to a certain degree. This is especially true if the introduction of such a QA process is something new, rather than being an established part of the management of the service. The very fact that a manager is taking the time to look for evidence for quality across their service (and hopefully taking the time to analyse and act upon such evidence as well) can change the ‘mindset’ of the manager, and even the ‘mindsets’ of their staff team. The hard reality is that many managers do not make such ‘mapping-out’ exercises part of their management programme, or at least not in any robust, meaningful way. This is partly down to time constraints, but also because QA is simply not seen as a priority. And one of the reasons for this is that many managers, I would argue, simply do not fully appreciate what quality assurance actually is and why it matters so much. Hopefully the reason you are reading this book is because you are not one of them!
3 The CQC’s approach to quality assurance
The CQC and QA
It may not be immediately obvious but all the CQC is doing when it monitors and inspects services is carrying out what is essentially high-level quality assurance. In other words, when the CQC carries out an inspection it is simply mapping-out quality across the whole service, just as any other manager or external agency would do if they were engaged in a QA process. I say it may not be immediately obvious because in my experience many providers and managers still view the CQC with suspicion, and feel that it is trying to ‘catch them out’ in some way. To be fair, I do not think the CQC is particularly helpful in dispelling such an attitude, and in my view is prone to focus a bit too much on the regulatory and more ‘punitive’ aspects of its work. However, before they can make judgements, apply enforcement orders, give ratings, and so on, they first have to evaluate the service, and this is what I mean by ‘high-level quality assurance’.
Just to complicate things even further, though, the CQC specifically highlights the issue of quality assurance within its ‘Well-led’ domain. The key line of enquiry W4.2 asks the question:
How effective are quality assurance, information and clinical governance systems in supporting and evaluating learning from current performance? How are they used to drive continuous improvement and manage future performance?
This could be interpreted as suggesting that QA is something that has to be addressed only within this particular domain; and in my experience some providers and managers appear to read it in this way. However, the way I read it is that the CQC is simply reminding providers and managers that QA is a core management responsibility, rather than being simply an ‘add-on’ to the service.
The CQC’s quality assurance process
Given that the CQC is engaged in some form of quality assurance process, the next question is: how do they go about this? The first (and perhaps most important) thing to point out here is that there is a lot more to the CQC’s work than inspections. Although inspections are perhaps the most visible part of the CQC’s role from the provider’s perspective, they form only part of its much wider monitoring and regulatory function.
A critical part of the CQC’s monitoring function is its Insight Model, which, according to the CQC’s website:
- incorporates data indicators that align to our key lines of enquiry for that sector
- brings together information from people who use services, knowledge from our inspections and data from our partners
- indicates where the risk to the quality of care provided is greatest
- monitors change over time for each of the measures
- points to services where the quality may be improving
Monitoring is also about maintaining relationships with providers, and receiving regular feedback from various stakeholders, including staff, service users, relatives, and visiting health professionals. There are also plans to develop the existing online Provider Information Return (PIR) system into a form of real-time service monitoring (called Provider Information Collection or PIC), although at the time of writing (October 2018) this has yet to be fully rolled out. The key point about all this is that the CQC seems to be moving towards a system of continuous, real-time quality assurance, with the actual inspections being only one (albeit important) part of this process.
Having said this, the inspections are still a key part of the work of the CQC, and as I mentioned earlier, are probably the most visible aspect as far as providers, managers and staff are concerned. As you probably already know, there are two types of inspection: comprehensive (also known as scheduled), and focused. Comprehensive inspections are the ones most providers and managers are familiar with, which involve an inspection team visiting the service over a period of one or more days and taking an in-depth view across the whole service using the five domains (‘Safe’, ‘Effective’, ‘Caring’, ‘Responsive’ and ‘Well-led’). These are usually unannounced, although advance notice may be given in certain situations, for example for home care services with only a small office and a small staff team, or even for a small residential service.
As I mentioned earlier, although it is not really possible to ‘measure’ quality in any quantifiable way, it is necessary to find a way to ‘break down’ elements of the service to allow quality to be ‘mapped-out’, which is the essence of any decent quality assurance system. As you probably already know, the CQC uses a system of ‘domains’ and ‘key lines of enquiry’ (KLOEs) to facilitate this ‘mapping-out’ process. Just to remind you, the five domains are: ‘Safe’, ‘Effective’, ‘Caring’, ‘Responsive’, and ‘Well-led’. These can also be thought of as five questions: is the service safe, effective, caring, responsive and well-led? And then there are the 24 KLOEs, which effectively ‘break down’ the domains into more manageable elements. At this point I should point out that it often appears that there are many more ‘sub-KLOEs’ (122 in total), for example S1.1, S1.2, and so on. However, technically speaking these are prompts to help the inspection team decide what to look for in terms of evidence for each KLOE. A complete list of the KLOEs and prompts for adult social care services can be found here.
Clearly, having a system of domains, KLOEs and prompts is only helpful if it can facilitate the collection of evidence; evidence that shows the CQC, the service manager, or any other interested party, that the service is up to scratch and fit for purpose. As I indicated in the previous chapter, evidence comes in all shapes and sizes, and there is no ‘right’ or ‘wrong’ evidence. Having said that, though, some evidence is more powerful than others; and of course it is no use producing evidence which has nothing to do with the particular aspect or domain of the service you are interested in. So, for example, having all your care plans, risk assessments and reviews up to date is all well and good (and essential), but is not evidence that you operate a caring, or even (necessarily) a responsive, service.
When it comes to the strength of the evidence, one way to obtain ‘strong’ evidence is to use corroboration. What this means in practice is to ensure that for each KLOE you look for several different types of evidence, and this is what the CQC inspection will endeavour to do if at all possible. For example, KLOE S1 asks the question: How do systems, processes and practices safeguard people from abuse? One useful piece of evidence for this would be to look at how many staff have been on safeguarding training, and how up to date this training is. If you have a training matrix (definitely a good idea!) then you should be able to locate this information very easily. However, it would also be useful to look at the service’s safeguarding policy and procedures, and to check that these are both up to date and comprehensive enough to ensure that all staff understand the concept of safeguarding and the steps they need to follow in order to implement it. But it would be even better if you could also find out how staff members actually approach the question of safeguarding in their everyday practice, and one way to do this would be to interview them in a group, or to raise this in a staff meeting. Certainly when I was an inspector I would always include safeguarding as a topic of conversation in the staff group interviews that I conducted; and this would include looking at a number of safeguarding scenarios.
Hopefully you can see from the above example that all this evidence is relevant to the KLOE in question, and taken together it is more robust than as separate elements. Of course, this is based on the assumption that all the evidence is ‘positive’; in this example this means that there is evidence that all staff have attended safeguarding training, and have kept this updated; that the safeguarding policy and procedures are comprehensive and up to date; and that staff are able to explain how they would deal with specific safeguarding issues.
But this still raises the question as to what counts as ‘good’ evidence (or any evidence at all for that matter). Helpfully, the CQC themselves have published a pretty comprehensive document detailing different sources of evidence, a version of which I have posted here.
4 CQC reports
(De)constructing the reports
A CQC report, and more especially its ratings, can make or break a service. It is a public document which can be used not only by prospective residents and their relatives to make a judgement in their search for a suitable care home or service, but also by banks when making lending decisions. And, of course, other interested parties, for example local authorities, read such reports very carefully. In view of this, it is somewhat surprising, at least in my experience, how ‘passive’ many providers and managers are when confronted with a not-so-good report; and equally worrying, perhaps, how complacent they can be when they receive a positive report. Although it is perfectly understandable that in the event of an ‘Inadequate’ or ‘Requires Improvement’ report, the focus is on addressing the issues highlighted in the report, it is still worth bearing in mind that such reports (and their ratings) do not tell the full story of a service, and that they are in fact the product of a complex process of evidence gathering, collation, analysis, and discussion. And it is also important to remember that reports can be challenged, which I will come back to in a moment.
Another important point to remember regarding CQC reports is that although, as I mentioned just now, they do not tell the full story of the service, they do in fact give a useful overview, a snapshot of the service as it stands at the time. What such reports do not tell you, however, is why the service is the way it is at this moment in time, and perhaps even more importantly, why there are specific issues that have come to the attention of the inspection team. The only way to answer these kinds of questions is for the manager and/or provider to take a closer look at the service, using the report as a guide, or to go back to my favourite metaphor, as a map.
For some reason, though, this is often not how a manager or provider responds to a report, be it negative or positive. In the case of a negative report, as I indicated above, the response is usually to focus on addressing the issues highlighted in the report, rather than asking more searching questions about the service, its management, staff team and culture. Although understandable at one level, this ‘quick fix’ type of approach is only storing up problems for the future, as many providers have found to their cost when, following a negative report, the CQC returns six months later only to find that little or nothing has changed.
In the case of a more positive report, for example one with a ‘Good’ or ‘Outstanding’ overall rating, the real danger here is one of complacency. It is important to remember that as far as the CQC is concerned, ‘Good’ is the benchmark; this is what they would expect any service to be like. In fact, I think there is a strong case for arguing that ‘Good’ as it now stands should be replaced with something more along the lines of ‘Adequate’, so that there are in fact five levels of rating, rather than the existing four. One of the real issues with the ‘Good’ rating is that the CQC will not re-inspect such a service for another eighteen months or so – and a great deal can go wrong in eighteen months! Therefore, there is no room at all for complacency, which is another reason why quality assurance needs to be an ongoing process.
What I’m suggesting here instead is that you use CQC reports as a starting point for further exploration of your service, regardless of how positive or negative the report is. And one thing you might like to consider is to engage in an exercise of ‘reverse engineering’. By this I mean thinking about how the inspection team came to the conclusions that they detail in the report. And in order to help you with this process it is important to remember what I mentioned earlier regarding evidence. Ultimately the findings of the report are based on evidence, nothing else. The domains and the KLOEs help guide the inspection team in their search for evidence, but they cannot come to any conclusions without the evidence to back these up. So think about what kinds of evidence the CQC must have found (or in some cases, not found) that led them to their findings published in the report. Were you aware that such evidence existed? One of the comments you often find in a negative CQC report is that the manager was not aware of the issues highlighted by the inspection team. Again, this makes the strong case for ongoing and meaningful quality assurance; there should not be any unpleasant surprises in the report.
Another useful exercise is to try and ‘read’ the culture of the service from the report. And by this I do not simply mean looking at what the CQC says about this aspect of the service in their report. Just to give a simple example: if a report describes care plans that are not up to date, gaps in staff supervision and training, and residents who have to wait a long time for their meals, what does this tell us about the culture of the service? Of course, it could be telling a number of different ‘stories’, including staff shortages, incompetent management, a failure of quality assurance, and so on. The key point here, though, is that the first question any manager or provider should be asking when they read such a report is: ‘what does this say about my service?’ rather than: ‘how do I fix this?’, which as I suggested above, is often the first question most managers and providers ask when confronted with a negative CQC report.
Of course, ‘how do I fix this?’ is a very important question in itself, but you first need to reflect upon what it is precisely that you are trying to fix in the first place. Going back to the example above, if it becomes clear that most, or even all, of the problems identified here are due to bad management then this requires a very different ‘fix’ than if it’s a matter of chronic staff shortages, which may in turn be a reflection of stiff competition from other providers in the local area who are perhaps paying higher wages or have more attractive working conditions.
So at the very least CQC reports should act as prompts for managers and providers to ask some very searching questions about their service. And yet, as I indicated above, this rarely happens – or at least appears not to. Of course, there are a number of reasons for this reluctance to ask too many difficult questions about the service one is responsible for, which again may say a lot about the culture…
However, it might be helpful to start with a more obvious point: CQC reports are not particularly helpful when it comes to providing some kind of ‘deep analysis’ of the service they are describing. Or it might be more accurate to say that these reports do not aim at providing such an analysis in the first place, or at least not explicitly. One of the problems with the CQC’s inspection methodology (the five domains, the KLOEs, etc) is that whilst it does a fairly reasonable job in ‘mapping-out’ quality across the whole service, this tends to be only at a fairly superficial level.
On the other hand, and like any map, the ‘quality map’ that is produced by each CQC report invites further exploration, encourages one to ‘home in’ on a particular part of the quality ‘landscape’ to get a clearer idea of what might lie hidden. But as with any map, one needs to know how to ‘read’ it in the first place, to know what all those symbols (words, terms, phrases) really mean. It’s also important to understand how such ‘maps’ are constructed in the first place and for what purpose.
But if it’s that difficult, you might ask, why bother? After all, why not just ‘fix’ the problems highlighted in the report and wait for the next inspection? Although this is a perfectly rational position, and one adopted by many providers who have received negative CQC reports, it has at least two potentially fatal flaws. Firstly, and as I mentioned earlier, you might simply end up ‘fixing’ entirely the wrong things, which probably helps explain why not every ‘Inadequate’ or ‘Requires Improvement’ service receives a better rating when the CQC returns to do a re-inspection. Secondly, the specific issues highlighted in any particular report are often only the tip of the iceberg in terms of what they say about the underlying culture of the service. In other words, even if you do manage to ‘fix’ specific problems, this does not mean that you’ve ‘fixed’ the culture, and if so you are simply storing up further problems for the future.
The question of ratings
For many managers and providers, the CQC ratings for their service are what really count. This is totally understandable but I still think it is worth remembering that these ratings are essentially a ‘shorthand’ for judgments about the quality of the service. In other words, knowing that a service has, for example, a ‘Good’ rating overall means that interested parties, including prospective residents and lenders, can be confident that there are no serious issues with any part of the service. On the other hand, a ‘Requires Improvement’ rating tells them that there are problems with the service, and that they should at least be finding out a lot more about it before making any commitments.
But what do these ratings actually mean? According to the CQC guidance, the ratings are based on an assessment of the evidence gathered using the key lines of enquiry I discussed earlier. To quote directly from the CQC guidance:
Inspectors refer to the ratings characteristics for each key question and use their professional judgement to decide on the rating, drawing evidence from four sources of information:
- our ongoing relationship with the provider
- ongoing local feedback and concerns
- pre-inspection planning and evidence
- gathering evidence from the inspection visit
A key point to note here is that the CQC is not just relying on evidence collected during the inspection, but also on other types of feedback, including the relationship they have with the provider. As the CQC moves towards a more interactive and real-time system of monitoring, based on the concept of Provider Information Collection mentioned earlier, I suspect this ‘non-inspection’ type of evidence will become increasingly important.
The guidance also points out that a service does not have to demonstrate every rating characteristic for it to be given a rating. For example, even if the inspection team find just one characteristic of ‘Inadequate’, and this has a significant impact on the quality of care, then the service will get a rating of ‘Inadequate’. As I pointed out earlier, the CQC takes ‘Good’ as their baseline, and then makes a judgement as to whether the evidence exceeds this standard, or falls below it. In the former case, we could then be looking at an ‘Outstanding’ rating, whereas in the latter case, we could be looking at a ‘Requires Improvement’ or even an ‘Inadequate’ rating.
Challenging a CQC report
I mentioned earlier that CQC reports can be challenged. There are a number of ways to do this and the first is relatively straightforward. Before the report is published a draft copy will be sent to the provider for factual accuracy checking. The provider then has ten working days to respond. The term ‘factual accuracy’ is often interpreted as meaning that reports cannot be challenged beyond correcting obvious inaccuracies, for example, stating there were 25 residents in the service when in fact there were 35. However, the CQC’s guidance makes it clear that this is an opportunity for service providers to inform them:
- where information is factually incorrect
- where our evidence in the report may be incomplete
The CQC goes on to explain that:
The factual accuracy process gives inspectors and providers the opportunity to ensure that they see and consider all relevant information that will form the basis of CQC’s judgements.
The process also enables providers and inspectors to consider all relevant information that contributes to our judgements.
In other words, you can say pretty well what you like and provide as much additional information as you think fit. Again, though, as with the report itself, all this additional information must be backed up with evidence. The CQC provides a Factual Accuracy Check Form, which has recently been criticised because there is a limit of 975 characters for each point. However, the CQC makes it clear that if you need extra space you can simply move to a new line or add extra lines on the form.
Even if you are unsuccessful in getting the report amended, and perhaps even more importantly, the rating changed, you should always challenge at this stage if you think the report is an unfair representation of your service. The reason for this is that if you then decide to mount a legal challenge, which some providers do, your chances of success will be extremely slim if you failed to make a challenge at the factual accuracy stage. The point here is that if you did not make an initial challenge, however unsuccessful, it will appear that you were satisfied with the draft report as it stood. With regards to mounting a legal challenge, there are an increasing number of law firms specialising in this field, and you should be able to locate one in your area with a quick Internet search or through your local support network.
Why leadership and management matter
It may be pointing out the obvious, but good management and leadership, coupled with robust systems of governance, are key to the overall performance and quality of any service. And this is borne out by the CQC’s latest State of Care report. To quote directly from the report:
Leadership and governance are key factors that underpin quality. Effective governance systems make it easier for senior leaders to monitor quality and risk, and may help to insulate services from external pressures or unexpected changes, such as a key member of staff moving on. A lack of monitoring and oversight can quickly lead to problems with care delivery. (p.63)
Furthermore, an analysis of the CQC’s own ratings data shows that in adult social care services there is a strong correlation between the ratings figures for ‘Well-led’ and the overall rating figures. In other words, services that have positive ratings for ‘Well-led’ tend to have positive ratings for the other four domains (‘Safe’, ‘Effective’, ‘Caring’ and ‘Responsive’). And, needless to say, the reverse is also true: those services with a negative rating for ‘Well-led’ tend to do badly across the other domains as well, although for some strange reason the correlation is weaker than it is for those which are rated positively.
At this point I think it is important to note that within the CQC’s quality assurance framework, ‘leadership’ effectively equates with management, bearing in mind that in principle there can be ‘leaders’ in any part of the service and within any part of the staff team. But the term ‘management’ should also be treated with caution here: this does not just refer to the registered manager, but also to others in a management role, including deputies and heads of care. However, in most cases the buck tends to stop with the registered manager; which is why, unfortunately for the individuals involved, when a service receives a negative report and rating, the first thing the provider tends to do is to replace the manager.
It may seem unfair to blame the registered manager when things go wrong in a service, but if it’s not the manager’s responsibility then whose is it? Of course, in reality things are never quite that simple, because as every manager will tell you, they are very much reliant on their staff teams to work with rather than against them, and to have the full support of the provider or their own line manager. Unfortunately, all too often this is not the case. There are many stories of uncooperative or even hostile staff teams who make their manager’s life unbearable, and of providers who can be very lukewarm in their support of their registered manager. On the other hand, of course, you might argue that a good manager should be able to deal with such problems, although it does help if their own line manager and/or provider is fully behind them.
Another key point about management, and the registered manager in particular, is that they are often the ‘visible’ part of the service when it comes to interacting with the ‘outside world’. In other words, the manager is often the first point of contact with the service for any prospective service users and their relatives, or any other interested parties, including, of course, the CQC itself. And it is also the case that management is, to a large extent, responsible for the culture of the service; I will say more about this shortly.
The CQC’s approach to leadership and management
As you are probably already aware, the CQC tends to look at leadership and management within the ‘Well-led’ domain of their inspection framework. Within that domain there are five key lines of enquiry, which look at: vision, strategy and culture; the governance framework; stakeholder engagement; organisational learning; and partnership working with other agencies. However, I think it is important to remember that leadership and management impact on all five domains, and as already indicated, problems in any domain can often be traced back to issues with leadership and management.
And this raises a more general point about how the CQC ‘maps out’ quality across the service. I think there is often a perception amongst providers and managers, especially when it comes to reading CQC reports, that issues identified within a particular domain (‘Safe’, ‘Effective’, etc) are somehow ‘contained’ within that domain. So, for example, if a report highlights issues with safeguarding within the ‘Safe’ section, it may be tempting to think that this is not an issue that impacts on any of the other domains, or indeed is impacted upon by the other domains. But clearly this cannot be the case. Sticking with the safeguarding example for a moment, if there are safeguarding issues, what does this say about the quality of care in the service? And if these issues are longstanding, what does it say about the responsiveness of both the manager and the staff team? Furthermore, why has this gone unnoticed by management?
As I hope you can see, problems cannot be ‘contained’ within a particular domain (or a particular section of the CQC report either). I highlight this issue because I think CQC reports often read as if such issues are, in some way, ‘compartmentalised’, which does not help managers and providers understand the real nature of the problems. So perhaps the best way to read any report is to assume that wherever the issues are ‘situated’ within the body of the report, they are still ultimately management and leadership issues.
Going back to the ‘Well-led’ domain and its five KLOEs, in many ways they do encapsulate the overall responsibilities and key functions of management. It is the responsibility of management to ensure that the service has a clear vision and credible strategy; it is the responsibility of management to promote a person-centred, open, inclusive and empowering culture; it is the responsibility of management to ensure the service learns from its mistakes/complaints and is striving to continually innovate and improve, and so on. With regards to organisational learning, that is, learning from mistakes and complaints, and striving to continually improve and innovate (W4), this is where quality assurance becomes a major consideration. To put it simply, the service (and particularly management) has to have systems and processes in place that allow them to receive continual feedback, to be able to continually monitor the service, and to be able to identify and address problems as they arise. I will come back to how this can be achieved in the next chapter, but for the moment I just want to remind you how central good management and leadership is to the quality of the service.
Leadership and culture
In Chapter 2 I explored in some detail the nature of organisational culture, how culture relates to quality assurance, and how it is possible to change a culture – but only if its nature is fully understood in the first place. Not surprisingly, perhaps, management has a central role in both defining the culture of a particular service, and in managing it (insofar that culture can be ‘managed’). Certainly managers, and in many cases the providers themselves, set the ‘tone’ of the service, and promote its core values. As I said in that chapter, such values can be formalised and expressed in core company documents, for example, mission and vision statements, and key strategy documents.
Of course, and as I pointed out earlier, one of the main challenges for managers is that their values regarding the organisation may not be shared by those they manage. In fact, often staff teams see things very differently, and may even have a completely different view of what the service actually is to that of the management and the provider. Just to give a simple example: many staff members may experience the service in terms of their direct relationships with their service users and with their colleagues. At the same time they may also be focused on ensuring they earn enough money to keep themselves and their families. This is in stark contrast to the experience of the manager and the provider who have to have one eye at all times on, amongst other things, occupancy levels, the CQC, the local authority, and, in some cases, shareholders.
This is where the concept of ‘social worlds’, which I touched upon earlier, can be very useful. Rather than viewing the service as one, monolithic entity, with one, monolithic culture, it is better to think about it in terms of a number of different ‘worlds’. Not only are there the ‘worlds’ of management and staff, but also those of service users, relatives, visiting health and social care professionals, commissioners, the CQC, and so on. This recognition can be especially helpful when it comes to managing change within the service, which may often be necessary after a particularly damning CQC inspection. There are statistics which suggest that 70% of change management projects end in failure, and one reason for this is that those leading the changes, who are often senior managers, fail to recognise that not everyone in the organisation sees things the way they do, and that many have a vested interest in maintaining the status quo.
The key point about recognising these different ‘social worlds’ is that although they may be different, they also intersect at some critical points. One of these points of intersection is that, ultimately, everyone in the service has a vested interest in its survival, because otherwise there will be no service, and therefore no jobs. This may be stating the obvious but sometimes those individuals who appear intent on making life difficult need reminding they have as much to lose as anyone else. Another point of intersection is that most members of the service, and particularly relatives, management and staff, all want the best for their service users. So again, it is worth reminding everyone that it is in everybody’s interest that the service does well and provides the best quality care to those who use it.
Going back to the KLOEs for ‘Well-led’, KLOE W1 makes specific reference to the idea of a “positive culture that is person-centred, open and inclusive.” W3 looks at how various stakeholders, including staff, are engaged and involved. This idea of inclusion and involvement is central to ensuring not only a healthy culture, but can also facilitate change when necessary. In other words, the more staff (and service users and relatives) are involved from the beginning in key decisions about the service, the easier it is to keep them on board as the service evolves.
6 Self-assessing your service
The question of self-assessment
The overall theme of this short book is ‘staying one step ahead’. This can be read in at least two ways. Firstly it is about staying one step ahead of the CQC itself, in the sense of having a clear idea of the CQC’s role so that you can be better prepared for CQC inspections, have a better understanding of how they monitor services, and how to interpret, and if necessary, challenge, CQC reports. However, ‘staying one step ahead’ can also be read in terms of ensuring that you are not caught out by any unpleasant surprises within your service; in other words, ensuring that you keep a close eye on all aspects of quality across your service, and are able to address any problems as and when they arise. And in many ways this second way of looking at the concept of ‘staying one step ahead’ is more important than the first, because if you already have a clear picture of how well your service is doing in terms of quality then, in theory at least, there should be no nasty surprises when the CQC arrive.
But how can you ensure that your service is up to scratch at all times? And, just as importantly, how can you ensure you have the evidence to prove it? One way to do this is to carry out regular self-assessments, which in some services are dubbed ‘mock inspections’. Personally, I prefer the term ‘self-assessment’ because the term ‘mock inspection’ tends to be associated with preparing for a CQC inspection, whereas self-assessment should, in my view at least, be an ongoing process. But whichever term you prefer, the aim is to ensure that you are continually monitoring your service, and, where necessary, addressing any problems as they arise.
With larger providers, who may have a number of services, a head office, and even a number of regional offices, the self-assessment process often takes the form of quarterly or even monthly ‘inspections’ by a regional manager, or a manager from another service. Some providers even have a dedicated quality assurance team who carry out this function. With smaller providers, perhaps with only one or two services, self-assessment often becomes the responsibility of the service manager and/or the provider themselves. However, the principle is still the same: there needs to be a comprehensive ‘mapping’ of the service with regards to its quality. And ideally, this ‘mapping’ exercise should be carried out on a regular basis.
Utilising the CQC’s own methodology
Of course, the question then arises: how can such a regular ‘quality mapping’ be best achieved? Some providers have their own, ready-made systems, with their own ‘metrics’, that is, their own way of ‘breaking down’ the service in order to collect evidence of quality. However, bearing in mind the CQC already has its own framework and methodology for gathering such evidence, it would seem to make perfect sense to start using this instead.
I mentioned earlier that the CQC is starting to move towards a system of ongoing, real-time monitoring, Provider Information Collection (PIC), which is a development of their existing Provider Information Return (PIR) system. Although at the time of writing this is still in its early stages, undoubtedly this is the way the CQC will gather service information in the future. Although I refer to this as a ‘real-time’ system, at the moment the CQC only requires providers to submit information at least annually. However, there is nothing to stop providers submitting new information or amending existing information on a continuous basis, and I would certainly encourage them to do so.
At this point it is important to point out that PIC is not a substitute for the CQC inspection process, with its domains and KLOEs. Rather, as with the Provider Information Return, it is a way of gathering key pieces of information about a particular service, which can then be used to help inform the inspection itself. In other words, information submitted by the provider through the PIC portal can provide the inspection team with an overview of the service at any particular point in time, but it is only through an inspection ‘on the ground’ that a true picture of the service can emerge.
Having said that, when it comes to carrying out self-assessments, PIC is still a good starting point because in order to provide the information required, the manager or provider will still need to be mindful at all times of the evidence that substantiates what they submit to the CQC. What I mean by this is that whatever information you supply to the CQC you may (and quite possibly will) be asked to provide evidence to back it up – either at the time of submission or at the next inspection. Just to give a simple example: one of the questions asked by the CQC on the PIC form is ‘How many people in total are currently employed by the service?’ Let’s say that your answer to this is 55. At the next inspection, one of the inspection team notes from your staffing records that only 40 staff are currently employed. How would you explain what is quite a significant discrepancy? What if you are a new manager and had somewhere along the line heard the number 55 quoted and used this in the PIC, but had never bothered to check this yourself? Needless to say, the inspection team would not be impressed if you gave this response!
What I’m suggesting is that once the PIC system is fully up and running, you update and check this every time you have some new information to add, or need to amend existing information. And while you are doing this you can also check you have up-to-date evidence that backs up the information that you supply to the CQC. This way, you can be confident that when the CQC do next inspect your service you have all the relevant and up-to-date evidence at your fingertips. You can view the PIC questions here.
Of course, gathering evidence can be very time consuming, especially when you are looking for at least three pieces of relevant information for each key line of enquiry (72 in total) as I suggested earlier. Furthermore, unless you are an ex-CQC inspector yourself, or have a background in quality assurance, figuring out what to look for and how to interpret it can be quite difficult. However, there are ways to make this process easier and less time consuming, and one approach is to use a self-assessment tool that allows you to record all the evidence in a systematic fashion, using the CQC’s own methodology.
You can either design such a tool yourself using a spreadsheet in Excel or similar package, or you can purchase one from a number of companies who specialise in supporting providers with their CQC inspections and related quality assurance issues. There are also a number of consultancies who can come and conduct CQC ‘mock inspections’ on your behalf, which are essentially another form of self-assessment. Whichever option you choose, it is important to bear in mind the following:
- Although there are paper based self-assessment forms available you are far better off using an electronic version, and especially one that is based on a spreadsheet. The reason for this is that this makes it much easier to share the information with other interested parties, including, of course, the CQC itself. It also means that different people can input information at the same time if the tool is set up on a shared system.
- It is very helpful to record not only the details of the evidence itself, but also the relevant KLOE number; the source of the information, e.g. care plans, training records, speaking with residents, etc; the last time the information was updated (if known); the location of the information, e.g. in a particular filing cabinet or in a particular computer folder; and any additional information that might be relevant, e.g. what actions you will take to address gaps in training, etc.
- You, or anyone else acting on your behalf, should never attempt to ‘rate’ your service. This is because ratings are the product of a complex process of discussion amongst the inspection team in conjunction with the CQC’s documentation on ratings characteristics. Therefore you can have no idea if any ‘rating’ that you come up with reflects in any way what the inspection team might decide upon.
Please click here to see an example of a self-assessment tool set up as a downloadable spreadsheet.
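If you do decide to build your own tool rather than purchase one, the fields suggested in the bullet points above can be sketched very simply in code. The following is purely an illustrative sketch of my own: the column headings, the KLOE code ‘W4’, and the file name are my suggestions based on the points above, not an official CQC format, and a real tool would more likely live in a shared spreadsheet than a script.

```python
import csv

# Suggested columns for a self-assessment evidence log. These headings are
# illustrative only, drawn from the bullet points above -- not a CQC template.
COLUMNS = [
    "KLOE",           # the key line of enquiry the evidence relates to, e.g. "W4"
    "Evidence",       # brief description of the evidence itself
    "Source",         # e.g. care plans, training records, speaking with residents
    "Last updated",   # when the evidence was last refreshed, if known
    "Location",       # e.g. a particular filing cabinet or computer folder
    "Actions/notes",  # e.g. steps planned to address any gaps identified
]

def create_template(path: str) -> None:
    """Write an empty self-assessment log (CSV) with the suggested columns."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerow(COLUMNS)

def add_evidence(path: str, entry: dict) -> None:
    """Append one piece of evidence, leaving any missing fields blank."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([entry.get(col, "") for col in COLUMNS])

if __name__ == "__main__":
    create_template("self_assessment.csv")
    add_evidence("self_assessment.csv", {
        "KLOE": "W4",
        "Evidence": "Quarterly complaints analysis discussed at staff meeting",
        "Source": "Complaints log; staff meeting minutes",
        "Location": "Manager's office, QA folder",
    })
```

A CSV file produced this way can be opened directly in Excel or a shared online spreadsheet, which keeps the benefit noted above of several people being able to review or add evidence.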
Staying one step ahead
The main aim of this short book is to help managers and providers of residential and home care services to take more control over the whole quality assurance process, and especially within the context of CQC monitoring and inspection. However, I hope that one of the key messages to emerge from this book is that QA is a means to an end, and so, for that matter, is the CQC itself. Primarily, QA is about helping to create a culture of quality and a ‘quality mindset’ by ‘mapping-out’ quality across the service. CQC monitoring and inspections are simply one (albeit very important) aspect of this process. What matters far more is that you adopt a particular way of thinking about QA and find the time to put this way of thinking into practice on a continuous basis. Perhaps the best way to think about how the CQC relates to this process is to see it as providing an external check to help ensure that you keep on track with your own, internal quality assurance programme.
I have placed a great deal of emphasis on the importance of culture in relation to QA because in my experience this dimension of quality assurance, and management in general, is often misunderstood and downplayed. Although a great deal of lip-service is paid to the importance of culture, especially when there is a high-profile scandal (for example, Winterbourne View), the sad reality is that in most cases very little appears to change. And one reason for this, I would argue, is that the term ‘culture’, when applied to organisations, is understood as something rather nebulous or as some form of ‘add-on’ to the organisation, rather than being the core of the service.
As I have pointed out in this book, it is perfectly possible to change the culture of a service, but in order to do this you first have to understand what it is you are trying to change. For the purposes of this book I have defined culture in terms of a complex, dynamic relationship made up of basic assumptions, values, relationships, thoughts and feelings. Any attempt to change the culture of a service therefore has to focus on this relationship and its elements. Depending on which aspects of this relationship you are particularly interested in, there are a number of different approaches and techniques for exploring and addressing any specific problems in these areas. The example I gave in Chapter 2 looked at the problem of a ‘hostile’ staff team, and I suggested that one way to begin to address such a problem would be to spend some time listening to the staff team and finding out more about where the ‘hostility’ originates.
Although an understanding of organisational culture is fundamental when it comes to addressing issues of quality within a service, the existence of the Care Quality Commission is a reality that has to be taken into account by providers and managers. As I suggested above, one way to think about the role of the CQC is as an ‘external check’ to make sure you are staying on the right track with regards to providing a quality service. And although the CQC is most ‘visible’ in terms of its inspections, I pointed out that it has a much wider monitoring role, and there is currently a move to make this more of a ‘real-time’, ongoing process, with Provider Information Collection (PIC) forming an important element of this process.
Along with culture, management and leadership are the key factors in determining whether an organisation succeeds or fails. Within the context of adult social care and the CQC’s monitoring framework, there is a direct correlation between the quality of leadership and management and the overall quality of the service. Furthermore, management plays a key role in defining and sustaining the particular culture of a service. It is also important to recognise that although the CQC specifically focuses on management and leadership within its ‘Well-led’ domain, in reality issues concerning management and leadership impact on all five domains, and right across the service.
And coming back to the question of taking control and staying one step ahead, this is really about managers and providers being more proactive when it comes to monitoring quality in their own services, rather than simply waiting for the next CQC inspection team to come along. One way to facilitate this is to adopt a structured form of continual self-assessment, preferably based on the CQC’s own methodology. I suggested that using the CQC’s PIC system on an ongoing basis would be a useful starting point, rather than simply submitting information on an annual basis (the current minimum requirement). And the key point about using PIC is to ensure that you can evidence every piece of information that you submit to the CQC, preferably from at least three different sources. This will get you into the habit of thinking in terms of ‘evidence-based QA’ (for want of a better term), rather than simply making vague, unsubstantiated assertions with regards to how safe, effective, caring, responsive and well-led your service is. There are a number of ‘tools’ available to help you with this evidence-gathering process, or you can build your own using a simple spreadsheet.
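For readers who like to build their own tools, the ‘simple spreadsheet’ approach described above can be sketched in a few lines of code. The following Python sketch is purely illustrative: the five domains are the CQC’s own key questions, but the claims, evidence sources and column headings are invented examples, not CQC terminology. It logs each claim about the service alongside its supporting evidence, flags any claim backed by fewer than three sources, and writes the result out as a CSV file that opens in any spreadsheet program.

```python
import csv

# The five CQC domains (key questions) referred to throughout this book.
DOMAINS = ["Safe", "Effective", "Caring", "Responsive", "Well-led"]

# Illustrative entries only: each claim about the service is logged against
# a domain, together with the pieces of evidence that support it.
evidence_log = [
    {"domain": "Safe",
     "claim": "Medication errors are identified and acted upon",
     "evidence": ["Medication audit Jan 2019",
                  "Staff meeting minutes",
                  "Incident log review"]},
    {"domain": "Caring",
     "claim": "Residents are involved in their care plans",
     "evidence": ["Care plan reviews"]},
]

def weakly_evidenced(log, minimum=3):
    """Return the claims supported by fewer than `minimum` sources."""
    return [entry for entry in log if len(entry["evidence"]) < minimum]

def save_log(log, path="evidence_log.csv"):
    """Write the log out as a spreadsheet-compatible CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Domain", "Claim", "Evidence sources", "Source count"])
        for entry in log:
            writer.writerow([entry["domain"], entry["claim"],
                             "; ".join(entry["evidence"]),
                             len(entry["evidence"])])

# Flag any claim that falls short of the three-source rule of thumb.
for entry in weakly_evidenced(evidence_log):
    print(f"Needs more evidence ({entry['domain']}): {entry['claim']}")
```

The point of the sketch is the habit it encourages, not the code itself: every assertion about the service gets a row, and any row with fewer than three evidence sources is a prompt for further work before it goes anywhere near a PIC submission.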
Although the issues that I’ve raised in this book may seem quite daunting to many managers and providers, luckily there is a great deal of support and help available. With regards to all things CQC, the obvious place to start, of course, is the CQC’s website itself. Here you will find plenty of guidance material on the CQC’s inspection and monitoring process, as well as the latest news and updates regarding the work of the CQC. And, of course, there are plenty of inspection reports to look at. I recommend studying these in some detail because, as I’ve suggested in this book, it is possible to learn a great deal about the culture of a particular service by reading its CQC report. Furthermore, each service should have its own, dedicated CQC inspector allocated to it, and they can be an invaluable source of information and support.
There are also many quality assurance services and consultancies that can offer support with QA and CQC issues. Each will have its own particular set of interests and specialisms. Perhaps not surprisingly, my own consultancy, Therapeia Ltd, focuses on quality assurance and the CQC within the wider context of culture, management and team working.
If you are fortunate enough to have your own quality assurance department within your own organisation, or even have a designated QA officer/manager, then they should be able to advise you on how best to go about addressing issues such as self-assessments, preparing for a CQC inspection, submitting information for PIC, and so on.
Finally, think about joining and participating in local, regional and national networks of managers and providers. These can be both ‘physical’, in the form of regular meetings, workshops and forums; and ‘digital’, for example, Facebook groups for care home managers, webinar conferences, and so on.
Copyright Leslie Chapman and Therapeia Ltd 2019