Monday, 17 August 2009

Books on Lean for Healthcare

I am sometimes asked to recommend a book on Lean. On my book blog I have given one a short review this month. Others on my shelf include:

The Elegant Solution; Toyota's formula for mastering innovation, by Matthew May
Has some good hints, tips and checklists in it. I particularly like the IDEA (Investigate, Design, Execute, Adjust) process and form. For me this has more applicability to many of the projects I work with in healthcare than the way PDSAs are used (probably inappropriately) to write up projects and actions.

Journey to Lean; making operational change stick, by John Drew, Blair McCallum, Stefan Roggenhofer
I like this book because it sets out the hard work involved. The authors also do the full set - namely, they include and emphasise the behavioural and mindset changes required. The use of stories and examples is helpful in putting the change processes into context.

Lean Hospitals; improving quality, patient safety and employee satisfaction, by Mark Graban
Very much a lean theorist book. Lots of Japanese terms. I liked the way the book is framed specifically for healthcare and this may make it easier to grasp some of the technical language and methods. I was a bit disappointed that the focus is so much on the technical and less on the social / cultural / behavioural aspects.

Sustaining Lean healthcare programmes; a practical survival guide, by Mark Eaton and Simon Phillips.
I reviewed this book this month. Short, sweet and worth it for the glossary of definitions! Simple and most useful for beginners or for sharing with other staff who need to know a bit about Lean but don't need to be overwhelmed by too much detail.

Lean for Practitioners; an introduction to Lean for healthcare organisations, by Mark Eaton
Another short and sweet book. If you only have 45 minutes on the train to get up to speed with some Lean Thinking, then this book will work for you. Mark's style includes bullet points, short paragraphs and checklists - it worked for me.

Lean for Dummies, by Natalie Sayer and Bruce Williams
Probably the most useful Lean book I have but I am too scared to walk around carrying it! Rich in detail and an excellent reference.

Friday, 14 August 2009

Fear of failure? Never Events need a target of zero

So how would you feel if you were the patient of a surgical team whose improvement target for the "Never" event of wrong-site surgery was - to reduce it by 20%? A "Never" event is something that, by definition, should never happen. The NHS in England has a list of 8 that are so well evidenced they should be adopted for implementation with no further discussion.

Why then are we seeing targets for Never Events that are not zero? My recent experiences suggest the following dynamics at play:
  • zero is too difficult a target to achieve in healthcare (try telling that to the patient who has just had the wrong kidney removed, or the family of a patient who died by suicide while on 1-1 watch in a ward with non-collapsible rails). Admittedly, in some healthcare processes zero may be a tough target; however, Never Events mean never.
  • we may not meet the target (fear of failure and the knock-on consequences for individuals, teams and project work); when I encounter this I realise I am working with an individual or group who fear the failure of improvement more than they fear the failure of harming a patient.
  • it's not worth all the changes for zero (the costs of the change outweigh the cost of fully eliminating a Never Event); this is, of course, a judgement call, and my hope is that it is made with full data analysis and consultation of those involved. Some Never Events happen so seldom it may be difficult to justify the changes required.
  • we don't believe the research is good enough; the NHS in England (National Patient Safety Agency) has streamlined a variety of Never Event lists to determine a core 8 which are well proven in all aspects. More research, I suggest, is not required.
  • we will take a while to get to zero; that's OK - then let's see a desired outcome of zero and some leeway to get there over time. The trick is perhaps not to design for a 20% reduction in year 1 but rather to design for zero in year 3 and monitor progress over time.
Never Events need targets of zero - 0. They should never happen.

Monday, 10 August 2009

Validity, reliability & generalisability of project results; the science of improvement?

I am aware that the use (and sometimes invention) of management gobbledygook words to describe actions and intentions regarding quality improvement may sometimes be more of a hindrance than a help. After an inspirational week spent with the Veterans Affairs Quality Scholars in Vermont I got thinking more about this.

"Spread" and "Sustainability" are my two pet problems as words. Spread is difficult to describe and many people use it in different ways and for different dynamics, thus creating more confusion. I have worried for a long time that there is no such issue as sustainability in QI if we are doing proper continuous improvement.

So I wonder what reframing and mindset shifts we get when we use words from the science discipline - after all, many call what we do the "science of improvement".

Generally in science, validity measures the extent to which the test, experiment or method actually does what it has been designed to do.
I wonder how many improvement projects get "good results" yet the overall aim is not identifiably or actually reached. For example, a project designed to reduce length of stay in hospital may use average LOS as its measure. Over 6 months this may show an increase, despite much work. Because average LOS is calculated from total bed-days, what may have happened is that other work on prevention and lowering readmission rates removed many short stays, pushing the average up. Apart from average not being a good improvement measure, it is possible the actions taken for LOS were not focused on the overall aim - which could have been to reduce cost, change the experience for the patient, etc.
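To make the arithmetic concrete, here is a sketch with made-up numbers showing how preventing short readmission stays can push average LOS up, even though nothing about the remaining patients' care has changed:

```python
# Hypothetical illustration: removing short readmission stays raises average
# LOS even though each remaining patient's stay is unchanged.
stays_before = [2, 2, 3, 3, 4, 5, 6, 10, 14, 21]  # days, incl. short readmissions
stays_after = [3, 4, 5, 6, 10, 14, 21]            # the 2, 2, 3 day stays prevented

avg_before = sum(stays_before) / len(stays_before)
avg_after = sum(stays_after) / len(stays_after)

print(f"average LOS before: {avg_before:.1f} days")  # 7.0
print(f"average LOS after:  {avg_after:.1f} days")   # 9.0
```

The "improvement" in the numerator (fewer bed-days overall) shows up as a worse-looking average, which is exactly the validity problem: the measure no longer reflects the aim.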

In statistical terms this questions whether the sample used exhibits the characteristics of the population.
This may be one of the reasons why spread does not happen. We choose populations outside the norm (people willing to change, where the context is prepared, where we give them help, etc.) and then when they get good results we require the "norm" to copy them. In many cases the results, the change process and the toolkit produced are designed for a very small population and have poor validity across the wider intended adopting group.

Statistically this is the amount of credence placed in a result; the precision of the measurement as repeated over a specific period of time.
For improvement projects that use control charts, reliability will show as the extent to which the process is in control. On a more macro level, the reliability of a measure is weakened when the measurement method changes over time or when the measurement is open to "gaming".
The measures most open to gaming (and, I think, most lacking in reliability) are ones like "95% of patients to wait no longer than 4 hours".
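For readers who want to see "the extent to which the process is in control" in numbers, here is a minimal sketch, with invented waiting-time data, of the natural process limits for an XmR (individuals) chart, using the conventional 2.66 × mean-moving-range formula (my choice of chart, not one named in the post):

```python
# Hypothetical sketch: natural process limits for an XmR individuals chart.
# A reliable (in-control) process keeps its points inside these limits.
waits = [3.2, 3.5, 3.1, 3.8, 3.4, 3.6, 3.3, 3.9, 3.5, 3.4]  # hours, made-up data

centre = sum(waits) / len(waits)
moving_ranges = [abs(b - a) for a, b in zip(waits, waits[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

upper = centre + 2.66 * mr_bar  # upper natural process limit
lower = centre - 2.66 * mr_bar  # lower natural process limit
print(f"centre line {centre:.2f}, limits {lower:.2f} to {upper:.2f}")
```

Points falling outside these limits signal special-cause variation; a measure whose definition keeps changing, or that is being gamed, cannot be judged this way at all.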

Reliability can also mean the probability that a measurement (or intervention) will perform its intended function over time and within a given set of conditions.
This definition reminds me of the talk about "sustainability". If results drop off or the way of measuring becomes "unsustainable" - usually due to other changes in the system, then the problem may be more one of design than a loss of momentum (or whatever way you conceive of "sustainability").

To draw specific inferences; to make generally or universally applicable.
This is about demonstrating that the improvement work carried out in Ward 10 is applicable throughout the hospital. To what extent can the other 15 wards copy what has been done and get the same result? In my experience we often end up with, say, half the other wards adopting something, and of those, not all get the same results as the originating ward (some may in fact do better). This is about spread, and to effect generalisability the originating project needs to be able to describe its contextual factors and anything else that may be contributing to its results. Without this, adopting teams, and the management who would like the work adopted, have little knowledge of the generalisability of the work.
I also see a lot of "the results from hospital A were a £10,000 saving, so all ten of our hospitals can achieve a total of £100,000 if they do the same". I suggest this calculation is meaningless without a demonstration of the probability of generalisability.
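To show why the simple multiplication misleads, here is a toy calculation with invented figures: if only some hospitals adopt, and adopters achieve only part of hospital A's result, the expected total falls far below the naive ten-fold figure:

```python
# Hypothetical sketch: naive extrapolation vs an expectation that accounts
# for partial adoption and partial replication of the original result.
saving_at_a = 10_000        # hospital A's demonstrated saving (from the post)
hospitals = 10
adoption_probability = 0.5  # assumed: half the hospitals actually take it up
relative_effect = 0.7       # assumed: adopters achieve 70% of A's saving

naive_total = hospitals * saving_at_a
expected_total = hospitals * adoption_probability * relative_effect * saving_at_a
print(f"naive: £{naive_total:,}  expected: £{expected_total:,.0f}")
```

The adoption and effect figures here are pure assumptions for illustration; the point is that any honest projection needs estimates of both, which is exactly what a description of contextual factors would support.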

The SQUIRE Guidelines have been developed to help overcome some of the lack of rigour in publishing improvement work. In particular they address the contextual / generalisability issue.

Thursday, 6 August 2009

Projects need visions to manage uncertainty

I've never really gone for the whole vision "stuff" as the process often feels a bit unfocused and the outcome too non-specific to be of much use. However, today something clicked for me which made sense. I realised the purpose of the vision is to provide a map for the swampy area of our projects.
What is the swamp? It is where we are thinking about the future (time beyond immediate knowledge) and where this time intersects with large scale system change (lots of people, lots of stuff going on..).

If I have the desire to change a process which is mostly down to me and something I can do now, this feels a fairly certain and concrete process. It gets a bit more complicated if I think further into the future and involve a wider group of people - team, department, organisation, community etc. At some point the number of variables involved increases beyond the point of managing the process, and even of conceptually controlling the desired outcome. Here I enter the swamp. A vision, therefore, is the map I carry with me which has an outline, a representation of what this future holds for me and all the people impacted by it. This helps me make sense not only of the future I am part of designing, but also of the present; it enables me to contribute to the uncertain future through my more certain actions right now.

Tuesday, 4 August 2009

Why use percent and average for improvement projects?

It seems that every healthcare quality improvement presentation I hear nowadays uses averages and percents as the mechanism for demonstrating an improvement was made. I have some problems with this:

  1. There seems to be a confusion between performance management and measurement for improvement. Yes, the organisation may need to report on the % of x - done mostly so someone in charge can make a comparison. However, this is no reason to replicate this measurement in the project. For instance, if you're working on length of stay (LOS) then you may find mode (most frequently occurring number) is more helpful in demonstrating a change consequent to your improvement activities and it has the benefit of indicating the experience for most patients.
  2. When a nurse pointed out to me that they do not have .32 of a bed, I took notice. Of course she was right. So when some stats showed that an average of 12.32 beds were to be shifted each month (in a specific project), I could see this was nonsense. Statistically it could be argued this made sense, though as a mechanism for engaging staff, working in whole numbers, whole beds, whole patients, tends to make more sense.
  3. So ward A delivers a 4.18 average length of stay. Management now want all wards to achieve this (let's assume most are higher). So the processes and procedures underway in A are replicated in the others (or this is attempted). The difficulty is their case mix may be different; their problem may be one where only their long stayers need to be addressed (they may have the same mode, but a few long stays skew the average), etc. In aiming for an average, do they figure out the mathematics of LOS - x patients can stay 4 days, y patients can stay 5 days, etc.? For this is what the average leads them to - to game.
  4. An average is an abstract concept - no individual patient is "average", and (strictly for the median) half the values will fall above the line and half below. Do you really want half your experiences, interventions, etc. to exceed the agreed number?
  5. One more measurement challenge. What happened to the 100% (or 0%) target? The common version is 98% of y or 95% of z. My logic suggests we are designing systems for a percentage failure. This is a tough call in healthcare. Which 2% of patients will you choose not to have optimal care for their diabetes?
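As a postscript to point 1, here is a small sketch with made-up ward data showing how a few long stays pull the mean well above the mode - the figure most patients actually experience:

```python
# Hypothetical sketch: mode vs mean for length of stay on a skewed ward.
from statistics import mean, mode

los = [3, 3, 3, 3, 4, 4, 5, 12, 18, 25]  # days, made-up ward data

print(f"mean LOS: {mean(los):.1f} days")  # pulled up by the three long stays
print(f"mode LOS: {mode(los)} days")      # what most patients experience
```

Here the mean is 8 days while most patients stay 3; an improvement team reporting only the mean would describe a ward almost no patient recognises.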