Thursday, 24 December 2009

The continuum of spread: from outward communication to behavioural change

A few days ago I toyed with the idea of producing an FAQ sheet to cover the top 10 questions I am asked about how to spread good practice. Then I realised the answers are in my books and also in this blog. However, I'll revisit some of these questions over the coming weeks.

The top question is.... "how do I spread good practice"! And this usually sends me into panic mode. My immediate response includes:
What exactly do you want to spread (skills, attitudes, behaviours, processes etc)?
How do you know it is good practice and who will recognise it as such?
Who is the target audience? Are you sure you mean individuals and not organisations?
What do you need people to do?
Why are you planning to do this? What is your stated and underlying intention?
I could go on...

I realise now that in most cases the big question comes from people in charge of something. Either they have the legitimate authority for a group (like Medical Director, CEO) or they are in a facilitative role (National body, program managers, consultancy). In both cases they want someone else to do something. The word "spread" gets used because either they have little authority over the people they want to change or they believe a softer-bottom-up-gentle approach will help people make the change. They may be right.

As the request to spread is coming from those "in charge" it is not surprising that the underlying question I think they are trying to ask is "How can we best communicate our good practice?". How do we communicate in a way that enables the right people to become aware of and do something about our topic? This is the standard approach to spread.

My experience in the last 10+ years of helping make large-scale changes across healthcare systems is that communication strategies and plans are not enough. Consider this continuum:

Spread (Communicate/Market/Advertise)--->
Communicate for behaviour change (Social Marketing)--->
Organisational change activities (OD, improvement projects)

Traditional spread programs limit themselves to clever and thoughtful (hopefully) design of outward communication. I say hopefully because in most cases in healthcare the default button is set to "write an article" or "type up a case study".

As a minimum, I feel spread needs to include the behavioural change aspect; without this, the communication is worth less. The techniques of social marketing are a good place to start. This also means the behaviours that need to change must be identified as part of the "what" of the good practice.

Finally there is the perspective of the organisation. Again, my experience has led me to favour this approach. When the intended changes are part of, or at a minimum identified as contributing to, the organisation's purpose and objectives, with appropriate leadership support, then improvement happens. Without this contextual setting the speed of adoption is much slower. Also, the breadth of adoption across the organisation may be limited. And there may be inappropriate adaptation of the good practice, to such an extent that the intended benefits are not achieved.

So how do we spread good practice? I suggest the first place to start answering this question is to ask "why do you want to spread this good practice?". The aim here is to answer this in detail to get beyond the "to make things better" answer. This reframing will elicit the underlying motives and will direct an appropriate spread strategy.

Tuesday, 8 December 2009

Resources to understand tragedy of the commons

At the Institute for Healthcare Improvement's National Forum today, Don Berwick used the example of the "tragedy of the commons" in his Opening Plenary. His pitch was to encourage healthcare leaders to cooperate and work for the wider and more common good, and to beware of continuing to work on a self-centred, healthcare-industry basis.

For those interested in the concept of "tragedy of the commons" I have put some links in here where you can get more information.

The original article by Garrett Hardin in 1968 which introduced the concept can be found here.

A special issue of Science magazine provides not only all the critical papers you need to read but also links to excellent web resources. You can find this here:

Some games you can play to simulate and understand what is meant by ToC (though anyone trying to get onto the conference wifi will know just what it feels like!). You can access these here:

One of my favourite bloggers covered ToC a while back. You can read his post and associated comments here:

I've been a fan of Systems Thinking for years. There are loads of tools and techniques to understand how ToC occurs and how to work with it. I'm looking forward to healthcare improvers adding systems thinking techniques into the regular and perhaps rather technical improvement techniques. You can learn about ToC using causal loop diagrams here, and discover resources at Pegasuscom.

Any more - please add in the comments below.

Monday, 16 November 2009

Improvement Projects: Do no harm

A common issue raised by project managers who are trying to implement existing good practice with individuals and teams is one of resistance to change. I am constantly seeking ways to reframe the term "resistance" as a means of moving away from a potentially obstructive and destructive frame of reference.

I've been wondering whether one of the reasons people appear to "resist" adopting even well-evidenced good practice is a natural, and at times perfectly reasonable, conservative attitude towards risk. The medical profession has the theme of "do no harm". My feeling is that we often ask professionals to take on solutions designed by others, in different contexts, without providing the potential adopters with evidence that the results are both reliable and generalisable. Reliable in the sense that they can be repeated in the same context with the same results. Generalisable in the sense that the intervention (improvement process) can be carried out in a different context and obtain similar results.

Without this evidence of generalisability in our improvement work I feel professionals will continue to be suspicious of changes.

In addition, do we ever publish the knock-on consequences and the adverse effects of improvement work? A quick trawl of improvement projects published in high-impact journals in the last 2 months demonstrates the attitude that improvement work is all good. None of the 12 papers I looked at provided (or even hinted at) any negative consequences. Without honesty about improvement work and its results, I suspect we will continue to encounter "resistance" to change - and I will consider this an appropriate response to any solution being touted for implementation without a demonstration of generalisability and without discussion of identified adverse consequences.

Thursday, 29 October 2009

Disseminating guidelines using MBTI Style

One of the eternal difficulties in the dissemination and application of clinical evidence is what I believe to be the "style gap".

Either guidelines are researched and written so generically that it is difficult for any individual or team to see how they apply to them. Or they are written so specifically that individuals and teams are so constrained by the detail, and by working out how it applies to them, that they don't implement.
Different groups and organisations have developed their own ways round this problem. At a national or regional level, boards and groups develop guidelines that are generic and then disseminate them with a covering letter urging local adaptation. They know local differences and contexts will need to be taken into account, so they acknowledge this. This raises some questions for me:
  • to what extent does turning the generic into the specific mean the intended benefits remain?
  • are there different bits that can be adapted in different ways? Do the authors suggest how different bits can be adapted?
  • what are the systemic links with other pathways, clinical areas etc that need to be taken into account?
  • what are the contextual variables that are necessary for the generic guideline to be implemented (things like resources)?
  • Where to start? Something practical?
For those guidelines which are so specific as to be overwhelming:
  • how do all these details scale up into themes and topics?
  • what are the patterns and links to systems that will be useful to know about?
  • which of the details are most important? Which ones can be left out while the main benefits are still reached?
  • is there a specific order to implementation?
  • how do the parts integrate with other systems like IT and HR?
I don't know the solution to this. What I do know is that many clinical and process guidelines are written in the MBTI (Myers Briggs Type Indicator) N/Intuitive style. This is the big-picture, system and pattern way of seeing things. In contrast, many of those required to implement these guidelines are more comfortable working with details (MBTI S/Sensing) and make sense of instructions more easily if they are practical and specific. Sometimes the reverse is true - S's develop guidelines for N's to implement.

All the other parts of the MBTI styles could be a factor in the adoption of messages that change personal behaviour.

Perhaps one way round this is to use dissemination processes and content in a way which best suits the style of the potential adopters rather than the comfort of the authors.

Tuesday, 20 October 2009

Keeping track of progress; learning from DHL

It's a long story, but the short version ends with my handbag needing to be sent from Stockholm to Buckinghamshire in England. This experience has left me wondering why it is we find it so difficult to keep track of patients within a hospital, let alone across systems.

From time of pick-up to signature at home I could watch the 24-hour journey unfold (one click on a weblink, no data entry). I could see what action was being taken at each stage. If you're interested you can see the detailed information below (it's not the greenest of journeys...). It took only 2 minutes from the time of signature for the information to appear on the system.

Yes, patients are not parcels. Patient information also requires a certain degree of confidentiality management. However, I wonder what it would be like if within hospitals (let's start somewhere simple) we were able to keep track of the inpatient: figure out which corridor they are in now, how long they have been waiting for their scan, whether they have had their meal etc. This information would provide insight into the systems and the patient's experience. Maybe we could start by monitoring blood samples in this way, as they are already barcoded. Maybe we could start with a system to help keep track of patients in the hospital for whom speaking is difficult, such as those with dementia, stroke or some other disabling condition.
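As a thought experiment (the identifiers, locations and events below are entirely hypothetical), the parcel-style checkpoint log needs only a very simple data structure: each barcode scan appends a timestamped event, and "where is it now?" is just the latest event for that item.

```python
from datetime import datetime

# Each barcode scan appends one checkpoint event: (time, item, location, event).
log = []

def checkpoint(item_id, location, event):
    log.append((datetime.now(), item_id, location, event))

checkpoint("SAMPLE-001", "Ward 10", "sample taken")
checkpoint("SAMPLE-001", "Pathology reception", "received")
checkpoint("SAMPLE-001", "Haematology analyser", "result ready")

def where_is(item_id):
    """Latest known location and status: the 'one click, no data entry' view."""
    events = [e for e in log if e[1] == item_id]
    _, _, location, event = events[-1]
    return f"{location}: {event}"

print(where_is("SAMPLE-001"))  # Haematology analyser: result ready
```

The DHL system is of course far more elaborate, but the core idea - append-only scans, queried by item - really is this small.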

I wonder what else we can learn from DHL about how to monitor and improve pathways using technology?

585907200 - Detailed Report
Date | Location | Checkpoint details
October 18, 2009 | Arlanda - Sweden | Shipment picked up
October 19, 2009 | Arlanda - Sweden | Processed at Arlanda
October 19, 2009 | Arlanda - Sweden | Departed from Arlanda
October 19, 2009 | Arlanda - Sweden | Arrived at DHL in Arlanda
October 19, 2009 | Arlanda - Sweden | Left the sender
October 19, 2009 | Arlanda - Sweden | Processed at Arlanda
October 19, 2009 | Arlanda - Sweden | Departed from Arlanda
October 20, 2009 | Leipzig - Germany | Arrived at DHL in Leipzig
October 20, 2009 | Leipzig - Germany | Processed at Leipzig
October 20, 2009 | Leipzig - Germany | Departed from Leipzig
October 20, 2009 | London-Heathrow - UK | Shipped via London-Heathrow
October 20, 2009 | London-Heathrow - UK | Departed from London-Heathrow
October 20, 2009 | London-Heathrow - UK | Arrived at DHL in London-Heathrow
October 20, 2009 | London-Heathrow - UK | Processed at London-Heathrow
October 20, 2009 | London-Heathrow - UK | Departed from London-Heathrow
October 20, 2009 | Gatwick - UK | Arrived at DHL
October 20, 2009 | Gatwick - UK | Out for delivery with courier
October 20, 2009 | Gatwick - UK | Signature

SMS Texting Campaigns; awareness to action

Mobile phone technology has the means to change lives. A number of campaigns have been running where SMS/texting technology is used not just to raise awareness but to deliver action. I'm interested in this because it breaks some of the "communication rules" found in the older research on how ideas spread and are adopted.

An excellent example is from UK Transplant, where a number of campaigns are running to increase the number of people prepared to donate tissue and organs. In the South West of England a campaign is running until April 2010 combining regular advertising and SMS texting. The posters create awareness, and then anyone standing in the bus shelter who wants to act by registering on the UK Transplant Organ Donor site can send a simple text with the word GIVE to a number. This is still a pilot and the results will be interesting. I am all in favour of innovative ways of moving from awareness to action, and this method is modern and relevant to societal trends.

If you want to add your name to the register then go to become a donor
If you want to read about this campaign go to sms campaign

Wednesday, 23 September 2009

Adoption of guidelines: trust?

A perennial issue in spread and adoption is how clinical guidelines are adopted - or not. There is an industry researching what happens, what interventions might be useful, and what you can then do to increase the speed and amount of adoption.

My own rather accidental piece of "research" raised a new question for me: to what extent is trust both an enabler and disabler of the adoption process?

So I've been cooking. The real thing, with recipes. I spent the better part of a month working through many of the Women's Institute's 650 favourite recipes. A trusted cookbook and mostly failsafe. What I found is that it is incredibly difficult to follow a recipe down to all the details. The more I used recipes, the more I found myself adapting them - with varying degrees of success.

Then I moved on to Nigella's Express cookbook. A week later I noticed I was slavishly adhering to every minute detail. Not like me at all. So what was going on here?

Nigella writes in the first person, and the way the recipes are written has engaged me in a new way. When a note in brackets suggests options, reassures you that she really did mean 250ml of double cream, or reinforces why a step is important, it seems that more trust in the process is repaid by more attention to detail in following the instructions.

Many clinical guidelines are produced by committees. Yes, they may be great sets of instructions; however, are they written as mechanisms that attract trust? I wonder what would happen if a trusted peer rewrote guidelines in a personal language and tone. Would they be adopted more quickly?

I did think maybe this is about personal choice of style of communication. Maybe. And should that detract from finding some novel ways to encourage the adoption of guidelines?

Monday, 14 September 2009

Stories and Examples are different

We know that stories engage hearts and minds more than bullet points on a PowerPoint or a three page proposal. I believe there is a very important difference between stories and examples.

Stories are personal. I know a story when the person telling it comes alive with the emotions attached to the story. I can feel their passion, enthusiasm, sadness, delight - whatever. I am drawn into their personal experience. It is stories like this that engage me.

When someone stands on a stage and tells someone else's story - then for me that is an example. No matter how much we resonate with the other person's story their passion, enthusiasm, sadness, delight - whatever - is unlikely to be captured by the example-teller. Examples are helpful, though I suggest they are no different from the paragraphs in papers. They are second-hand and no longer associated with the context and emotions that go with them.

I am sometimes asked if a story I have used can be used by someone else. I usually recommend they find a way of developing their own reservoir of stories. This is sometimes difficult. Those who develop and advance theories may find their work disconnected from the reality of practice. The best way to both test the theories and develop your own stories is to test them out on a small scale. Not only will this provide self-confidence, it will also demonstrate the practicalities of your theory or suggestion and increase your personal credibility as you have a personal story to tell.

In the event you are unable to experience your own story, then I suggest capturing someone else's using a short video. Then allow their story to be told as they wish.

In the next week, try to focus on gathering, maintaining, treasuring your own stories. If you find yourself giving an example - telling someone else's story - then take a breather and see how you might do something differently to get the outcomes you desire from your listeners.


Wednesday, 2 September 2009

Knowledge Management on my iPhone/iTouch

A key aim in healthcare is to reduce variation. One of the methods to do this is to encourage the sharing and adopting of good practices.
  • Problem 1: where is the good practice and who is doing what?
  • Problem 2: how do I get to hear of good practices?
  • Problem 3: how do I share what I am doing?
If data becomes information, and knowledge is information we can use to make decisions, then how can data about good practices be shared?

A free application called "Healthmap: Outbreak near me" for the iPhone/iTouch has got me thinking. You can submit data about any disease and the location. This gets added to their database. You can see on a map any number of different diseases from Swine Flu through to African Horse Sickness. You can search by location, keyword or disease. If you allow it, the system can let you know what diseases are currently prevalent in your current location (using the GPS function).

So my plea is for someone to develop this application for, say, diabetes or cardiac care. I want to be able to find who (locally is great) is doing well on these healthcare processes and has something to share. Maybe even a version for patients, where we can see the currently publicly available healthcare data on a map and in a searchable way. Also, we could add comments about services using the system.

The technology to do all this is here and available. As with so much I encounter in healthcare I suspect it may be another decade before we see something like this used.

Can someone prove me wrong please?

Tuesday, 1 September 2009

When we adapt do we spread & implement effectively?

The script often spoken and written by people involved in the spread of good practice goes along the lines of "we need to customise the process / protocol / idea so it fits best in our context", or "we need to expect the process / protocol / idea will be customised".

Part of me fully supports and understands this. Yet another part of me questions what we mean by adaptation. When we use the term, is it because:
  • we didn't have the time and/or inclination to discover the important contextual variables and then design with and around these
  • we are so in love with our solution (see earlier post about "inventoritis") that we expect others to copy it as it is, or maybe with just a few small tweaks
  • we are too afraid to work through the adaptation process and how the solution might be adapted because we may discover the desired outcome may not be achieved
  • we can't figure out how another place or team might use the process or idea so we defer to adaptation as the way round this
  • we know the new process will require quite a lot of facilitation and support to make it happen so we use adaptation as a means for engaging others (so they don't think they are adopting someone else's idea) and as a means for garnering implementation support
  • we can spread partly formed ideas and processes, or ones still in their innovative design state

So what is the adapting process? In Richard Dawkins's foreword to Susan Blackmore's book about memes, there are a couple of examples which got me thinking.
  1. Are you expecting a copying process, knowing there will be some natural adaptation? Dawkins uses the example of copying a picture. One person copies a picture, passes it to another to copy, and so on. After a number of copies the picture may not resemble the original very much. In fact, I suspect some may start to put their own context, thoughts and ideas into the picture, thus rendering it something different both visually and in meaning.
  2. Do you intend someone to copy instructions? If I am shown how to make a complex origami figure using a set of 30 simple instructions, then I can teach someone else using the same instructions. That person can then teach someone else, and so on. In this case, most of the time, we can posit that after 20 teachings/replications the origami figure would look the same. By focusing on the instructions, someone can even correct a minor slip when they make their copy. However, if one of the instructions gets left out and this omission is replicated, then the paper figure will end up an entirely different shape.
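Dawkins's two modes of copying can be sketched as a toy simulation (my own illustration, not from the foreword; the drift and loss rates are invented): copying the product lets small errors accumulate in every generation, while copying a discrete instruction set is mostly reproduced perfectly - until a lost step is replicated by everyone downstream.

```python
import random

random.seed(1)

# Copying the product: every generation adds a little random drift,
# so error accumulates without bound.
def copy_picture(generations, drift=0.05):
    error = 0.0
    for _ in range(generations):
        error += abs(random.gauss(0, drift))
    return error

# Copying the instructions: each of 30 steps is either kept or (rarely) lost.
# Most chains reproduce the figure exactly, but once a step is lost,
# everyone downstream replicates the omission.
def copy_instructions(generations, steps=30, loss_rate=0.002):
    kept = steps
    for _ in range(generations):
        kept -= sum(1 for _ in range(kept) if random.random() < loss_rate)
    return kept

print(round(copy_picture(20), 2))  # total drift after 20 picture copies
print(copy_instructions(20))       # instructions surviving 20 teachings
```

The picture's drift grows with every copy, whereas the instruction count usually stays at 30 - and when it drops, it never recovers.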
So this brings me to issuing clinical guidelines and the expectation of their adoption and use, and sometimes adaptation for local use. Some questions I have are:
  • Do we know what happens when we issue guidelines and say "make them local"? To what extent do the local versions match the fidelity of the original in terms of outcome?
  • What happens when one of the guidelines instructions is omitted (accidentally or purposefully)? How much of the original outcome is retained?
If you have any thoughts on this topic of adaptation then please comment or email me.

Monday, 17 August 2009

Books on Lean for Healthcare

I am sometimes asked to recommend a book on Lean. On my book blog I have given one a short review this month. Others on my shelf include:

The Elegant Solution; Toyota's formula for mastering innovation, by Matthew May
Has some good hints, tips and checklists in it. I particularly like the IDEA (Investigate, Design, Execute, Adjust) process and form. For me this has more applicability to many projects I work with in healthcare than the way PDSAs are used (probably inappropriately) to write up projects and actions.

Journey to Lean; making operational change stick, by John Drew, Blair McCallum, Stefan Roggenhofer
I like this book because it sets out the hard work involved. The authors also do the full set - namely, they include and emphasise the behavioural and mindset changes required. The use of stories and examples is helpful in putting the change processes into context.

Lean Hospitals; improving quality, patient safety and employee satisfaction, by Mark Graban
Very much a lean theorist book. Lots of Japanese terms. I liked the way the book is framed specifically for healthcare and this may make it easier to grasp some of the technical language and methods. I was a bit disappointed that the focus is so much on the technical and less on the social / cultural / behavioural aspects.

Sustaining Lean healthcare programmes; a practical survival guide, by Mark Eaton and Simon Phillips.
I reviewed this book this month. Short, sweet and worth it for the glossary of definitions! Simple and most useful for beginners or for sharing with other staff who need to know a bit about Lean but don't need to be overwhelmed by too much detail.

Lean for Practitioners; an introduction to Lean for healthcare organisations, by Mark Eaton
Another short and sweet book. If you only have 45 minutes on the train to get up to speed with some Lean Thinking, then this book will work for you. Mark's style includes bullet points, short paragraphs and checklists - it worked for me.

Lean for Dummies, by Natalie Sayer and Bruce Williams
Probably the most useful Lean book I have but I am too scared to walk around carrying it! Rich in detail and an excellent reference.

Friday, 14 August 2009

Fear of failure? Never Events need a target of zero

So how would you feel if you were the patient of a surgical team whose improvement target for the "Never" event of wrong site surgery was - to reduce by 20%? A "Never" event is something that, by definition, should never happen. The NHS in England have a list of 8 that are so well evidenced they should be adopted for implementation with no further discussion.

Why then are we seeing targets for Never Events that are not zero? My recent experiences suggest the following dynamics at play:
  • zero is too difficult a target to achieve in healthcare (try telling that to the patient who has just had the wrong kidney removed, or explaining the suicide that happened while on one-to-one watch in a ward still using non-collapsible rails). Admittedly, in some healthcare processes zero may be a tough target; however, Never Events mean never.
  • we may not meet the target (fear of failure and the knock-on consequences for individuals, teams and project work); when I encounter this I realise I am working with an individual or group who fear the failure of improvement more than they fear the failure of harming a patient.
  • it's not worth all the changes for zero (the costs of the change outweigh the cost of fully eliminating a never event); this is, of course, a judgement call, and my hope is that it is made with full data analysis and consultation of those involved. Some never events happen so seldom it may be difficult to justify the changes required.
  • we don't believe the research is good enough; the NHS in England (National Patient Safety Agency) has streamlined a variety of Never Event lists into a core 8 which are well proven in all aspects. More research, I suggest, is not required.
  • we will take a while to get to zero; that's ok, then let's see a desired outcome of zero and some leeway to reduce over time. The trick is perhaps not to design for a 20% reduction in year 1 but rather to design for zero in year 3 and monitor progress over time.
Never events need targets of Zero - 0. They should never happen.

Monday, 10 August 2009

Validity, reliability & generalisability of project results; the science of improvement?

I am aware that the use (and sometimes invention) of management gobbledygook words to describe actions and intentions regarding quality improvement may sometimes be more of a hindrance than a help. After an inspirational week spent with the Veterans Affairs Quality Scholars in Vermont I got thinking more about this.

"Spread" and "Sustainability" are my two pet problems as words. Spread is difficult to describe and many people use it in different ways and for different dynamics, thus creating more confusion. I have worried for a long time that there is no such issue as sustainability in QI if we are doing proper continuous improvement.

So I wonder what reframing and mindset shifts we get when we use words from the science discipline - after all, many call what we do the "science of improvement".

Generally in science, validity measures the extent to which the test, experiment or method actually does what it has been designed to do.
I wonder how many improvement projects get "good results" yet the overall aim is not identifiably or actually reached. For example, a project designed to reduce length of stay in hospital may use average LOS as its measure. Over 6 months this may show an increase, despite much work. As the average is calculated from total bed-days, what may have happened is that other work on prevention and on lowering readmission rates has decreased the total number of bed-days. Apart from the average not being a good improvement measure, it is possible the actions taken on LOS were not focused on the overall aim - which could have been to reduce cost, change the experience for the patient etc.
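A toy illustration of how this can happen (the stay lengths below are invented): if work elsewhere prevents the short repeat admissions, the stays that remain are the longer ones, so average LOS rises even though care has improved.

```python
# Hypothetical lengths of stay (days): five short repeat admissions
# plus a handful of longer stays.
before = [2, 2, 2, 2, 2, 3, 3, 10, 12, 15]
mean_before = sum(before) / len(before)

# Prevention and readmission work elsewhere removes the short repeat stays;
# total bed-days fall, but the remaining stays are the long ones.
after = [3, 3, 10, 12, 15]
mean_after = sum(after) / len(after)

print(mean_before)  # 5.3
print(mean_after)   # 8.6 -- average LOS "worsens" although care improved
```

The project's measure moves in the "wrong" direction for reasons entirely outside the project: a validity problem, not a performance one.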

In statistical terms this questions whether the sample used exhibits the characteristics of the population.
This may be one of the reasons why spread does not happen. We choose populations outside the norm (people willing to change, contexts that are prepared, extra help given etc) and then, when they get good results, we require the "norm" to copy them. In many cases the results, the change process and the toolkit produced are designed for a very small population and have poor validity across the wider intended adopting group.

Statistically this is the amount of credence placed in a result; the precision of the measurement as repeated over a specific period of time.
For improvement projects that use control charts reliability will show as the extent to which the process is controlled. On a more macro level, reliability of a measure is weakened when the measurement method changes over time or when the measurement is open to "gaming".
The measures most open to gaming (and, I think, most lacking in reliability) are ones like "95% of patients to wait no longer than 4 hours".
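For readers unfamiliar with control charts, the limit calculation for an individuals (XmR) chart is short enough to sketch; the weekly values below are invented for illustration.

```python
# Individuals (XmR) chart limits: centre line +/- 2.66 * average moving range.
# The 2.66 constant scales the average moving range to roughly three
# standard deviations for individual values.
values = [4.1, 3.8, 4.5, 4.0, 4.3, 3.9, 4.6, 4.2]  # e.g. weekly average waits, hours

mean = sum(values) / len(values)
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

ucl = mean + 2.66 * avg_mr
lcl = mean - 2.66 * avg_mr
print(f"centre {mean:.2f}, limits {lcl:.2f} to {ucl:.2f}")
```

Points outside these limits signal special-cause variation; a process whose points stay inside them is behaving reliably, in the control-chart sense used above.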

Reliability can also mean the probability that a measurement (or intervention) will perform its intended function over time and within a given set of conditions
This definition reminds me of the talk about "sustainability". If results drop off or the way of measuring becomes "unsustainable" - usually due to other changes in the system, then the problem may be more one of design than a loss of momentum (or whatever way you conceive of "sustainability").

To generalise: to draw specific inferences; to make generally or universally applicable.
This is about demonstrating that the improvement work carried out in Ward 10 is applicable throughout the hospital. To what extent can the other 15 wards copy what has been done and get the same result? In my experience we often end up with, say, half the other wards adopting something, and of those not all get the same results as the originating ward (they may in fact do better). This is about spread, and to effect generalisability the originating project needs to be able to describe its contextual factors and anything that may be contributing to its results. Without this, adopting teams, and the managers who would like the work adopted, have little knowledge of the generalisability of the work.
I also see a lot of "the results from hospital A were a £10,000 saving. This means all ten of our hospitals can achieve a total of £100,000 if they did the same." I suggest this calculation is meaningless without a demonstration of the probability of generalisability.

The SQUIRE Guidelines have been developed to help overcome some of the lack of rigour in publishing improvement work. In particular they address the contextual / generalisability issue.

Thursday, 6 August 2009

Projects need visions to manage uncertainty

I've never really gone for the whole vision "stuff" as the process often feels a bit unfocused and the outcome too non-specific to be of much use. However, today something clicked for me which made sense. I realised the purpose of the vision is to provide a map for the swampy area of our projects.
What is the swamp? It is where we are thinking about the future (time beyond immediate knowledge) and where this time intersects with large scale system change (lots of people, lots of stuff going on..).

If I have the desire to change a process which is mostly down to me and something I can do now, this feels a fairly certain and concrete process. It gets a bit more complicated if I think further into the future and also involve a wider group of people - team, department, organisation, community etc. At some point the number of variables involved increases beyond the point of managing the process and even conceptually controlling the desired outcome. Here I enter the swamp. A vision, therefore, is the map I carry with me which has an outline, a representation of what this future holds for me and all the people also impacted by it. This helps me make sense not only of the future I am part of designing, but also of the present; it enables me to contribute to the uncertain future through my more certain actions right now.

Tuesday, 4 August 2009

Why use percent and average for improvement projects?

It seems that every healthcare quality improvement presentation I hear nowadays uses averages and percentages as the mechanism for demonstrating that an improvement was made. I have some problems with this:

  1. There seems to be a confusion between performance management and measurement for improvement. Yes, the organisation may need to report on the % of x - done mostly so someone in charge can make a comparison. However, this is no reason to replicate this measurement in the project. For instance, if you're working on length of stay (LOS) then you may find the mode (the most frequently occurring value) is more helpful in demonstrating a change consequent to your improvement activities, and it has the benefit of indicating the experience of most patients.
  2. When a nurse pointed out to me that they do not have .32 of a bed, I took notice. Of course she was right. So when some stats showed an average of 12.32 beds were to be shifted each month (on a specific project), I could see this was nonsense. Statistically it could be argued this made sense, though as a mechanism for engaging staff, working in whole numbers - whole beds, whole patients - tends to make more sense.
  3. So ward A delivers a 4.18 average length of stay. Management now want all wards to achieve this (let's assume most are higher). So the processes and procedures underway in A are replicated elsewhere (or this is attempted). The difficulty is the case mix may be different, or another ward's problem may be one where only their long stayers need to be addressed (they have the same mode, but the averages skew the total figures). In aiming for an average, do they figure out the mathematics of LOS - x patients can stay 4 days, y patients can stay 5 days, etc? For this is what the average leads them to: to game the figures.
  4. An average is an abstract concept; roughly half the values will be above the line and half below it (exactly so for the median). Do you really want half your experiences, interventions etc to be more than the agreed number?
  5. One more measurement challenge. What happened to the 100% (or 0%) target? The common version is 98% of y or 95% of z. My logic suggests we are designing systems for a percentage of failure. This is a tough call in healthcare. Which 2% of patients will you choose not to receive optimal care for their diabetes?
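To make the mean/mode point concrete, here is a small sketch with a made-up length-of-stay dataset (the numbers are invented for illustration):

```python
# Made-up ward data: a few long stayers pull the mean upwards while
# the mode still describes what most patients experience.
from statistics import mean, median, mode

los_days = [3, 3, 3, 3, 4, 4, 4, 5, 5, 21, 35]

print("mean:  ", round(mean(los_days), 2))   # skewed by the two outliers
print("median:", median(los_days))
print("mode:  ", mode(los_days))
```

Here the mean is over 8 days, yet most patients stayed 3 days - reporting only the average would misrepresent the typical experience on the ward.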

Tuesday, 28 July 2009

Stimulating the desire to share

So I've been taking notes for a few weeks on why some people are really attracted to sharing certain things, while others are attracted to sharing very little. What is it that stimulates one person to click the "share" button on YouTube or any other social networking site? What prompts someone to forward a useful article? So the notes I've been taking are around my own behaviour and desire to share. I am wondering how much of this is generalisable to other people and contexts.

  1. Relationship building: sometimes I share something because I am aware it is part of continuing a relationship with someone. This can range from the more formal paper I've read that I hope someone else will also find useful, through to something more random like a video clip on YouTube that I thought someone in the family might find amusing. My sharing isn't something planned - it just sort of happens, and when I see something it will trigger a "share". This made me wonder whether "relationship building" is something we consider, or design with in mind, when developing change programs.
  2. Unknown sharing: this is a bit like the blindspot in the Johari window. I realised that I am on many social and business networking sites. It is possible for me to share a message, say on Twitter, and then have this retweeted (forwarded on to others). I have no idea who the people are, though Twitter is useful as it is possible to track the sharing - not so with email. Sometimes the only way I know something has been shared is when I get an email from someone I have never heard of, yet they are commenting on something I said. This is a very powerful dynamic at work and I wonder how much it is taken into account when developing communication strategies? I've learnt how random this process is. You've got to be in there and involved to have any sense of how the internet can generate both velocity (speed) and scaling up of a message.
  3. Problem shared is a problem halved: I have found sharing problems and looking for support and answers from others is an important strategy for me. I use LinkedIn to pose questions and also to reach out to colleagues. This means I have to describe my problem - and this process of sharing usually means I end up part-solving the problem as I write the definition! So I wonder whether in our change efforts we spend too much time requiring solutions to be articulated, when one strategy may be to help people both find words for their problem and find someone to whom they can reach out.
  4. Random sharing: ever found yourself telling your life story to someone sitting next to you on the bus whom you only met 30 seconds ago when you sat down?! Well, not quite as extreme, though I have found myself sharing all sorts of things with people which don't fall into relationship building (except perhaps very short term) or, as precisely, into problem solving. There is something opportunistic here. So I wonder whether we can help more of these opportunistic meetings happen in the workplace?
  5. Good idea promotion: yes, I did do some promotion activities, sharing good practices I have seen elsewhere which might be useful to the person or team I was working with at the time. However, this was only a very small percentage of my "sharing" time. Standing on a stage and telling stories also counts as a bit of good practice sharing, though again this was a small part of my sharing activity. So I wonder whether we are overcooking the need to share good ideas?
So what I discovered from my own behaviour is that I have not consciously set out to share good practice and good ideas. It is a lot more random than that. Also, my desire to share is really quite complex. OK, I am only a sample size of one - if you have stories about your own desire to share then please write a comment below.

I will be doing some thinking before designing the next communication and "spread" intervention I am involved in.

More on the fallacy of the tipping point

Last time I wrote about the fallacy of the tipping point I received a host of comments asking me to defend my thoughts a bit more. I ended up in a number of email conversations, some of which have persisted over the last few months.

The debate is essentially Gladwell (as "son-of-Rogers") versus Watts, if you want the protagonists' version. I see it more as a Taylorist approach versus the networked world. Gladwell, like Rogers, has pursued very eloquently the theory that messages spread through a hierarchical type of system. This is what he calls the "Law of the Few", which posits there are a few specifically influential people who spread the word. While there is some social system consideration in this approach, it is essentially, to me, one of control and organisations. Maybe this is why so many leaders and organisations have fallen in love with the theory of the tipping point - it gives them a handle with which to explain their process-driven communication plans.

I have felt for a long time that Gladwell/Rogers is way out of date when it comes to communication. Many of the arguments Gladwell uses come from the 1990s - and can you remember how you used to communicate, on both a small and large scale, back then? Much of Rogers's research was done pre-internet.

So, on the other hand, we have Duncan Watts, who approaches the debate from a networking perspective. His work demonstrates we are as likely to get information from a fairly random contact as from a "key influencer". The Watts approach to creating a viral experience for a message is to harness the power of ordinary people's networks and strategies. Sure, there may be some influentials who spread the word; however, what Watts makes us think about is that this may not be the only reason messages spread. Gladwell's theories and examples are presented as a hub-and-spoke model, whereas Watts uses a more networked model, which to me seems to more accurately reflect my world and experience. The difficulty of course is that it is not as easy to design a message-spreading program if we believe the spread is more random than controlled.

One example both have used is the famous six degrees of separation. Each has taken his own perspective. Gladwell repeats Milgram's results. Watts tested the theory using email and found that only 5% of the messages spread through what might be called Influentials - the rest was down to ordinary and perhaps less connected people.
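As a toy illustration of the networked view (my own sketch, not Watts's actual model; all parameters are arbitrary), a message seeded with one ordinary person can still reach most of a random network that contains no designated influentials at all:

```python
# Toy cascade on a random network with no designated "influentials":
# every person has the same number of randomly chosen contacts.
import random

random.seed(42)
N = 1000          # people in the system
CONTACTS = 8      # acquaintances per person, chosen at random
P_FORWARD = 0.5   # chance each contact passes the message on

contacts = {i: random.sample(range(N), CONTACTS) for i in range(N)}

reached = {0}     # seed the message with one ordinary person
frontier = [0]
while frontier:
    nxt = []
    for person in frontier:
        for friend in contacts[person]:
            if friend not in reached and random.random() < P_FORWARD:
                reached.add(friend)
                nxt.append(friend)
    frontier = nxt

print(f"{len(reached)} of {N} people reached without any key influencers")
```

The point of the sketch is simply that broad reach does not require hubs; it is not evidence against the existence of influentials.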

If I had to invest in a spread strategy I would be looking to create learning communities, to harness existing online connections, to find ways to get the messages into the email systems, to develop content that worked in an online format, etc. I used to be sold on the Rogers then Gladwell format, but my own action research has led me to lean more towards Watts.

Monday, 22 June 2009

Large scale change through small interventions

The phrase "large scale change" seems to be flavour of the year. What I am learning is that it means different things to different people, even when they are working in and on changing the same system. Something here about perception and definition I think.

While large scale change is in the press and on the lips, "small scale" changes or tests of change seem to be dropping out of fashion. Those who have cut their improvement teeth in the realm of the PDSA cycle will know about both the limitations and the value of small scale changes.

So where does small scale and large scale meet?

I am finding it useful to focus on the words impact and implementation. How can I achieve a large scale impact (maybe it is possible to have a large scale change yet insufficient impact)? How can I implement large scale change? So let's use large scale impact and large scale implementation as a way to define our change intentions.

There are many ways to achieve a large scale impact. My search is for the small scale changes that can have a large scale impact. For me this means getting the most efficiency, productivity, quality, outcome, gain, benefit (whatever your choice of measurement is) from the least amount of effort and disturbance to the system.

This is about paying attention to what appears to be the small, boring and probably not award-winning changes that can make a difference. Focusing only on the bright and shiny changes can have many adverse consequences. Yes, it's good to talk of transformation, though that does depend on your perspective of the system. From an objective, outsider view it seems a rational task to accomplish. From the insider, subjective perspective it can create unnecessary disruption. Also, transformation as a rhetoric provides little clue as to the how of transformation (see Weick, 2000 for more on this dynamic). It is possible that accumulated small changes can result in transformation. However, in the present financial situation for most organisations I am seeking large scale impact with small scale effort.

How can we determine whether the small scale change will have a large scale impact?

Looking at Impact (high/low) and Volume (high/low) for a specific proposal can help determine where to start, where more data analysis is required and where to put organisational change resources.
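The impact/volume grid might be sketched as follows (the proposals, figures and cut-off points are all invented for illustration):

```python
# Invented proposals classified on the impact/volume grid. The
# cut-off points are arbitrary and would come from local data.
proposals = [
    # (name, impact per case in £, cases per year)
    ("standardise discharge letters",    5, 40_000),
    ("redesign rare-transfer pathway", 900,     50),
    ("theatre start-time checklist",   120,  6_000),
]

def quadrant(impact, volume, impact_cut=100, volume_cut=1_000):
    """Place a proposal in the high/low impact-volume grid."""
    return ("high" if impact >= impact_cut else "low",
            "high" if volume >= volume_cut else "low")

for name, impact, volume in proposals:
    i, v = quadrant(impact, volume)
    print(f"{name}: {i} impact / {v} volume, annual value £{impact * volume:,}")
```

Even this crude classification makes the trade-off visible: the dramatic-sounding rare-pathway redesign has a far smaller annual value than the dull high-volume checklist.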

I am specifically nervous about what looks like high impact over a very small volume. One of the adverse consequences of implementing change here is that regular, routine and predictable processes can be disrupted on the pretence of improvement. Pretence, because the reasons given for the change - for example, a small patient group that needs additional time or never seems to be treated the "best" way - may come from trying to solve the wrong problem. It could be that this patient group needs extra special care. One way to provide this is to look for small improvements that can be made across the large, more predictable groups. By doing this, resources can be freed up to work with those who, by definition, will always be special cases.

In the current financial context I am hard pressed to agree to work on any change process unless some basic data analysis has taken place which demonstrates it is both high impact and high volume. If not HI/HV then some explanation of the need to change and the knock on consequences of both doing and not doing the change would be helpful.

I have a growing list of examples where small changes can have a big impact. For example, how saving 2 minutes on a procedure, by using resources in a different way, can save £500,000+.
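A back-of-envelope check of this kind of claim, with invented figures (the procedure volume and staff cost per minute are assumptions, not data from any real system):

```python
# Invented figures: minutes saved, yearly volume and staff cost.
minutes_saved = 2
procedures_per_year = 250_000   # across a large system (assumed)
cost_per_staff_minute = 1.00    # £ per minute, fully loaded (assumed)

annual_saving = minutes_saved * procedures_per_year * cost_per_staff_minute
print(f"£{annual_saving:,.0f} per year")
```

The arithmetic only holds, of course, if the freed minutes are actually redeployed rather than absorbed.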

If you have any examples like this let me know (respond to this blog or email me).

Weick, K. (2000). Emergent change as a universal in organisations. In M. Beer and N. Nohria (eds), Breaking the Code of Change. Boston, MA: Harvard Business School Press, pp. 223-242.

Tuesday, 2 June 2009

Selling ideas without application experience means low credibility

How do we get others to adopt new ideas? This is a question I am often asked. One way I find useful is to think about how I might not get someone to adopt a new idea. What can I do to stop it happening?

I was triggered into thinking this "negative" view when hearing a group of training course attendees debrief themselves after an event they had attended. The event was on a semi-innovative topic, with the participants exhorted to use a variety of new techniques in their work. It appears that those doing the training had not used, and were not using at the time, the techniques they were espousing. This meant their own knowledge was superficial and lacked any credible examples, including examples of what doesn't work so well. So the attendees I overheard all said they enjoyed the day and it gave them some new ideas; however, they were left feeling unable to actually carry out any of the techniques. They had received the equivalent of a management summary written by a technical writer, when they wanted instead to be linked to the people who actually have experience of the techniques covered. When I asked whether they would adopt the techniques they all said "not until we see the organisation espousing them, use them".

This does give us a problem in how we get new ideas across to others. I find it useful to label courses, events, papers etc as "information" or "ideas only" events, thus distinguishing them from the more "applied" events where we expect some demonstration and application of the techniques back in the workplace.

My caution is against selling, training in, or espousing good ideas and theories - with the expectation that others will use them - when the self (person, team, organisation) hasn't the experience of implementing them. This disconnect is obvious to the recipient of the message.

If we take a topic like protocols and guidelines, or new theories on how to deliver improvement, this emphasises the importance of having examples of good practice from the people who have implemented it. Only they know the real issues in applying the techniques. Maybe we need to be patient in developing theory into practice, and also patient in finding and supporting those who are applying whatever it is we need to have in place. Without this, the credibility of the espouser will drop off - yes, people will be entertained at workshops, but behaviour will not be changed.

Sunday, 3 May 2009

Social media kits: CDC as a good example

Whatever your position on H1N1 swine flu, there is no doubting the news spread fast across the world.  Many change programs would like to emulate this type of activity; it is the Holy Grail of large scale change.

The CDC (Centers for Disease Control and Prevention) in the USA is an excellent example.  I challenge organisations to respond as quickly, thoroughly and extensively as them.  In the case of H1N1 they didn't have a vast amount of planning time to come up with a range of social media / social marketing tools, though I suspect they had some contingency plans in place.  So the first point to note is that these were available almost instantly.  No waiting 2 - 3 months for a communications and marketing department to negotiate with an IT department following the meetings of the Change program division etc.... in order to get something done.

Secondly the CDC appreciates and understands the value of social media in getting messages out and helping them to spread, hopefully quicker than the virus they are focusing on.  Their H1N1 swine flu social media page  has:
  • widgets; a whole variety to suit your own needs
  • mobile information
  • buttons
  • online videos
  • podcasts
  • ecards
  • RSS feeds
  • Twitter link
  • image sharing
  • as for social networks, you can find them on Facebook, MySpace, DailyStrength

Using social media is not the only way to communicate.  However, well done CDC for showing the way, probably breaking some rules in the process, and demonstrating to laggardly public sector organisations and departments how social media can be of value and no great sweat to implement.

Monday, 27 April 2009

Innovation may be a barrier to improving healthcare

I know I am on a sensitive topic here. However, the angle I'd like to mention is one that concerns me. I worry it might be a case of the emperor's new clothes. So just in case, I'd like to strip the outer layer of clothing off something that we take for granted and assume is "a good thing". Innovation.

It was announced this week that the NHS in England will have a prize fund of £20 million to go to individuals, teams or organisations who come up with innovations that make a substantial improvement to services. So what I am wondering is whether this is helpful, or in fact whether prizes and the ongoing emphasis on innovation are helping deliver long term, sustainable, effective and efficient improvements in the delivery of healthcare.

Disconfirming question #1: Why develop more innovation when the current good ideas, innovations etc are used by so few? I wonder what the impact would be if £20 million were up for grabs for demonstrating that you have implemented an existing, evidenced good practice. Do we need more activity on research and new stuff when we have lists and lists of practices that would be of benefit if actually implemented? If an innovation offers a way to get evidence into practice across whole systems and large groups of people, without them really noticing it, then yes, I am behind it. But I am not behind single, one-off innovations which are so off the wall that the normal healthcare population can't conceive of ever implementing them.

Disconfirming question #2: Why emphasise innovation and not research? Though a part of me is frightened that any more research will go over old ground - so maybe that is why an innovation focus may be better?

Disconfirming question #3: Why focus on starting something new when stopping something that doesn't work might have more of an impact? The BMJ published in 2004 a list of "bad ideas" - practices in general use which are no longer considered good practice. These sorts of practices continue to mean large variations in care. So if the innovation rewarded is one which erases the use of "bad ideas", then I'm all for it.

Disconfirming question #4: Why focus energy on creating new ideas when the same energy could be focused on implementing known and evidenced good ideas? Before I turn entirely into a "grumpy old woman", I do see that innovation and creativity are good. However, as a taxpayer in our system I want to see action on what we already know can work. I suspect tough times like a recession will enable the natural innovators to do their stuff - and they will do this regardless of prizes or focus. It's the greater part of the population, the people who don't regularly innovate, that bothers me; how can we get action underway there?

Disconfirming question #5: What is innovation anyway? This could lead to a philosophical debate. It could be that anyone implementing an existing, known-elsewhere practice will feel it is an innovation when they implement it in their own context. I suspect this is not what the prize organisers have in mind.

Yes, I do understand innovation and the need for it. My feeling is the innovating population will innovate anyway - that's their nature. I want support, profile and focus on getting existing known practice into place and in stopping known "bad ideas". We could start with hand washing - or someone could come up with an innovative something that means hand washing is an irrelevant activity in the drive to reduce hospital acquired infections.

Friday, 24 April 2009

Alignment or attunement for large scale change?


The term "alignment" is often heard in leadership groups, team meeting and in programme documents. Do we mean alignment when we use it?

I came to this question of alignment vs attunement on reading a review of the book "Enterprise-Wide Change: Superior Results Through Systems Thinking".

Most of us working in the Public Sector and in healthcare will recognise that we are working within living systems. The nature of people interacting, interfacing, creating and constantly altering interdependencies - all creates a perpetually shifting culture - with the consequence of perpetually shifting performance.

Dictionaries suggest the meaning of "alignment" is to arrange in a straight line, or in parallel lines. It is a term that comes from a mechanical and industrial age, and is an important one in engineering.

In contrast, "attunement" means to bring into a responsive or harmonious relationship. I like this term. For your large scale change, or mechanism to spread good practice, are you conducting a quartet, a 12-piece orchestra, a 120-piece orchestra, or even a series of large orchestras all required to play the same tune in different places at the same time? Imagine helping each member of this orchestra to perform at their best. They are professionals and know how to tune their instruments; the great music will come from arranging their performance in a way that is harmonious.

How do we lead professional individuals and teams so they use their skills and capabilities to their maximum, yet do so in harmony with others around them? I like to think of the attuning process as including:
  • future orientation; you may call this a vision, it may be a picture of what the future looks like, it may be taking time to feel what the end result needs to be like. An orchestra may listen to a previously recorded version. A programme team may visit a place where similar results have been achieved. It is about creating a collective sense of possibility
  • a high level plan; for the orchestra this is the score. This may look like detail though there will also be a high level interpretation of the score by the conductor. The leader cannot do the playing for someone else, they can only guide their interactions with others, to create the overall result
  • using individual excellence and surrendering this to the collective experience. By this I mean it is essential for individuals to do their best, to work at their optimum, yet do so as a servant to the group. An individual may have a soloist part, though this is a contribution to the whole and is not the result in itself. In my experience we have many soloists playing well and being praised for their individual achievement, with little leadership effort placed on containing these performances

When I am working with individuals, teams, organisations and systems in enabling large scale change my focus is not on seeing them in organised rows, neatly lined up. Instead my aim is to help them identify their tune, make conscious their personal capabilities and to discover ways to build responsive relationships and work in a harmonious way.

Thursday, 9 April 2009

Large scale change projects are seldom a linear process

In my experience any change process is a messy one where people are concerned. I can have a step-by-step guide to help me change a wheel on my car, which is very useful. The guide can't really be designed to cope with my emotions following a flat tyre, though fortunately the wheel, wheel nuts, jack and car itself tend to behave predictably and with a high degree of certainty - meaning the step-by-step guide is helpful.

Whether the project is small scale or large scale, there will be a varying degree of predictability and certainty when implementing projects within people-based systems. For large scale change where multiple projects will be run, the complexity increases. There are many different project management systems, some of which are specifically designed to cope with this messy process.

A linear version of project management assumes a sequence of steps, taken in a logical and predictable manner.

The diagram above gives an example of the stages and the effort over time. Reflecting on my own experience of implementing large scale change projects in healthcare I would plan for more effort in the conceptual phase and know that the time spent in each phase is not equal; it will vary according to each project and context.

While part of me would love to work with teams and organisations in an emergent way, to help improvement be revealed and new behaviours learnt and applied, I know that some project management is required when working within the structure of an organisation. One project management model I use is the following:

While this version starts to show some of the interdependencies of each stage in a project, there is always room to draw more lines. However, I like how this model shows the reporting / evaluating stage and how this feeds back into the planning stage. Namely this is an ongoing cyclical process rather than a linear step-by-step approach.

This model also works for me when the project is about implementing existing good practice in a different context. It allows for the process and solution to be adapted so they work most effectively within the next context.

Monday, 6 April 2009

Presentations to watch; 2009 Quality & Safety in Healthcare - Berlin

Part of sharing good practice and enabling others is about just that - sharing. I was unable to attend the International Forum on Quality & Safety in Healthcare, run by the IHI and the BMJ, held in Berlin in March 2009. However, I can watch the plenaries and check out the posters. I can also contribute to ongoing discussion. It is really good to see that what used to be "closed", and only for those who could afford to attend, is now more available, with creative ways for messages to be sent out and conversations continued.

Plenary 1 — What patient-centered care really means
Plenary 2 — Medical success leads to medical error: how health professionals accept responsibility for safety
Plenary 3 — Transforming whole systems: in search of theory and method

For all poster details and other resources:

To discuss the plenaries and posters:

Friday, 3 April 2009

Healthcare needs new norms to counter abuse of power

I believe the problem we’re facing in improving the quality and safety of healthcare is about setting new norms rather than the eternal quest to take innovative ideas or sets of guidelines and then impose these on the rest of the system. Yes, this “pilot and spread” approach is useful and can be demonstrated to raise standards, though there may be questions about the sustainability of results - and few organisations review improvement programs three or more years after completion to test this.

Obviously new norms can’t be “set” as such. Where does our current norm come from? I think of a norm as the sum of all the behaviours actually at work in a system. A different norm therefore requires a different set of behaviours – not only (if at all) a planning meeting to decide and list these behaviours but an actual change in the behaviour of one, then two, then three, then four and so on people in the way they act and interact with each other.

More about this norm approach in forthcoming posts. In this post I am thinking more about why we end up with the norm we have. For example, what is the norm at work in an organisation where 400 or more patients are harmed or die inappropriately (there is a series of reports / investigations on UK health organisations), or where one nurse is able to harm and kill a number of patients? No doubt there are many causes involved in each and every event. However, I’ve been asking myself the question “Why is it so difficult to shift the norm?” Allied to this is the question of “How can professionals reach the stage where they become part of a norm that seems to go against their stated values, yet do nothing - their behaviour continues as normal?”

There are procedures for the NHS in England to manage whistleblowing and again, there will be many reasons why people don’t step outside of a norm and take action. So why is this norm “pull” so strong?

One reason I have been dwelling on is the abuse of power. Not so much a conscious step but rather one which is part of our human condition. And when enough people behave in ways where power is being abused, then a norm develops so others, who may not be inclined to do so, end up part of the problem.

How is abuse of power part of our human condition? We know from a number of experiments that stated values, professional promises and personal beliefs can go out the window if the circumstances are right. For example, the Stanford Prison Experiment in 1971 (a slideshow / discussion guide on the whole process is available) demonstrated in laboratory conditions that, when put in a position of power, about a third of the “guards” showed sadistic tendencies, meting out punishments and inventing ways to humiliate their “prisoners”. The famous Stanley Milgram experiment in 1961 showed how, when participants are morally distanced from the consequences of their actions and believe in the power of the authority demanding action, they will continue a behaviour even when they can see it is physically harming another person. Around two-thirds of participants showed this behaviour. There are many other examples as well, not least from politics and wars.

So what does this mean for raising the standards of healthcare? Imagine working in an organisation where a critical mass of people (clinical professionals as well as managers and administrators) feel distanced from their actions, feel the need to respond to authority figures, are in a stressful context and feel they can act in ways that are driven by their own very personal demons. The sum of the behaviours exhibited become the norm culture.

So when I encounter really good examples of clinical or administrative practice and am asked to help spread them to other places, one of the key things I think about is the underpinning behavioural dynamic. What is it about the team, and the behaviour of the individuals in it, that contributes to their identified quality / safety improvement performance? What is their norm? To what extent is this about how they manage the power dynamics? And if others are to adopt their work, what will this do to the use and abuse of power within their own systems? How will they break free of their current norm? Whose behaviour will be critical in this shift of norms?

I’ve been through the process of listing, rather objectively, the behaviours required for good practice to be adopted. Maybe this is helpful in working out just what needs to be done to effect the change. What I know now is a list of behaviours is not the same as the behaviour itself. I am also a great deal more aware of the context in which behaviours are played out and I am open as to the impact of power dynamics in a system.

This is messy stuff that doesn’t lend itself to the predominant method of change in healthcare, namely the issuing of a “how-to” guide. I think it requires conversation, dialogue, self-awareness and attentiveness. There’s no quick fix.

Thursday, 2 April 2009

Pull vs Push; Twitter Case Study - 90 day project

One of the mantras of working out how to implement good practice more widely is "pull don't push". This refers to the need for potential adopters of a good idea to "pull" the information to themselves and then to act on it. This is a very different dynamic from the "push" of sending out case studies, instructions or clinical guidelines and then expecting others to implement the recommendations. (I wrote a paper on this topic with Paul Plsek in 2003; S. W. Fraser & P. Plsek, "Translating evidence into practice: can it be done through the process of spread?" Education in Primary Care, May 2003)

Theory is often interesting and sometimes useful. Practice is usually difficult and never quite what you expect. Thinking about "push" vs "pull", I decided to run my own 90 day project to see what I could learn about enabling "pull".

This post is about both Twitter and the 90 day project process. If you don't know what Twitter is then check out this video:

With Twitter you share information, 140 characters at a time, with those who follow you. You also get to "listen" to those who you follow, and if you like, you can forward on their messages to your own followers (called retweeting). It is easy to follow someone; do a search for names or keywords you're interested in and then click on the "follow" button. You may also see a message someone has put up and then decide to follow them.

But how do you attract followers? How do you get a pull? This was my project question.

90 Day Project Aim
To get my profile in the top 100 of England and top 100,000 of all Twitterers (of a population of 2 million and growing). I chose a ranking rather than a number of followers as I felt this was more aligned with my intention and context. A bit like choosing to reduce hospital costs by having no infections, rather than counting the number of infections. One of the complications of measuring ranking is you have to keep up your position in an ever increasing pool (the denominator is increasing rapidly).

On 27th December 2008 I had 7 followers. On 2nd April I had 347 followers. Average growth per day was 8 and the current trend predicts 536 followers within 30 days. I used an online ranking tool to measure progress. My ranking worldwide is 79,492 and within England it is 79th. So what did I learn?

  1. It really does work to learn from where others have gone before. I was 45 days into my 90 day project before I realised I hadn't practised what I preached, namely discover the existing good practice. Only then did I search for others' experiences of generating a pull and adopt some of their ideas. This helped, though it was not enough.
  2. It is possible to get started (on Twitter and I believe anything else) without knowing exactly how you're going to do it. For me, the act of the 90 day project meant I had to learn how the technology and system worked. It focused my attention.
  3. Measurement is crucial. I checked on a regular, sometimes daily basis to see whether my actions were having an impact. Ok, so this is easy when there are systems in place to do the measurement, though I would find it hard to know what was a successful strategy without this. After a while I began to see patterns in the data which matched my Twitter behaviour. Quite amazing really...
  4. A "pull" is about adding value. No-one is going to follow unless they have a reason to do so. Equally, they can unfollow at any time (and I had one wobble when the graph slipped back due to unfollows - largely, I think, because I unfollowed a lot of people and we sort of went into a negative slide). So I have started to learn how to create a pull through a virtual medium communicating only 140 characters at a time. If this were a non-Twitter project I would still think about what value I am adding for others and how succinctly I can communicate with them in a way that works for them.
  5. The social process of retweeting is important. I am valuing others' messages and sharing - then they do the same. As humans we are inherently social. Encouraging trust, openness, enabling and allowing connections and networking seems to be fundamental. Why should I expect anyone to follow me if I don't share, put others' messages forward (always attributed) etc?
  6. I put my Twitter link on my email signature, added a button to my website, integrated it with my three blogs. What I am seeing is the links between these online activities and how they feed one another. Using a variety of tracking software I can see which ones are triggering followership. Maybe for other 90-day change projects it is important to think widely and outside the direct scope of a project in order to influence change.
  7. One of the scary things about a pull vs a push is you're not in control. Yes, I can see who follows me and I can then choose also to follow them or not, and I can block people. However, by letting go I have discovered some new contacts, new people that I would not have encountered before. Yes, I have also encountered some fairly random followers with no obvious reason to follow me. But who am I to judge? And that is the point.
  8. When it came to updates I tried to be regular and consistent (something every day or two), aimed not to overwhelm at any stage and to tread lightly.

What will I be doing in the future?

I will continue with Twitter as I find it useful. I've now set a different goal which is about quality rather than quantity. I want to test the next stage of "pull" - where something gets acted on as a result of a "pulled" tweet. This won't be easy to measure. I was fascinated by the recent example of this type of "pull-action" from Stephen Fry and Twitter. At the time he had around 352,000 followers (he is in the worldwide top 20). One day he tweeted a link to an Open University website / game. This OU page / game then got 52,000 hits in a single day. Around 15% of his followers acted instantly on a tiny fragment of a message. And the numbers were big.

If 15% of clinicians, professionals and managers in healthcare acted almost instantly on a message they had pulled, I wonder what might be the impact? Scary, huh...

Follow me on Twitter: