Intervention effectiveness

What evidence is there for simulation as an intervention, i.e. how effective is it?

The Hierarchy of Intervention Effectiveness, developed by Vaida (1999), is a risk management framework that rates interventions aimed at human behaviour toward the bottom of its scale, in favour of technological interventions, which are viewed as more reliable. An excellent diagram of the elements listed below is available here [Ref 12].

1. Forcing functions
2. Automation, computerisation, and technology
3. Standardisation and protocols
4. Staffing organisation
5. Policies, rules, and expectations
6. Checklists and double-checks
7. Risk assessment and learning from errors
8. Education and information
9. Personal initiative – vigilance

But what does this mean? Well, that depends on your point of view, but at face value the takeaway is that interventions at the people level, i.e. teaching and training, are less effective than changing a system or culture. Why? People move, forget and change; all sorts of things contribute to human fallibility, whereas fixing systems and processes compensates, at least in part, for the humans within them. Many will have seen how checklists and care bundles are increasingly introduced to correct behaviours and compensate for knowledge and performance deficits. Above that, protocols and standardisation of practice provide better care for patients.

How does one evaluate impact? Within healthcare simulation, the main model currently used to evaluate simulation interventions is the Kirkpatrick model of evaluation. This model, developed by Kirkpatrick in 1959, has four stages, later expanded to five to include return on investment (ROI).

Kirkpatrick's Evaluation Model

LEVEL 1 - REACTION: This represents the learner's experience of the intervention (satisfaction).
LEVEL 2 - LEARNING: This reflects the learner's acquisition of knowledge & skills.
LEVEL 3 - BEHAVIOUR: This asks whether learners are applying what they learned in practice.
LEVEL 4 - RESULTS: This stage reviews whether the learners' outputs impact care outcomes such as the cost, quality and efficiency of service provision.
LEVEL 5 - ROI: This additional stage asks whether the investment made is delivering a positive impact, and therefore whether the intervention is still required or needs to adapt.

If Kirkpatrick isn't your cup of tea, what alternatives are out there to assess impact?

One alternative is the 7Is framework by Roland (2015), which highlights some limitations of Kirkpatrick's model: its focus is not on healthcare, it is somewhat incomplete, and it assumes causality, with higher levels implying greater impact [Ref 9]. This framework incorporates several additional concepts and produces a more linear model for evaluating impact:

7Is Framework

Interaction - The degree to which participants engage with & are satisfied with the instruction
Interface - The degree to which participants are able to access the instruction
Instruction - The details of the intervention itself
Ideation - The perception of improvement following the instruction
Integration - The change, in both knowledge and behaviours, as a result of the instruction
Implementation - Whether change across individuals has been demonstrated
Improvement - Whether the instruction has resulted in improvements in care & experience

Another is 'Realist Evaluation' methodology. This takes a realist viewpoint (sitting between positivism and constructivism), and its core question is: 'What works for whom, in what circumstances and why?', explored through a context + mechanism = outcome (C+M=O) process [Ref 8]. For instance, a mechanism such as in-situ simulation may only produce a given outcome, such as earlier escalation of care, in particular contexts, such as a ward with stable staffing.

Realist Evaluation Model

Stage 1: Formulate a working theory
Stage 2: Hypothesise with C+M=O
Stage 3: Test & observe
Stage 4: Identify what did work & why; re-iterate and restart

So how do you demonstrate whether your intervention has reached a particular level of effectiveness? To find out more about how best to collect this information, please see our Evaluations & Feedback page.

More learning resources are available below:

References:

  1. Kirkpatrick's Evaluation of Simulation and Debriefing in Health Care Education: A Systematic Review - https://pubmed.ncbi.nlm.nih.gov/29958308/

  2. A Framework for Evaluating Simulations as Educational Tools - http://ion.ngarrett.org/Framework_for_Evaluation.pdf

  3. Kirkpatrick's Framework for Evaluation (Coursera lecture) - https://www.coursera.org/lecture/clinicalsimulations/kirkpatricks-framework-for-evaluation-NbogI

  4. How do I measure the effectiveness of simulated teaching and learning methods? (ResearchGate discussion) - https://www.researchgate.net/post/How_do_I_measure_the_effectiveness_of_simulated_teaching_and_learning_methods

  5. Using the Kirkpatrick Model to evaluate the Maternity and Neonatal Emergencies (MANE) programme: Background and study protocol - https://bmjopen.bmj.com/content/bmjopen/10/1/e032873.full.pdf

  6. 7 Habits of Highly Effective Sim Experts - https://www.laerdal.com/us/information/7-habits-of-highly-effective-sim-experts/

  7. https://search.proquest.com/openview/736c571f825a96350b005a6a43a7b1de/1.pdf?pq-origsite=gscholar&cbl=47628

  8. An overview of realist evaluation for simulation-based education - https://core.ac.uk/reader/161127448

  9. Proposal of a linear rather than hierarchical evaluation of educational initiatives: the 7Is framework - https://www.jeehp.org/upload/jeehp-12-35.pdf

  10. The Hierarchy of Intervention Effectiveness - https://patientsafe.wordpress.com/the-hierarchy-of-intervention-effectiveness/

  11. Improving Clinical Communication and Patient Safety: Clinician-Recommended Solutions - https://www.ncbi.nlm.nih.gov/books/NBK43654/pdf/Bookshelf_NBK43654.pdf

  12. From Discovery to Design: The Evolution of Human Factors in Healthcare - https://www.longwoods.com/content/22845/healthcare-quarterly/from-discovery-to-design-the-evolution-of-human-factors-in-healthcare


Other

  1. Simulation versus lecture? Measuring educational impact: considerations for best practice - https://ebn.bmj.com/content/19/2/55

  2. Establishing the effectiveness, cost-effectiveness and student experience of a Simulation-based education Training program On the Prevention of Falls (STOP-Falls) among hospitalised inpatients: a protocol for a randomised controlled trial - https://bmjopen.bmj.com/content/bmjopen/6/6/e010192.full.pdf