Intervention effectiveness

What evidence is there for simulation as an intervention, i.e. how effective is it?

The Hierarchy of Intervention Effectiveness, developed by Vaida (1999), is a risk management theory that rates interventions related to human behaviour toward the bottom of its scale, favouring technological interventions, which are viewed as more reliable. An excellent diagram of the elements below is available here [REF 12].

1. Forcing functions
2. Automation, computerisation, and technology
3. Standardisation and protocols
4. Staffing organisation
5. Policies, rules, and expectations
6. Checklists and double-checks
7. Risk assessment and learning from errors
8. Education and information
9. Personal initiative – vigilance

But what does this mean? That depends on your point of view, but at face value the takeaway is that interventions at the people level, i.e. teaching and training, are less effective than changing a system or culture. Why? People move, forget and change; all sorts of things contribute to the fallibility of humans, while fixing systems and processes compensates, to a degree, for the humans in the system. Many will have seen how more checklists and care bundles are being introduced to correct behaviours and compensate for knowledge and performance deficits. Above that, practice, protocols and standardisation provide better care for patients.

How does one evaluate impact? Within healthcare simulation, the main model currently used to evaluate simulation interventions is the Kirkpatrick model of evaluation. This model, developed by Kirkpatrick in 1959, has four stages, later expanded to five to include return on investment (ROI).

Kirkpatrick's Evaluation Model

LEVEL 1 - REACTION: This represents the learner's experience of the intervention (satisfaction)
LEVEL 2 - LEARNING: This reflects the learner's acquisition of knowledge & skills.
LEVEL 3 - BEHAVIOUR: This asks whether learners are using what they learned practically.
LEVEL 4 - RESULTS: This stage reviews whether the learner's outputs impact care outcomes such as the cost, quality and efficiency of service provision.
LEVEL 5 - ROI: This additional stage asks whether the investment is providing a positive return, and therefore whether the intervention is no longer required or needs to adapt.

If Kirkpatrick isn't your cup of tea, what alternatives might there be out there to assess impact?

One alternative is the 7Is framework by Roland (2015), which highlights some limitations of Kirkpatrick's model: it was not designed for healthcare, it is somewhat incomplete, and it assumes that reaching higher levels implies higher impact [Ref 9]. This newer framework incorporates several concepts and produces a more linear model for evaluating impact:

7Is Framework

Interaction - The degree to which participants engage with & are satisfied with the instruction
Interface - The degree to which participants are able to access the instruction
Instruction - The details of the intervention itself
Ideation - The perception of improvement following the instruction
Integration - The change, in both knowledge and behaviours, as a result of the instruction
Implementation - Whether change across individuals has been demonstrated
Improvement - Whether the instruction has resulted in improvements in care & experience

Another is the 'Realist Evaluation Methodology'. This takes a realist viewpoint (between positivism and constructivism), with the core question: 'What works for whom, in what circumstances and why?', expressed as a context + mechanism = outcome process [Ref 8].

Realist Evaluation Model

Stage 1: Formulate a working theory
Stage 2: Hypothesise with C+M=O
Stage 3: Test & observe
Stage 4: Identify what did work and why; re-iterate and restart

So how do you demonstrate that your intervention has reached a particular level of effectiveness? To find out how best to collect this information, please see our Evaluations & Feedback page.

More learning resources are available below:

  1. Kirkpatrick's Evaluation of Simulation and Debriefing in Health Care Education: A Systematic Review
  5. Using the Kirkpatrick Model to evaluate the Maternity and Neonatal Emergencies (MANE) programme: Background and study protocol
  6. 7 Habits of Highly Effective Sim Experts
  8. An overview of realist evaluation for simulation-based education
  9. Proposal of a linear rather than hierarchical evaluation of educational initiatives: the 7Is framework
  11. Improving Clinical Communication and Patient Safety: Clinician-Recommended Solutions

  1. Simulation versus lecture? Measuring educational impact: considerations for best practice
  2. Establishing the effectiveness, cost-effectiveness and student experience of a Simulation-based education Training program On the Prevention of Falls (STOP-Falls) among hospitalised inpatients: a protocol for a randomised controlled trial