Assessing human factors during simulation: The development and preliminary validation of the rescue assessment tool

John Unsworth, Andrew Melling, Jaden Allan, Guy Tucker, Michael Kelleher

Research output: Contribution to journal › Article › peer-review



Background: Failure to rescue the deteriorating patient is a concern for all healthcare providers. In response, providers have introduced a range of interventions to promote timely rescue. Human factors and non-technical skills play a part both in the recognition of ill patients and in the delivery of the interventions associated with their successful rescue. Given the risks to patient safety that failure to rescue raises, simulation provides a vehicle for staff training and development in both technical and non-technical skills. This paper describes the development and preliminary validation of a human factors rating tool specifically designed to assess the non-technical skills associated with the recognition and rescue of the deteriorating patient.

Methods: Using high-fidelity simulation scenarios related to patient deterioration, faculty independently rated student performance. Scoring was carried out using video footage of the students' performance. Data were analyzed to establish the validity of the tool, the internal consistency between categories and elements, and inter-rater reliability.

Results: Content validity was established through a process of review and by checking for duplicate or redundant items. The internal consistency of the tool was acceptable, with a Cronbach's alpha of 0.84. Factor analysis suggested that the tool assessed only two components rather than the three hypothesized during tool development; these were labelled "recognizing and responding" and "leading and reassuring". Inter-rater reliability was initially poor at 0.21, but following training of raters it rose above 0.8 for two videos related to the same scenario, one of which had been used during training. However, when the scenario changed, reliability dropped to 0.5.

Conclusions: Rescue appears to be a well-structured tool with good levels of inter-rater reliability following intensive training related to the specific scenario being scored. Further work is required to establish all aspects of construct validity and to ensure test-retest reliability.
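The internal-consistency statistic reported above (Cronbach's alpha) is computed from the variance of individual item scores relative to the variance of total scale scores. As a minimal illustration of how such a figure is derived, the sketch below computes alpha with NumPy; the rating matrix is entirely hypothetical and is not the study's data, and the 0.84 reported in the paper comes from the authors' own dataset.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 6 students rated on 4 tool items (illustrative only)
ratings = np.array([
    [4, 4, 3, 4],
    [3, 3, 3, 2],
    [5, 4, 4, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [3, 3, 2, 3],
])
print(round(cronbach_alpha(ratings), 2))  # prints 0.92 for this toy matrix
```

Values of 0.7 or above are conventionally treated as acceptable internal consistency, which is why the study's alpha of 0.84 supports the tool's reliability.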
Original language: English
Journal: Journal of Nursing Education and Practice
Issue number: 5
Publication status: Published - 2014


