Understanding fall risk in real-world settings: a deep learning approach to video based environmental classification to enhance IMU gait assessment

  • Jason Moore

Abstract

Traditional approaches to fall risk assessment are limited. For example, observation of a person's walking/gait to understand fall risk is subjective, relying on the experience of the observer alone. Complementary high-resolution digital technologies (e.g., instrumented walkways, 3-dimensional/3D motion capture) are used but are very costly and (often) limited to bespoke settings. Alternatively, inertial measurement units (IMUs) offer the same high resolution but are lower cost and portable, enabling longer gait assessment in almost any setting and therefore more insight into habitual fall risk. However, IMU-based data alone lack the contextual information needed to interpret arising gait characteristics and inform fall risk. This thesis proposes a novel approach to better inform fall risk from IMU-based gait assessment. Specifically, it recommends the use of wearable eye-tracking camera-based glasses to provide contextual information. Equally, the approach recommends the use of artificial intelligence (AI) to automate the contextual insights, relieving researcher burden. Crucially, this approach also aims to preserve participant privacy by eliminating the need for manual review of all footage from free-living environments and by anonymising sensitive objects within specific areas of the frame.

In this thesis, contemporary AI-based computer vision algorithms are used for segmentation, object detection and scene classification. The thesis demonstrates that the approach taken can provide accurate and robust automated contextual information for IMU-based gait assessment to better inform fall risk in the lab and during free living. The methodology meets the necessary requirements of contextualisation: it can detect and contextualise objects and hazards in diverse and complex scenarios, classify different terrain types and segment them from the background, anonymise sensitive objects to protect privacy, and synchronise the video data with corresponding IMU data to provide a harmonious and thorough understanding of gait to inform fall risk. When deployed, the methodology showcased the ability to contextualise anomalous gait data: select participants exhibited higher asymmetry and variability measures typically indicative of higher fall risk, but the additional context from the proposed model suggested a completely natural reaction to external hazards, which in previous non-contextualised studies may have been treated as an intrinsic Parkinsonian factor. This efficient, automated and ethical contextualisation of emerging gait data fills a gap that previous studies have thus far been unable to address. Of significant importance is the deployment in free-living settings (i.e., people in their own homes), which has the potential to improve personalised approaches to fall risk assessment and better enable ageing in place and safety in the home.
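Two of the requirements above, anonymising sensitive regions and synchronising video with IMU data, can be sketched in outline. The following is a minimal illustrative sketch, not the thesis implementation: `anonymise_regions` pixelates detector-supplied bounding boxes within a frame, and `align_frames_to_imu` maps each video frame timestamp to the nearest IMU sample on a shared clock. The function names, the (x1, y1, x2, y2) box format, and the pixelation strategy are assumptions for illustration.

```python
import numpy as np


def anonymise_regions(frame, boxes, block=16):
    """Pixelate sensitive regions (e.g., faces, documents) in a video
    frame. `boxes` are (x1, y1, x2, y2) pixel coordinates, as might be
    produced by an upstream object detector."""
    out = frame.copy()
    for x1, y1, x2, y2 in boxes:
        region = out[y1:y2, x1:x2]
        h, w = region.shape[:2]
        # Replace each block-sized patch with its mean colour,
        # destroying fine detail while keeping coarse scene context.
        for by in range(0, h, block):
            for bx in range(0, w, block):
                patch = region[by:by + block, bx:bx + block]
                patch[...] = patch.mean(axis=(0, 1), keepdims=True)
    return out


def align_frames_to_imu(frame_times, imu_times):
    """Return, for each video frame timestamp, the index of the
    nearest IMU sample (assumes both streams share a common clock)."""
    frame_times = np.asarray(frame_times)
    imu_times = np.asarray(imu_times)
    idx = np.searchsorted(imu_times, frame_times)
    idx = np.clip(idx, 1, len(imu_times) - 1)
    left, right = imu_times[idx - 1], imu_times[idx]
    # Step back one index where the earlier IMU sample is closer.
    idx -= (frame_times - left) < (right - frame_times)
    return idx
```

In practice the boxes would come from the object-detection stage described above, and the timestamp alignment would be applied per frame so that contextual labels can be attached to the corresponding windows of IMU-derived gait data.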
Date of Award: 27 Jun 2024
Original language: English
Awarding Institution
  • Northumbria University
Supervisors: Alan Godfrey, Sam Stuart, Peter McMeekin & Richard Walker

Keywords

  • deep learning
  • AI
  • computer vision
  • Parkinson’s
  • gait
