Toward enhanced free-living fall risk assessment: Data mining and deep learning for environment and terrain classification

Jason Moore, Samuel Stuart, Peter McMeekin, Richard Walker, Mina Nouredanesh, James Tung, Richard Reilly, Alan Godfrey*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Fall risk assessment can be informed by understanding mobility/gait. Contemporary mobility analysis is being progressed by wearable inertial measurement units (IMUs). Typically, IMUs gather temporal mobility-based outcomes (e.g., step time) in labs/clinics or beyond, capturing data for habitually informed fall risk. However, a thorough understanding of free-living IMU-based mobility is currently limited by a lack of context. For example, although IMU-based step length variability can be measured, there is no absolute clarity on the factors driving those variations, which could be intrinsic or extrinsic (environmental). For a thorough understanding of habitual fall risk assessment through IMU-based mobility outcomes, use of wearable video cameras is suggested. However, reviewing video data is laborious, i.e., watching footage and manually labelling environments. Additionally, it raises ethical issues such as privacy. Accordingly, automated artificial intelligence (AI) approaches that draw upon heterogeneous datasets to accurately classify environments are needed. Here, a novel dataset was created by mining online video, and a deep learning-based tool was built from chained convolutional neural networks enabling automated environment (indoor or outdoor) and terrain (e.g., carpet, grass) classification. The dataset contained 146,624 video-based images (environment: 79,251, floor visible: 28,347, terrain: 39,026). Upon training each classifier, the system achieved F1-scores of ≥0.84 when tested on a manually labelled unseen validation dataset (environment: 0.98, floor visible indoor: 0.86, floor visible outdoor: 0.96, terrain indoor: 0.84, terrain outdoor: 0.95). Testing on new data resulted in accuracies from 51 to 100% for isolated networks and 45–90% for the complete model.
This work is ongoing, with the underlying AI being refined for improved classification accuracy to aid automated contextual analysis of mobility/gait and subsequent fall risk. Further work involves primary data capture within participants' free-living environments to bolster dataset heterogeneity.
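The chained design described in the abstract can be sketched as a simple routing pipeline: an environment classifier runs first, and its output selects which floor-visibility and terrain classifiers are applied next. The sketch below is illustrative only; the function and parameter names are hypothetical stand-ins, and the lambda classifiers in the usage example substitute for the trained convolutional neural networks.

```python
def classify_frame(frame,
                   env_clf,
                   floor_clf_indoor, floor_clf_outdoor,
                   terrain_clf_indoor, terrain_clf_outdoor):
    """Route a video frame through chained classifiers:
    environment -> floor visible -> terrain.

    Each *_clf argument is a callable standing in for a trained CNN.
    """
    environment = env_clf(frame)  # 'indoor' or 'outdoor'
    if environment == 'indoor':
        floor_visible = floor_clf_indoor(frame)
        terrain = terrain_clf_indoor(frame) if floor_visible else None
    else:
        floor_visible = floor_clf_outdoor(frame)
        terrain = terrain_clf_outdoor(frame) if floor_visible else None
    return {'environment': environment,
            'floor_visible': floor_visible,
            'terrain': terrain}


# Usage example with dummy classifiers in place of the CNNs:
result = classify_frame(
    frame=None,
    env_clf=lambda f: 'indoor',
    floor_clf_indoor=lambda f: True,
    floor_clf_outdoor=lambda f: True,
    terrain_clf_indoor=lambda f: 'carpet',
    terrain_clf_outdoor=lambda f: 'grass',
)
# result -> {'environment': 'indoor', 'floor_visible': True, 'terrain': 'carpet'}
```

Chaining per-environment classifiers in this way lets each terrain network specialise on a narrower label set (e.g., carpet vs. grass) rather than all terrains at once, which matches the per-branch F1-scores reported in the abstract.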
Original language: English
Article number: 100103
Number of pages: 10
Journal: Intelligence-Based Medicine
Volume: 8
Early online date: 2 Aug 2023
DOIs
Publication status: Published - 4 Aug 2023

Keywords

  • Ambulatory gait analysis
  • Computer vision
  • Convolutional neural networks
  • Terrain classification
  • Wearables
