TY - JOUR
T1 - Toward enhanced free-living fall risk assessment
T2 - Data mining and deep learning for environment and terrain classification
AU - Moore, Jason
AU - Stuart, Samuel
AU - McMeekin, Peter
AU - Walker, Richard
AU - Nouredanesh, Mina
AU - Tung, James
AU - Reilly, Richard
AU - Godfrey, Alan
N1 - Funding information: This research is co-funded by a grant from the National Institute for Health Research (NIHR) Applied Research Collaboration (ARC) North East and North Cumbria (NENC). This research is also co-funded by the Faculty of Engineering and Environment at Northumbria University.
PY - 2023/8/4
Y1 - 2023/8/4
N2 - Fall risk assessment can be informed by understanding mobility/gait. Contemporary mobility analysis is being progressed by wearable inertial measurement units (IMUs). Typically, IMUs gather temporal mobility-based outcomes (e.g., step time) from labs/clinics or beyond, capturing data for habitually informed fall risk. However, a thorough understanding of free-living IMU-based mobility is currently limited due to a lack of context. For example, although IMU-based step length variability can be measured, no absolute clarity exists about the factors underlying those variations, which could stem from intrinsic factors or extrinsic environmental factors. For a thorough understanding of habitual fall risk assessment through IMU-based mobility outcomes, use of wearable video cameras is suggested. However, investigating video data is laborious, i.e., watching and manually labelling environments. Additionally, it raises ethical issues such as privacy. Accordingly, automated artificial intelligence (AI) approaches that draw upon heterogeneous datasets to accurately classify environments are needed. Here, a novel dataset was created by mining online video, and a deep learning-based tool was created via chained convolutional neural networks, enabling automated environment (indoor or outdoor) and terrain (e.g., carpet, grass) classification. The dataset contained 146,624 video-based images (environment: 79,251; floor visible: 28,347; terrain: 39,026). Upon training each classifier, the system achieved F1-scores of ≥0.84 when tested on a manually labelled, unseen validation dataset (environment: 0.98; floor visible indoor: 0.86; floor visible outdoor: 0.96; terrain indoor: 0.84; terrain outdoor: 0.95). Testing on new data resulted in accuracies of 51–100% for isolated networks and 45–90% for the complete model. This work is ongoing, with the underlying AI being refined to improve classification accuracies and aid automated contextual analysis of mobility/gait and subsequent fall risk. Ongoing work involves primary data capture from within participants' free-living environments to bolster dataset heterogeneity.
KW - Ambulatory gait analysis
KW - Computer vision
KW - Convolutional neural networks
KW - Terrain classification
KW - Wearables
UR - http://www.scopus.com/inward/record.url?scp=85166558897&partnerID=8YFLogxK
UR - https://www.mendeley.com/catalogue/22d65f01-b23b-3796-ac56-7222bc45ecea/
U2 - 10.1016/j.ibmed.2023.100103
DO - 10.1016/j.ibmed.2023.100103
M3 - Article
AN - SCOPUS:85166558897
SN - 2666-5212
VL - 8
JO - Intelligence-Based Medicine
JF - Intelligence-Based Medicine
M1 - 100103
ER -