Contemporary approaches to gait assessment use wearable devices within free-living environments to capture habitual information, which is more informative than data captured in the lab. Wearables range from inertial to camera-based technologies, but pragmatic challenges remain, such as the analysis of big data from heterogeneous environments. For example, wearable camera data often requires manual, time-consuming, and subjective contextualization, such as labelling of terrain type. There is a need for automated approaches, such as artificial intelligence (AI)-based methods. This pilot study investigates multiple segmentation models and proposes use of the PSPNet deep learning network to automate a binary indoor floor segmentation mask for use with wearable camera-based data (i.e., video frames). To inform the development of the AI method, a unique approach of mining heterogeneous data from a video sharing platform (YouTube) was adopted to provide independent training data. The dataset contains 1973 image frames and accompanying segmentation masks. When trained on the dataset, the proposed model achieved an Intersection over Union (IoU) score of 0.73 over 25 epochs in complex environments. The proposed method will inform future work within the field of habitual free-living gait assessment by providing automated contextual information when used in conjunction with wearable inertial-derived gait characteristics.
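The Intersection over Union (IoU, or Jaccard index) score reported above can be sketched for binary floor/non-floor masks as follows. This is a minimal illustration of the metric itself, not the study's evaluation code; the array shapes and example values are hypothetical.

```python
import numpy as np

def binary_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union for two binary segmentation masks.

    pred, target: arrays of 0/1 (or bool) with identical shape,
    e.g. a per-pixel floor mask for one video frame.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    if union == 0:
        # Both masks empty: treat as perfect agreement.
        return 1.0
    return float(intersection / union)

# Hypothetical 2x2 masks differing in one pixel:
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
print(binary_iou(pred, target))  # intersection=1, union=2 -> 0.5
```

An IoU of 0.73, as reported, means that across evaluated frames the predicted floor region and the ground-truth floor region overlapped by 73% of their combined area.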
Title of host publication: IEEE BHI-BSN 2022
Publication status: Accepted/In press - 20 Jul 2022
Event: IEEE-EMBS International Conference on Biomedical and Health Informatics - Ioannina, Greece
Duration: 27 Sep 2022 → 30 Sep 2022
Conference: IEEE-EMBS International Conference on Biomedical and Health Informatics
Abbreviated title: IEEE BHI-BSN
Period: 27/09/22 → 30/09/22