DeepWE: A Deep Bayesian Active Learning Waypoint Estimator for Indoor Walkers

Zhao Huang, Stefan Poslad, Qingquan Li, Bisheng Yang*, Jizhe Xia, Bang Wu, Zhaoliang Luan, Yonglei Fan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Waypoint estimation (WE) has a wide range of applications for indoor walkers, such as fire rescue and navigation to find exit doors, lifts, or stairs as examples of waypoints. Data-driven WE has been on the rise with advancements in deep learning algorithms. Current WE methods, however, face two challenges. On the one hand, most waypoint detection approaches rely on visual sensors, so their estimation performance is limited by lighting conditions when collecting visual data. On the other hand, data-driven methods require a large amount of labeled data to train a WE model, which significantly increases the time spent manually marking labels. To address these two challenges, our work proposes a novel deep Bayesian active learning waypoint estimator for indoor walkers (DeepWE) based on human activity recognition (HAR). It estimates six indoor waypoints from walkers’ daily activities, exploiting the strong correlation between human activities and waypoints. First, an initial DeepWE model is developed from accelerometer and gyroscope data using a Bayesian ensembled convolutional neural network (B-CNN). Then, active learning is employed to query the most informative samples from the pool points with four acquisition functions, and only these queried samples are labeled manually. Finally, the initial DeepWE model is updated from this labeled data using an incremental learning algorithm. Empirical results on two publicly available data sets, USC-HAD and OPPORTUNITY, show that DeepWE delivers a considerable accuracy boost for WE while substantially reducing the number of acquired pool points (by more than 40%).
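The abstract's active learning step, querying the most informative pool samples via an acquisition function over a Bayesian model's predictions, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a predictive-entropy acquisition function (one common choice among the four the paper mentions) computed over Monte Carlo stochastic forward passes, and uses random softmax outputs as stand-ins for the B-CNN's predictions; the names `predictive_entropy` and `query_most_informative` are hypothetical.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the mean predictive distribution.

    mc_probs has shape (T, N, C): T stochastic forward passes,
    N pool samples, C classes (e.g., six waypoint types).
    Higher entropy = the model is less certain about that sample.
    """
    mean_probs = mc_probs.mean(axis=0)  # average over passes -> (N, C)
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)

def query_most_informative(mc_probs, k):
    """Indices of the k pool samples with highest predictive entropy,
    most uncertain first. Only these would be sent for manual labeling."""
    scores = predictive_entropy(mc_probs)
    return np.argsort(scores)[-k:][::-1]

# Stand-in predictions: softmax over random logits (T=20 passes,
# N=100 pool samples, C=6 waypoint classes).
rng = np.random.default_rng(0)
logits = rng.normal(size=(20, 100, 6))
mc_probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

picked = query_most_informative(mc_probs, k=10)
```

After labeling the queried samples, the model would be updated incrementally and the loop repeated, shrinking the fraction of the pool that ever needs manual labels.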
Original language: English
Pages (from-to): 9738-9752
Number of pages: 15
Journal: IEEE Internet of Things Journal
Volume: 10
Issue number: 11
Early online date: 6 Jan 2023
DOIs
Publication status: Published - 1 Jun 2023
Externally published: Yes