TY - JOUR
T1 - Cross-Domain Activity Recognition Using Shared Representation in Sensor Data
AU - Hamad, Rebeen
AU - Yang, Longzhi
AU - Woo, Wai Lok
AU - Wei, Bo
PY - 2022/7/1
Y1 - 2022/7/1
N2 - Existing models based on sensor data for human activity recognition report state-of-the-art performance. Most of these models rely on single-domain learning, in which a separate model must be trained for each domain. However, generating adequate labelled data and training a learning model for each domain separately is often time-consuming and computationally expensive. Moreover, deploying multiple domain-wise models is not scalable, as it obscures domain distinctions, introduces extra computational costs, and limits the usefulness of training data. To mitigate this, we propose a multi-domain learning network that transfers knowledge across different but related domains and alleviates isolated learning paradigms using a shared representation. The proposed network consists of two identical causal convolutional sub-networks that are projected onto a shared representation followed by a linear attention mechanism. The network can be trained using the full training dataset of the source domain and a restricted-size training dataset from the target domain, reducing the need for large labelled training datasets. It processes the source and target domains jointly to learn powerful and mutually complementary features that boost performance in both domains. On six real-world sensor activity datasets, the proposed multi-domain learning network outperforms existing methods while using only 50% of the labelled data. This confirms the efficacy of the proposed approach as a generic model that learns human activities from different but related domains in a joint effort, reducing the number of required models and thus improving system efficiency.
AB - Existing models based on sensor data for human activity recognition report state-of-the-art performance. Most of these models rely on single-domain learning, in which a separate model must be trained for each domain. However, generating adequate labelled data and training a learning model for each domain separately is often time-consuming and computationally expensive. Moreover, deploying multiple domain-wise models is not scalable, as it obscures domain distinctions, introduces extra computational costs, and limits the usefulness of training data. To mitigate this, we propose a multi-domain learning network that transfers knowledge across different but related domains and alleviates isolated learning paradigms using a shared representation. The proposed network consists of two identical causal convolutional sub-networks that are projected onto a shared representation followed by a linear attention mechanism. The network can be trained using the full training dataset of the source domain and a restricted-size training dataset from the target domain, reducing the need for large labelled training datasets. It processes the source and target domains jointly to learn powerful and mutually complementary features that boost performance in both domains. On six real-world sensor activity datasets, the proposed multi-domain learning network outperforms existing methods while using only 50% of the labelled data. This confirms the efficacy of the proposed approach as a generic model that learns human activities from different but related domains in a joint effort, reducing the number of required models and thus improving system efficiency.
KW - Activity recognition
KW - cross-domain learning
KW - deep learning
KW - sensor data
KW - temporal evaluation
UR - http://www.scopus.com/inward/record.url?scp=85131766013&partnerID=8YFLogxK
U2 - 10.1109/JSEN.2022.3178083
DO - 10.1109/JSEN.2022.3178083
M3 - Article
SN - 1530-437X
VL - 22
SP - 13273
EP - 13284
JO - IEEE Sensors Journal
JF - IEEE Sensors Journal
IS - 13
ER -