Leveraging Self-Supervised Learning for Human Activity Recognition with Ambient Sensors
Abstract
Human activity recognition (HAR) using ambient sensors has emerged as a promising approach to telemonitoring daily activities and enhancing the quality of life of the elderly. Deep learning models have demonstrated competitive performance in HAR on real-world datasets. However, acquiring large amounts of annotated sensor data for extracting robust features is costly and time-consuming. To overcome this limitation, we propose a novel model based on the self-supervised learning framework SimCLR for daily activity recognition using ambient sensor data. The core component of the model is the encoder module, which consists of two convolutional layers followed by a long short-term memory (LSTM) layer. This architecture allows the model to capture both spatial and temporal dependencies in the sensor data, enabling the extraction of informative features for downstream tasks. Through extensive experiments on three CASAS smart home datasets (Aruba-1, Aruba-2, and Milan), we showcase the superior performance of the model in semi-supervised learning and transfer learning scenarios, surpassing state-of-the-art approaches. The findings highlight the potential of self-supervised learning in extracting valuable information from unlabeled sensor data, reducing costly annotation efforts for real-world HAR applications.
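The abstract builds on SimCLR, whose core training signal is the NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss applied to embeddings of two augmented views of each sample. As a minimal NumPy sketch of that standard objective (an illustration, not the paper's implementation; the batch size, embedding dimension, and temperature below are arbitrary assumptions):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss, the contrastive objective SimCLR optimizes.

    z1, z2: (N, d) encoder embeddings of two augmented views of the
    same batch; row i of z1 and row i of z2 form a positive pair.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # i <-> i+n
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Toy check on synthetic (hypothetical) embeddings: near-identical views
# should incur a lower loss than unrelated ones.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
loss_aligned = nt_xent_loss(z1, z1 + 0.01 * rng.normal(size=(8, 16)))
loss_random = nt_xent_loss(z1, rng.normal(size=(8, 16)))
```

In the paper's setting, `z1` and `z2` would be outputs of the CNN-LSTM encoder on augmented ambient-sensor sequences; pretraining with this loss is what lets the encoder learn features from unlabeled data before the semi-supervised and transfer-learning evaluations.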
Citation
Hui Chen, Charles Gouin-Vallerand, Kévin Bouchard, Sébastien Gaboury, Mélanie Couture, Nathalie Bier, and Sylvain Giroux. 2023. Leveraging Self-Supervised Learning for Human Activity Recognition with Ambient Sensors. In Proceedings of the 2023 ACM Conference on Information Technology for Social Good (GoodIT '23). Association for Computing Machinery, New York, NY, USA, 324–332. https://doi.org/10.1145/3582515.3609551