Intelligent Environments (IEs) are becoming an integral part of our everyday lives. IEs are living environments – both indoors and outdoors – fitted with sensors and actuators to interpret our behaviour, intentions and goals to provide intelligent services.
Better functionality and better user experiences depend on the accuracy of sensor-based activity and behaviour recognition from ambient and wearable sensors. This is necessary for many applications, especially those that involve ambient assisted living in smart homes, such as health monitoring or behavioural interventions.
In a traditional approach, building robust and meaningful classifiers often relies on obtaining the right high-quality, carefully annotated dataset and the right classifier model, optimised on that dataset before being applied to new data.
However, IEs are characterised by a constantly changing environment, whether in terms of materials (for example, when devices come and go), the physical environment (in mobile environments), users (for example, new staff or visitors) or tasks (for example, new activities).
Current approaches have made substantial contributions to the field of multi-user, concurrent and unknown activity recognition, but these tasks remain challenging, especially when a model learned in one environment is applied to another.
Another challenge is the lack of high-quality annotated sensor data, on which most existing machine learning algorithms rely. Such data are extremely costly to acquire and are specific to each new task and environment, with the risk of biasing the classifiers. Hence, traditional off-line machine learning is often not suitable for real-world applications.
In this special issue we seek novel research contributions on sensor-based HAR (Human Activity Recognition) and HBR (Human Behaviour Recognition) which leverage on-line optimisation techniques, including reinforcement learning, lifelong learning, zero-shot learning, federated learning and interactive learning.
Papers looking at recent pure end-to-end deep models are welcome, as are papers that describe other models, such as the hybridisation of symbolic reasoning (common sense, description logic) and statistical/neural reasoning.
Papers should contain original work that looks at intelligent systems that not only focus on accuracy but also provide deep analysis of the results and insight into the task. For this reason, papers that focus on benchmarking HAR and HBR models in real-world settings are especially welcome.
Topics of interest include, but are not restricted to:
Wearable computing and wearable sensing
Machine learning for sensor-based HAR and HBR
Knowledge acquisition, representation and reasoning for intelligent environments
Fusion of expert knowledge and machine learning
Adaptive HAR and HBR to new situations and sensors
Unsupervised acquisition of HAR and HBR models
User-in-the-loop acquisition of HAR and HBR models
Transfer learning, lifelong learning, federated learning and reinforcement learning for HAR and HBR
Tasks, corpora and evaluation methods for benchmarking adaptive HAR and HBR
Analysis of biases in HAR and HBR models
Context-aware interactive or dialogue systems in intelligent environments
Crowd-sourcing and participatory sensing for HAR and HBR modelling
Performance evaluation of sensor-based HAR and HBR models
To discuss a contribution, please contact the special issue editors at
Submissions should be original papers and should not be under consideration for publication elsewhere. Extended versions of high-quality conference papers that have already been published at relevant venues may also be considered, as long as the additional contribution is substantial (at least 30% new content). Authors with accepted papers at the Intelligent Environments conference (IE 2021) are particularly encouraged to submit extended versions of their work.
During the first submission step in Editorial Manager, select "Original article" as the article type. In subsequent steps, confirm that your submission belongs to this special issue by choosing the special issue title from the drop-down menu.
All papers will be peer-reviewed.