Antonela Tommasel ISISTAN CONICET/UNCPBA, Argentina
Arkaitz Zubiaga Queen Mary University of London, United Kingdom
Social media platforms have become an integral part of most people's everyday lives and activities, providing new forms of communication and interaction. These sites allow users to freely share information and opinions (in the form of photos, short texts and comments) as well as to form links and social relationships (friendships, follower/followee relations). One of the most valuable features of social platforms is their potential to disseminate information widely and rapidly. The adoption of social media, however, also exposes users to risks, giving rise to what has been referred to as online harms.
Online harms are widespread on social media and can have serious damaging effects on individuals and society at large. Forms of online harm include, inter alia, the distribution of false and misleading content (such as hoaxes, conspiracy theories, fake news and even satirical content), harmful content such as abusive, discriminatory, offensive and violence-inciting comments, and the amplification of societal biases and inequalities online. The proliferation of online harms has become a serious problem with negative consequences ranging from public health issues to the disruption of democratic systems (Online Harms White Paper, 2019). Identifying harmful content online has, however, proven difficult, with not only the scientific community but also social media platforms and governments worldwide calling for support in developing effective methods.
Online harm-aware mechanisms based on intelligent methods are essential to mitigate the negative effects of this unwanted content, preventing it from reaching large audiences and from being amplified by social media. Although intelligent techniques at the intersection of machine learning, natural language processing and social computing have made substantial advances in detecting harmful content and modelling its propagation in social networks, a number of open problems remain in this area. Among others, concerns have been raised about the potential social biases and unfairness of intelligent systems for detecting online harms, which also stem from the lack of explainability and transparency of learned models.
The aim of this issue is to bring together a community of researchers interested in tackling online harms and mitigating their impact on social media. We seek novel research contributions on misinformation- and harm-aware intelligent systems that assist users in making informed decisions in the context of online misinformation, hate speech and other forms of online harm. Expected contributions are original works on intelligent systems that can mitigate the negative effects of online harms in social media by improving detection methods and diffusion modelling, as well as by addressing concerns such as social biases, fairness and explainability.
Topics of interest include, but are not restricted to:
Analysis and prevention of misinformation effects (e.g. echo-chambers, filter bubbles)
Hate speech detection and countermeasures (abusive language, cyberbullying, etc.)
Modeling diffusion and propagation of harmful content in social media
Social bias detection and mitigation in data/algorithms
Fairness and transparency in intelligent systems
Explainability and transparency
Dataset collection and annotation methodologies
Evaluation methodologies and metrics
Intervention and content moderation strategies
Impact of online harms in search and recommender systems
Applications and case studies of misinformation- and harm-aware intelligent systems
Submissions should be original papers and should not be under consideration for publication elsewhere.
Extended versions of papers from relevant conferences and workshops are invited as long as the additional contribution is substantial (at least 30% new content).
Authors should follow the formatting and submission instructions for Personal and Ubiquitous Computing at
During the first submission step in Editorial Manager select Original article as the article type. In further steps, you should confirm that your submission belongs to this special issue by choosing the special issue title from the drop-down menu.
All papers will be peer-reviewed.
Submission deadline: January 31, 2021
Author notification: March 31, 2021
Revised papers due: April 30, 2021
Final notification: June 30, 2021
To discuss a possible contribution, please contact the special issue editors at