Key questions: Social media moderation and inciting violence online
The role of social media in the violence and disorder on Britain's streets has become a key issue, with the moderation and regulation of platforms coming under scrutiny.

Misinformation spreading online in part helped spark the riots, and people are now being arrested and charged for inciting hatred or violence through social media platforms.

Here is a closer look at how social media content moderation currently works, how posting hateful material can be a crime and how regulation of the sector could change moderation going forward.

– How do social media sites moderate content currently?

All major social media platforms have community rules that they require their users to follow, but how they enforce these rules can vary depending on how their content moderation teams are set up and how they carry out that process.

Most of the biggest sites have several thousand human moderators looking at content that has been flagged to them, or that has been found proactively by human staff or by software and AI-powered tools designed to spot harmful material.

– What are the limitations as things stand?

There are several key issues with content moderation in general. The sheer scale of social media makes it hard to find and remove everything harmful that is posted; moderators – both human and automated – can struggle to spot nuanced or localised context and therefore sometimes mistake the harmful for the innocent; and moderation is heavily reliant on users reporting content to moderators – something which does not always happen in online echo chambers.

Furthermore, the use of encrypted messaging on some sites means not all content is publicly visible, so it cannot be spotted and reported by other users.
Instead, platforms rely on those inside encrypted groups reporting potentially harmful content.

(Photo caption: In many instances, social media platforms are taking action against posts inciting or encouraging the disorder – PA)