Book abstract
How well do social media platforms moderate content? It is a deceptively hard question to answer, since much content is removed prior to publication, and further moderation ranges from light labelling to subtle down-ranking, known in the vernacular as shadow banning. Given the difficulty of reconstructing the scene of content removal and visibility reduction, much of the research until now has relied upon user experiences as well as platform self-reporting.
This book details digital research methods for studying content moderation, utilising the traces left on platforms after removal as well as content performance measures and user experience simulations, also known as research personas. It examines moderation histories as well as current practice across a series of platforms, marketplaces, app stores, search engines and chatbots: X/Twitter, YouTube, Facebook, Instagram, TikTok, Telegram, Pornhub, Amazon, Google Play, the Apple App Store, Google, Bing, and the chatbots Microsoft’s Copilot, OpenAI’s ChatGPT, and Google’s Gemini. It focuses on the problematic zones per platform (e.g., X/Twitter’s uneven implementation of policies, Facebook’s data lacuna, the sexualisation of children on Instagram, illegal trade on Telegram, malicious sounds on TikTok, and election information in chatbots), concluding with a discussion of a sustainable moderation philosophy and its place on the public agenda.