Authority and misinformation in the process of COVID-19 sensemaking
Team Members
Emillie de Keulenaar, Ivan Kisjes and Carlo de Gaetano
based on previous work by Emillie de Keulenaar, Ivan Kisjes, Rory Smith, Carina Albrecht, Eleonora Cappuccio and visualisations by Guillermo Appolinário
Contents
1. Introduction
Phenomena like COVID-19 have been characterized by their uncertainty (Yong, 2020). As new information on the epidemiological nature of the disease and its impact on public safety evolves, so do claims about which objective facts constitute it. Heads of state, health organizations and the public have frequently been divided on such claims, such as whether asymptomatic people can contaminate others, whether one should use a mask, or whether children can be contagious (Iati et al., 2020; O'Leary, 2020). This inconsistency can reportedly erode trust between the public, governments and (health) organizations (Starbird, 2020), possibly leading the public to rely on increasingly diverging understandings of the pandemic (Bordia and Difonzo, 2004; Bostrom et al., 2015; Starbird et al., 2016).
In the midst of this uncertainty, social media and reference platforms have been tasked with ensuring that their users maintain consensus (Skopeliti and Bethan, 2020). Since the early months of 2020, Google Search, YouTube, Facebook, Twitter and Reddit have set up centralized access points to information related to COVID-19, all provided by local and "authoritative sources" (Skopeliti and Bethan, 2020). Such efforts respond to requests to ramp up moderation of falsehoods and other "problematic information" at a time when maintaining public consensus is vital for citizens' safety. Though some stakeholders continue to demand more radical platform redesigns (Dwoskin, 2020), more modest measures include: temporarily disabling the personalisation of Newsfeeds, flagging contents (tweets, posts, videos) that disseminate contested claims (Lyons, 2020), demoting "borderline" or suspicious contents like conspiracy theories and raising "authoritative contents" to the top of search and recommendation results (The YouTube Team, 2019), or deleting materials that pose a danger to public health, such as anti-vaccination content or the promotion of alternative medication (YouTube, 2020).
While content moderation has never been an oddity to platforms — Gillespie (2018) goes so far as to define platforms by and for their moderation — many see the above-mentioned measures as concomitant to censorship, or bias at the very least (Jiang, Robertson and Wilson, 2019; Lee, 2020). Arguably, moderation requires that platforms actively intervene in ongoing deliberations around what constitutes reality, by sorting, ranking and deleting information that steps outside the boundaries of common sense (Rieder, 2017). And with contents produced by an already polarized user base, moderation can become "essentially contested" (de Laat, 2012). To what extent does moderating misinformation then help re-establish public consensus? Which substantiations of objectivity and factuality do platforms support over time?
This paper traces Twitter's moderation of disputed COVID-19 misinformation from March to June 2020. Using a sample of 3 million Tweets that mention #covid or #coronavirus, I combine close and "distant" reading techniques (namely, natural language processing) to assess how information about COVID-19 transmission, prevention and treatments is disputed between local American and international authoritative sources (U.S. President Donald Trump, the Centers for Disease Control and Prevention, the National Institutes of Health and the World Health Organisation, respectively) and their Twitter audiences. I then assess how Twitter intervenes in such disputes through its moderation of COVID-19 misinformation, namely through the labeling, suspension and deletion of Tweets that mention non-authoritative claims. I first chart a brief web history of Twitter's "COVID-19 Misleading Information Policy" (Twitter, 2021) with the Internet Archive's Wayback Machine, and then scrape moderation metadata off non-authoritative Tweets using Selenium.
3. Research Questions
- How did Twitter's content moderation affect ongoing debates amongst authoritative sources and users on what constitutes COVID-19 treatments, protection and transmission?
- How did the COVID-19 crisis affect Twitter’s content moderation policies?
4. Literature review
Social and knowledge platforms like Facebook, Twitter, YouTube and Wikipedia are perceived as modelling themselves after "nominally open societies" (Gillespie, 2018), distinguishing themselves as democratic, "participatory" alternatives to mass media (Langlois, 2013). Following myriad controversies – harassment campaigns (Jeong, 2019), fake news-mediated disseminations of conspiracist narratives (Venturini et al., 2018), and inter-ethnic violence up to and including genocide (Mozur, 2018) – these platforms have, however, taken a more proactive role in moderating their user base. This has manifested in the implementation of top-down anti-misinformation and hate speech measures (Gordon, 2017), including flagging and eventually suspending local authorities (Brazilian and U.S. Presidents Jair Bolsonaro and Donald Trump), demoting problematic contents in recommendation and search results (Constine, 2019), banning users linked to hate speech and conspiracy theories, or redirecting them to educational material in an effort to "deradicalize" them (The Redirect Method, 2016).
So far, however, research on content moderation and misinformation has focused primarily on content deletion, or "deplatforming". Considering that figures like Breitbart columnist Milo Yiannopoulos or conspiracy theorist Alex Jones have mellowed their language and seen their audiences thin (Rogers, 2020), such studies have given reason to believe that deplatforming is an effective strategy for policing hate speech (Chandrasekharan et al., 2020). This also applies to misinformation: journalistic reports and academic research have pointed to the dramatic loss of public attention for deplatformed contents, including Plandemic, a documentary claiming that COVID-19 was a planned hoax (Frenkel, Decker and Alba, 2020). Such studies are supported by a growing number of technical and legal analyses that argue for a sensible redesign of speech moderation in private companies, for example by "democratising" such techniques with collaborative or participative moderation (De Gregorio, 2020), delegating them to civil society (Elkin-Koren and Perel, 2020), or better detecting context to support decisions to sanction, quarantine, or delete user-generated contents (Wilson and Land, 2020).
In this sense, a majority of studies have focused on the moderation of extreme contents. This has tended to position scholarly debates within platform critique, an oftentimes policy-driven assessment of what (more) platforms could do to stifle, or of how they may inadvertently encourage, the production of misinformation. In comparison, relatively few studies have considered the effects or politics of moderating openly disputed or unknown matters, such as the epidemiological nature of a virus like COVID-19. More generally, some studies locate content moderation within historical debates on inherently normative disputes, such as determining what can and cannot be said within ongoing battles of ideas as to what constitutes facts, truth and offences to religious, racial, gendered and other subjects. For this reason, many have noticed an important change from platforms' inclination to act as nominally neutral "intermediaries" of public speech to a more proactive role as arbiters of normative conditions for expression (Gillespie, 2018). Platforms' interventions in public debate have solidified the perception that content moderation is "essentially contested" (de Laat, 2012, p. 125), pushing users to create expanding alternative or "alt-tech" infrastructures with looser speech affordances.
But while alt-tech infrastructures target decidedly extreme contents (child pornography, hate speech, violence), disputed information such as the treatments, preventive measures and forms of transmission of COVID-19 poses a more complex challenge to content moderation. Unlike extreme contents, claims around COVID-19 have been disputed by both authoritative and non-authoritative users (Iati et al., 2020; O'Leary, 2020). The U.S. public alone has seen then-U.S. President Donald Trump frequently contradict the CDC and NIH, two equally authoritative institutions. Platforms' prioritisation of the World Health Organisation as an authoritative source has further exposed such divergences on an international level. Content moderation would then imply both detecting misinformation as "non-authoritative" claims and qualifying the authority of authoritative sources. The latter implication became especially visible in January 2021, when Twitter took the unprecedented step to label, suspend and eventually ban then-U.S. President Donald Trump for repeated violations of its policies against the glorification of violence and for electoral integrity.
In this context, this study assesses how Twitter's content moderation policies and practices have framed disputed claims amongst authoritative and non-authoritative users. This implies studying COVID-19 misinformation as the product of poor consensus between authoritative sources and their social media audiences. Drawing partly from studies on collective sensemaking (Dailey & Starbird, 2015; Krafft et al., 2017) and rumors (Caplow, 1946; Shibutani, 1966), I use close reading and natural language processing to compare claims on COVID transmission, prevention and treatments by U.S. "authoritative sources" (defined as its head of state and principal health organisations) and "audiences", defined as the users who engage with or refer to the former on Twitter. I propose that, aside from corrective measures to ban misinformation, authorities and (social) media platforms could invest in affordances that facilitate consensus around disputed matters (Implication 2).
5. Methodology and initial datasets
The method of this study is two-fold. Based on a collection of millions of Tweets, I first parse, analyse and visualise diverging claims on COVID-19 transmission, prevention and treatments between U.S. authoritative sources and their respective audiences. I then look at how Twitter moderated disputed claims by first consulting content moderation policies designed for COVID-19 misinformation, and then obtaining moderation metadata from Tweets containing disputed contents.
Definitions
The U.S. has at least two channels responsible for communicating authoritative information on COVID-19: its head of state and its health departments or disease prevention agencies (Annex: Figure 1). Because Twitter prioritises the World Health Organisation as an authoritative source, I also captured data from that organisation's international and American offices. I refer to heads of state and public health organisations collectively as "authoritative sources", and to the W.H.O., health ministries, departments and disease prevention agencies as "public health organisations". By "audiences", I refer to users who respond to these actors through Twitter replies or who mention these actors' website domains.
By “claims” about the coronavirus, I mean information that can be confirmed as true or refuted as false by governments and health organisations. I focused on:
- how the virus is transmitted;
- available treatments;
- preventive methods.
Data collection
For data collection on Twitter, I used Rieder and Borra's Twitter Capture and Analysis Tool (Borra & Rieder, 2014), which collects tweets based on a chosen set of queries. These queries were "covid", "coronavirus" and "WuhanVirus", and captured a total of 61,498,037 tweets from January 26 to July 7, 2020. Of those, I extracted 910 tweets from government and public health organisations and 496,166 replies and mentions of official domains. In addition to Tweets, I also collected claims on COVID-19 transmission, prevention and treatment made by the CDC, NIH and Donald Trump on their official websites (cdc.gov, nih.gov, whitehouse.gov). Information on Twitter's COVID-19 misinformation moderation policies came primarily from two sources: Twitter's blog on COVID-19 and its "COVID-19 Misleading Information Policy". From these, I was able to note what information they target and how they moderate it (suspension, labeling, deletion, etc.). I then obtained moderation metadata from Tweets that mentioned disputed claims using Selenium, a web interface scraper.
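To illustrate this last step, the sketch below shows how moderation metadata could be scraped from Tweet permalinks with Selenium. The input and output file names, the wait time and, above all, the list of label phrases are assumptions made for illustration; they are not taken from Twitter's documented markup and would need to be verified against the live interface.

```python
# A minimal sketch, assuming "disputed_tweet_ids.csv" holds one Tweet ID per row.
# LABEL_PHRASES are assumed wordings of Twitter's moderation notices, not the
# platform's documented markup; they would need to be checked against the live UI.
import csv
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

LABEL_PHRASES = [
    "misleading",                      # assumed wording of a misinformation label
    "violated the Twitter Rules",      # withheld-Tweet notice
    "account is suspended",            # suspension notice
    "Tweet is unavailable",            # deletion / withholding notice
]

driver = webdriver.Firefox()  # requires geckodriver on PATH

def moderation_status(tweet_id: str) -> dict:
    """Load a Tweet permalink and record any moderation notice found in the page text."""
    driver.get(f"https://twitter.com/i/web/status/{tweet_id}")
    time.sleep(5)  # crude wait for the client-side app to render
    page_text = driver.find_element(By.TAG_NAME, "body").text.lower()
    return {
        "tweet_id": tweet_id,
        "notices": [p for p in LABEL_PHRASES if p.lower() in page_text],
    }

with open("disputed_tweet_ids.csv") as ids, open("moderation_metadata.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["tweet_id", "notices"])
    for row in csv.reader(ids):
        result = moderation_status(row[0])
        writer.writerow([result["tweet_id"], "|".join(result["notices"])])

driver.quit()
```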
Parsing claims inductively and deductively
To map divergences in government, public health organisation and "audience" statements about COVID-19, I sought to capture and compare the widest possible range of claims about the transmission, prevention and treatment of the virus. I captured both true and false statements with a deductive and an inductive approach. The deductive approach consisted of consulting secondary sources on COVID-19 misinformation, such as Wikipedia (Annex: Figure 3). The inductive approach consisted of manually and semi-automatically capturing claims. This involved reading Tweets and (authoritative or official) websites that contained the words "transmission", "prevention" or "protection", and "treatment" or "cure". I also generated word embeddings and bigrams for the queries "transmission", "prevention" or "protection", and "treatments" or "cure" to find other relevant terms. I obtained a total of 48 words for transmission, 83 for treatments (2,739 with medications extracted from drugbank.ca) and 79 for prevention (Annex: Figure 4).
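As a rough illustration of this semi-automatic step, the following sketch trains word embeddings and detects frequent bigrams with gensim, then lists the nearest neighbours of the seed queries as candidate terms for manual review. The corpus file name, preprocessing and hyperparameters are assumptions; only the seed terms come from the procedure described above.

```python
# A sketch, assuming "tweets.txt" holds one lower-cased, pre-cleaned Tweet per line.
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases, Phraser

with open("tweets.txt") as f:
    sentences = [line.split() for line in f]

# Join frequent word pairs (e.g. "colloidal silver" -> "colloidal_silver") before training.
bigrams = Phraser(Phrases(sentences, min_count=20, threshold=10))
sentences = [bigrams[s] for s in sentences]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=10, workers=4)

# Nearest neighbours of each seed query are candidate terms to review manually.
for seed in ["transmission", "prevention", "protection", "treatment", "cure"]:
    if seed in model.wv:
        print(seed, [w for w, _ in model.wv.most_similar(seed, topn=20)])
```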
Coding and filtering claims in Tweets and official websites
I split texts into sentences and assigned sentences to topics by keyword, as follows (a minimal matching sketch follows the list):
- Transmission: sentences mentioning “infect”, “transmi”, “transfer”, “contag”, “contamin”, “catch”, or “spread”;
- Prevention: sentences mentioning “prevent”, “protect”;
- Treatment: sentences mentioning “treatment”, “cure” and “vaccine”.
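A minimal sketch of this keyword routing is given below, assuming Tweet and website texts have already been extracted as plain strings; the stems mirror the list above and the sentence splitter is deliberately naive.

```python
# Keyword routing of sentences to the three topics listed above.
import re

TOPIC_STEMS = {
    "transmission": ["infect", "transmi", "transfer", "contag", "contamin", "catch", "spread"],
    "prevention":   ["prevent", "protect"],
    "treatment":    ["treatment", "cure", "vaccine"],
}

def split_sentences(text: str) -> list[str]:
    # Naive splitter; short Tweets rarely need anything more elaborate.
    return [s.strip() for s in re.split(r"[.!?\n]+", text) if s.strip()]

def topics_for(sentence: str) -> list[str]:
    lowered = sentence.lower()
    return [topic for topic, stems in TOPIC_STEMS.items()
            if any(stem in lowered for stem in stems)]

# topics_for("There is no cure, but you can prevent infection")
# -> ['transmission', 'prevention', 'treatment']
```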
For more complex queries, such as whether the virus is airborne or whether one should wear masks, I manually coded every sentence that mentioned both "wear" and "mask" for the masks query, and "airborne" together with either "aerosol" or "droplet" for the airborne query. For sentences mentioning COVID-19 transmission, coding meant annotating claims that (1) the virus is or is not airborne and, more specifically, that (2) it spreads through droplets or aerosols. For those mentioning protection, it implied annotating claims about (1) whether the general public should or should not wear masks ("should wear" and "should not wear", respectively) and (2) who should be wearing masks (caregivers, essential workers, travelers…). In many cases, claims went far beyond simple binaries and, if frequent, required a category of their own.
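The pre-selection of sentences for this manual coding can be sketched as follows: a sentence is queued for annotation only if it contains both parts of a compound query. The function and variable names are illustrative.

```python
# Pre-selection of sentences for manual annotation of compound queries.
COMPOUND_QUERIES = {
    "masks":    lambda s: "wear" in s and "mask" in s,
    "airborne": lambda s: "airborne" in s and ("aerosol" in s or "droplet" in s),
}

def queue_for_coding(sentences: list[str]) -> dict[str, list[str]]:
    queued = {name: [] for name in COMPOUND_QUERIES}
    for sentence in sentences:
        lowered = sentence.lower()
        for name, matches in COMPOUND_QUERIES.items():
            if matches(lowered):
                queued[name].append(sentence)
    return queued
```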
I then manually coded the information retrieved from government and health authorities' official webpages to determine whether their instructions or claims about transmission, treatments and the use of masks were consistent with one another. I used the Internet Archive to track changes in these webpages from January 2020 to July 2020, and coded each page containing information about transmission, treatments or the use of masks by date of change. For transmission, I coded whether the page stated that transmission is possible through the air (aerosols), contact, droplets, fluids or animals. For treatments, I coded whether it recommended chloroquine, hydroxychloroquine or ibuprofen. For masks, I coded whether it recommended wearing a mask or face covering in public, wearing a mask when symptomatic, or wearing a mask around sick people.
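For the Internet Archive step, snapshots of an authoritative page can be listed with the Wayback Machine's public CDX API before coding changes by date. The example URL and the filtering parameters below are illustrative.

```python
# List Wayback Machine snapshots of a page between January and July 2020.
import requests

CDX_ENDPOINT = "http://web.archive.org/cdx/search/cdx"

def list_snapshots(url: str, start: str = "20200101", end: str = "20200731") -> list[dict]:
    params = {
        "url": url,
        "from": start,
        "to": end,
        "output": "json",
        "filter": "statuscode:200",
        "collapse": "digest",  # skip consecutive identical captures
    }
    rows = requests.get(CDX_ENDPOINT, params=params).json()
    if not rows:
        return []
    header, entries = rows[0], rows[1:]
    return [dict(zip(header, entry)) for entry in entries]

# Example (illustrative URL): print each capture date with its archived address.
for snap in list_snapshots("cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/how-covid-spreads.html"):
    print(snap["timestamp"], f"https://web.archive.org/web/{snap['timestamp']}/{snap['original']}")
```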
Coding and filtering claims in social media textual data: limitations
Twitter audience responses contain a large number of retweets of claims made by authoritative sources. Because of this, I also included Tweets that do not necessarily reply to or mention authoritative sources but are geolocated in the U.S. Geolocation is included in TCAT's Tweet metadata.
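A sketch of this geolocation filter over a TCAT CSV export is given below. The column names ("location", "place") and the crude string markers for U.S. locations are assumptions, since TCAT export fields vary by version and configuration.

```python
# Filter a TCAT export down to Tweets whose location metadata points to the U.S.
import pandas as pd

US_MARKERS = ("united states", "usa", ", us", "u.s.")  # crude, illustrative matching

tweets = pd.read_csv("tcat_export.csv", dtype=str).fillna("")

def looks_us(row: pd.Series) -> bool:
    text = f"{row.get('location', '')} {row.get('place', '')}".lower()
    return any(marker in text for marker in US_MARKERS)

us_tweets = tweets[tweets.apply(looks_us, axis=1)]
us_tweets.to_csv("tweets_geolocated_us.csv", index=False)
```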
6. Findings
1. Authoritative sources and their audiences contradict each other most on undetermined facts, such as COVID-19 treatments
The most controversial topics are those that audiences have the least information on: cures and treatments for COVID-19 (Figure 1). While authoritative sources mention "no treatment" and "vaccines" not being available — proposing to rely on "infection prevention" — audiences mention a myriad of remedies, including ethanol, remdesivir, zinc and vitamin C.
Figure 1. Treatments mentioned by authoritative sources (Tweets and website data) and their Twitter audiences (Tweet replies and mentions of authoritative sources' website domains, e.g. "whitehouse.gov")
The use of ethanol, honey, lemon, cannabis, cocaine, colloidal silver and lopinavir is occasionally debunked by authoritative sources (Figure 1); ethanol, for example, in early February and April, both before and after audiences mention it. I found that the White House expressed doubts about the efficacy of honey and lemon after these ingredients gained traction amongst audiences. The same cannot be said about remdesivir, chloroquine, hydroxychloroquine, dexamethasone, prednisolone and tamiflu, about which authoritative sources mention ongoing research and testing. With the exception of prednisolone, all of these substances generate continuous audience engagement.
Authoritative sources often stress the uncertain nature of research on COVID-19 (Bostrom et al., 2015, p. 633). Authoritative claims do not just "debunk" false knowledge, but also express uncertainty about transmission, treatments and prevention. "Chloroquine" and "hydroxychloroquine", for example, are first presented as possible treatments for COVID infections by audiences; only later do authoritative sources follow (Figure 1). Conversely, authoritative sources question the "airborne" nature of COVID transmission, specifying that the virus can be transmitted by coughed "droplets" (Figure 4). Audiences do not always distinguish the technical term "airborne" from anything that can spread through the air, conflating "droplet transmission" with "airborne transmission".
Authoritative sources and their audiences often refer to different modes of COVID-19 transmission. Both mention close contact, coughing, sneezing and touch as modes of transmission, but audiences also refer to alternatives like "mosquito", "petrol", "radiation" and "chicken". Others, like "5G", are debunked by authoritative sources only after the topic gains significant traction among audiences in early April.
Figure 2. Modes of transmission mentioned by authoritative sources (Tweets and websites) and their Twitter audiences (Tweet replies and mentions of authoritative sources' website domains, e.g. "whitehouse.gov")
2. Audiences are divided around contradictory claims by authoritative sources
Audiences and official channels contradict each other on topics for which there is less scientific consensus, such as airborne transmission (Figure 3). Notable here is the number of contradictions among authoritative sources themselves, which partly explains public confusion about airborne transmission (Achenbach & Johnson, 2020; Lewis, 2020; Mandavilli, 2020). While the World Health Organisation expresses uncertainty throughout February, the White House confirms this form of transmission in late March, causing a cascade of similar statements among users. The World Health Organisation then states that airborne transmission can indeed occur — but "within one meter". One tweet by the World Health Organisation's Western Pacific regional office does, however, overturn this claim, and is overwhelmingly retweeted by audiences in late March (Figure 3).