The sad fate of small Facebook audiences: USA

Team Members

Nora Svensson Hahr
Jesper Brinkhof
Fem Akcan
Cong Hung Dinh


1. Introduction


In 2018, Facebook released its ad library, allowing users to search among the ads posted on the social media platform. Much attention has been devoted to analyzing the content and behavior of the actors spending the most on advertisements (e.g. Webb & John, 2021; Global Witness, 2022; Korn, 2022; Hamilton, 2023; Michael, 2023). These users have been found to share political misinformation and extreme content. However, little attention has been paid to small spenders and ads reaching a limited audience. This gap exists despite the fact that the library of small spenders is vast. Preliminary research suggests that the content shared through these smaller advertisements frequently contains misinformation and is funded by fraudulent actors. In this project, we aim to investigate the content and users engaged in small ad spending on Facebook. We focus on advertisements targeted to the US, motivated both by the country's widespread Facebook usage and by the upcoming national election, which provides a particularly relevant political backdrop to the project.

We first map out frequently occurring content among small ad spenders in the US over a three-month period. Following this analysis, we identify a specific insurance scam particularly prevalent within the dataset and map out how this scam operates, its aesthetics and vernaculars, and its target audience. The investigation shows that over a three-month period, thousands of ads distributed over a few hundred Facebook accounts garnered between 23.7 and 34.6 million impressions and provided Facebook with approximately 669,500-1,470,681 USD in ad revenue.

2. Research Questions

The research question was designed following an initial investigation of the dataset, identifying the overrepresentation of lead generation scams in the ad library. Therefore, the research question was formulated as: How does lead generation operate through scam/misinformation Facebook advertisements in the US? We expected to find similar patterns in both operation and aesthetics amongst the scam accounts. Furthermore, we hypothesized that the scams would target various minority groups living under economic stress, as such demographics presumably would be more eager to accept a scam offer.

3. Methodology and initial datasets

There were two primary datasets throughout our investigation. The first was the ad library report used for the content analysis of advertisements with an investment of less than $100. This dataset was accessed through the Meta ad library and contained all advertisements regarding social issues, elections, or politics posted to Facebook in the 90 days preceding January 6th, 2024. Before the analysis, all advertisements for which the ad spending exceeded $100 were removed. The second dataset was created after the case study of the $6400 subsidy scam had been identified through the initial analysis. It was also accessed through the Meta ad library, by searching for all ads containing "$6400" as the keyword.

3.1 Meta Ad Library Report

We limited the investigated country to the United States (see Figure 1). First, we chose "Last 90 days" and "Last day" as the options for downloading the full report about Facebook ads across the US states (see Figure 2). We also extracted data from the subset of this report covering Facebook ads in each state. The collected datasets, exported as CSV files, included the crucial information about each Facebook page: its ID, name, disclaimer, amount spent, and number of available ads.

In these files, we filtered Facebook ads on the condition "≤100", i.e., an ad amount spent lower than or equal to $100. We then selected four states: California, Texas, Arizona, and New Mexico. Compared to other states, advertisers in these states promoted a higher proportion of scam and misinformation ads meeting the filter condition.
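The spend filter can be sketched as follows. The column names and rows below are illustrative, not the exact headers of the Meta ad library export; as quoted above, the report lists low-spend pages with the literal value "≤100" rather than a number, so the filter must handle both cases.

```python
import csv
import io

# Illustrative miniature of an ad library report export (hypothetical rows).
csv_text = """page_id,page_name,amount_spent
101,Example Benefits Page,≤100
102,Example PAC,25000
103,Example Insurance Page,≤100
"""

def is_small_spender(amount: str) -> bool:
    """True when the reported spend is at most $100."""
    if amount.strip() == "≤100":  # literal placeholder used for low spend
        return True
    return float(amount) <= 100

rows = csv.DictReader(io.StringIO(csv_text))
small = [r["page_id"] for r in rows if is_small_spender(r["amount_spent"])]
print(small)  # → ['101', '103']
```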

Based on the $6400 subsidy scam identified in our initial datasets, we used "$6400" as the keyword to search for relevant Facebook ads across Facebook pages in the US (see Figure 3). Connected to the general datasets above, this second data collection was crucial to our report because it allowed us to compare threats from scam/misinformation Facebook ads with an amount spent ≤ $100 against those with an amount spent over $100. However, the search results included Facebook pages from all US states without any selection.

3.2 Content Analysis

Based on our data about Facebook ads in the chosen US states, we performed a content analysis of the ads. According to Lewis et al. (2013), analyzing social media data through a list of collected samples helps highlight meaningful content and explore its categories; this benefit applied to both our qualitative and quantitative analysis of Facebook ad content. We created a list sorting the most notable Facebook pages and websites linked to the analyzed ads (see Figure 4), excluding any ads connected to invalid Facebook accounts. Taking an inductive approach, we identified the prevalent topics addressed in the problematic Facebook ads on this sorted list. These topics defined the categories used in our analysis.
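Our coding was manual and inductive, but the resulting tally can be illustrated with a simplified keyword-based sketch; the category keywords and ad texts below are hypothetical and stand in for the actual coding scheme.

```python
from collections import Counter

# Hypothetical keyword lists approximating the inductively found categories.
CATEGORIES = {
    "Insurance": ["insurance", "coverage", "premium"],
    "Healthcare": ["health", "medicare", "medicaid"],
    "Benefits": ["benefit", "subsidy", "allowance"],
}

def code_ad(text: str) -> str:
    """Assign an ad to the first matching category, else 'Others'."""
    lowered = text.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in lowered for k in keywords):
            return category
    return "Others"

ads = [
    "Claim your $6400 subsidy today!",   # → Benefits
    "Low-cost coverage for seniors",     # → Insurance
    "Invest in gold now",                # → Others
]
print(Counter(code_ad(ad) for ad in ads))
```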

Our initial observations (see Figure 5) showed that the $6400 subsidy scam case study involved ads with similar messages tricking people into subscribing to fake benefit programs. We therefore analyzed the ad content from this case study using the categories discovered in the general datasets above.

3.3 Visual and Format Analysis

Alongside the content analysis of scam/misinformation Facebook ads, we examined the visuals and formats of the Facebook accounts and websites associated with these ads. Adami and Jewitt (2016) showed that graphics/images, texts, and videos on social media and the web encourage users to interact with pages through visual attraction. We investigated changes in Facebook pages' names, variations in the website templates linked to Facebook ads, and patterns in their photos, videos, and designs. This helped us explore how advertisers reused similar or different visuals to duplicate and spread problematic ad content to larger audiences.

3.4 Audience Analysis

Using the gathered data about Facebook ads and pages in the chosen US states and our case study, we examined the audiences targeted by these ads, focusing on demography, gender, and the distribution across audience groups. This information was significant because we wanted to explore which audience groups were vulnerable to misleading Facebook ad content, and whether advertisers relied on demography or gender to circulate their scam/misinformation ads. By making misguided content look reliable to these groups, advertisers could gradually increase audience engagement with problematic Facebook advertising.

3.5 Network Analysis

To observe connections between the websites linked from misleading Facebook ads, we analyzed the network of these websites in Gephi (Gephi, n.d.). As seen in Figure 6, we used Table2Net to convert our datasets from CSV into GEXF files (Jacomy, 2013), then imported these GEXF files into Gephi to examine the nodes and links among the relevant websites. In Gephi, we selected "ForceAtlas 2" as the main network layout. This method supported our goal of interpreting connections between the web links referred to in problematic Facebook ad content.
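The core signal we looked for in this network, websites linked from more than one Facebook page, can be shown in miniature. The page and site names below are hypothetical stand-ins for the page-to-website edge list we fed through Table2Net.

```python
from collections import defaultdict

# Hypothetical page → linked-website edge list (illustrative names only).
edges = [
    ("Benefits Page A", "subsidy-example.com"),
    ("Benefits Page B", "subsidy-example.com"),
    ("Benefits Page B", "health-quiz-example.com"),
    ("Insurance Page C", "health-quiz-example.com"),
    ("Insurance Page C", "unrelated-example.com"),
]

# Group pages by the website they link to; a site linked from multiple
# pages suggests the pages belong to the same operation.
pages_per_site = defaultdict(set)
for page, site in edges:
    pages_per_site[site].add(page)

shared = {s: sorted(p) for s, p in pages_per_site.items() if len(p) > 1}
print(shared)
```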

4. Findings

Our initial dataset showed various kinds of problematic Facebook ad content, comprising personal political campaigns, conservative and far-right media, advocacy groups, and scam/misinformation programs. Among them, we found three prevalent categories: insurance, healthcare, and benefits (see Figure 7). Facebook ads that did not belong to these three categories were classified as "Others".

The insurance, healthcare, and benefits scam/misinformation ads on Facebook involved misleading messages tricking users into sharing personal information to join fake campaigns or programs. Regarding the share of each category of problematic ad content, "Benefits" was the most prominent topic, with "Insurance" and "Healthcare" combined the next most common. The remaining misleading Facebook ads promoted other topics (e.g., finance, investment). These findings show advertisers' focus on creating content that gave audiences false information about insurance, healthcare, and benefits services.

4.1 Overview of Scam/Misinformation Facebook Ads in the US
4.1.1 The Operation of Scam/Misinformation Facebook Ads and Lead Generation

Health insurance in the United States is a complex system. Apart from people insured through their work, we identified three healthcare programs mentioned in ads: the Affordable Care Act (ACA), Medicare, and Medicaid. The Affordable Care Act is a federal law, often referred to as Obamacare, that covers all states and offers subsidies to lower-income individuals (Affordable Care Act, n.d.). Medicare provides health insurance to adults aged 65 and over and consists of four parts (A to D), with Part C covering extras (such as dental) and Part D covering prescription drugs. Both of these parts are run by private companies and are what the scam ads focus on. Finally, Medicaid is a combined state and federal program serving low-income, disabled, and otherwise disadvantaged Americans. Individuals can be enrolled in combinations of these programs as well as private insurance. Because individuals can hold multiple types of insurance, and because more insured people means more profits for companies, there is a market for what is called lead generation.

Lead generation is the practice of searching for individuals to whom companies could sell (health) insurance (Inbound Medic, 2023). Leads are generated through advertisements, and these ads can be run through Facebook. The requirements to run a Facebook ad are extremely low: a payment method, a Facebook page, and a "business" account that can be set up through any Facebook profile. There is no follower or like threshold, minimum page age, or cap on the number of ads. As we observed, this led to widespread abuse of the system, for example by a page with one follower running over 100 ads. These ads followed similar styles and had either a direct call link or a link to a webpage, usually carrying a health-related name. The fact that the ads were very similar in content and that multiple pages linked to the same website suggested that they were run by a small number of individuals. Additionally, lead generation services were offered online, which we analyzed as well. The people running the ad campaigns got paid in multiple ways: through a pay-per-call model, where businesses pay per inbound call (Wade, 2023); by selling the forms filled with personal information to referral companies; and by selling sales opportunities to third parties for robocalls, emails, text messages, and so on, which the lead generation disclosure at the bottom of their web pages allows them to do. Such partner lists could run into the thousands.

Because individuals with the right profile might actually qualify for some type of insurance, the business seemed more legitimate, even though all the practices deployed were unethical at best and Federal Trade Commission (FTC) violations at worst, as shown in a previous case (Federal Trade Commission, 2022).

4.1.2 Audiences Targeted in Facebook Ads of Some Typical Facebook Pages

We investigated audiences via three typical Facebook pages, drawn from our list of the most notable Facebook pages and websites linked to the analyzed ads across the selected US states: "HRBC Insurance" in California, "$6,400 Healthcare Rescue For Americans" in Texas, and "Advantage Benefits Program" in New Mexico.

Because the Meta Ad Library limits exports to three files per day across all investigated accounts, we chose the three pages above so that data collection would fit within the duration of the Digital Methods Winter School 2024. These pages were also chosen because they promoted more problematic ad content in insurance, healthcare, and benefits than other pages. Analyzing the audiences reached by their ads helped us generalize results about the demographic and gender distribution.

The 55-64 and 65+ groups accounted for the highest portion, together making up over 50% of the average audience size (see Figure 8); these two groups were targeted most by the scam/misinformation Facebook ads of these typical US accounts. Younger age groups were far less prevalent, with only small percentages reached by these ads. Unlike age, gender was distributed more equally between female and male users (see Figure 9), with a few audiences falling into the "Unknown" type. These findings show that advertisers paid more attention to age than gender when promoting their misguided Facebook ad content.

4.2 The Case Study of $6400 Subsidy Scam

Throughout our analysis, content regarding a $6400 subsidy scam was prevalent. The scam uses Facebook to advertise a made-up subsidy worth $6400. Users are encouraged to apply for the subsidy by clicking the links in the advertisements or calling particular phone numbers. When users follow these calls to action, scammers extract their personal information and pass it on to referral companies and other third parties.

Numerous sources claimed that the scam originated on March 13th, 2003, as a viral video on Facebook (Ankit, 2024; Jamdade, 2024). However, since Facebook was not launched until 2004, this cannot be true. Through targeted searches of Google Trends and the Meta ad library, we dated the earliest instances of this scam to April 2023, with a steady increase in both scams and Google searches from October of the same year (see Figure 10).

4.2.1 Images from Facebook Profiles

Upon finding and analyzing a number of Facebook profiles linked to scam/misinformation websites from our case study, an evident pattern emerged in how these profiles, ads, and websites were set up. To target their primary audience, identified as the elderly, the profiles used strategic yet "simplistic" templates to garner the trust of their target demographic. Many of these scam profiles used "business" profile names and profile pictures (see Figure 11), or appeared to be official government profiles, or at least loosely tied to government organizations (see Figure 12). These profiles used the American flag, pictures of government officials, and strategic profile names such as "American Capital Protection" as integrity props to maintain credibility. In line with their target audience, these scam profiles also strategically used images of elderly people as a method of online deception and exploitation (see Figure 13). This can be understood as a form of "impersonation", as it put the target audience under the impression that they were dealing with profiles meticulously set up for their benefit. The study also revealed an increased use of images created with generative AI to set up profiles that conveyed credibility (see Figure 14). Since using real images on fake profiles could lead to legal consequences for privacy violations, especially for fraudulent profiles such as those investigated here, scammers opted to generate "real personas" to build authenticity with limited drawbacks.

All of these factors were beneficial for problematic Facebook ads to build trust with elderly audiences who were easily deceived into giving away their personal information.

Though AI-generated personas are challenging to spot at first glance, the images showed inconsistencies, notably in eye and teeth symmetry, an airbrushed appearance, and sophisticatedly blurred backgrounds (see Figure 15). Our analysis relied on checking for variability, the background, and the "perfection" of a given image. It is crucial to note that many of these images may be retouched after generation to eliminate signs of inauthenticity.

4.2.2 Videos from Scam/Misinformation Facebook Ads

Another pivotal factor in our research was the set of patterns unveiled in the ad videos posted on Facebook. To achieve a level of authenticity, these Facebook ad videos employed various elements. One of the most important patterns was the obvious use of stock videos, as well as clips obtained from TikTok and YouTube (see Figure 16).

Different clips that loosely correlated with the narration were cut and edited together into a cohesive narrative. A majority of the clips consisted of fake news snippets and "real"-appearing interviews or phone calls performed by paid actors. Although the narration alternated between different voices across many of these videos, the script was identical, with keywords such as "easy application", "hassle-free", or "anyone can qualify" highlighted profusely.

More alarming was the way several of these videos used deepfake voices of celebrities with frequent TV appearances. An example was Steve Harvey: because of his extensive television presence, his voice was used in deepfake duplicates to enhance the videos' legitimacy, as many Americans are familiar with it.

Considering that the target demographic of these scam ads is elderly people, we can explain why the videos appeared "amateurish": those behind them may consciously opt for less refined presentations to make the content seem more understandable and approachable.

4.2.3 Website Templates

The websites linked from the ads used simple templates posed as questionnaires or quizzes (see Figure 17), leading to notices stating that the subject (the victim of the scam) was eligible for the $6400 benefit and asking them to call a certain number for more information. Bright colors such as red, green, and blue for buttons on a plain white background were another way of making the websites as easy to navigate as possible.

While the oversimplified websites offered accessibility and easier navigation for the target demographic, they also looked highly unreliable and questionable. A portion of these scam ads therefore led to more professional-looking websites built to enhance their authenticity (see Figure 18). These used muted colors, images of families, and more professional typography. Nonetheless, they led to the same outcome as the simpler templates, requiring a form or quiz to be filled out or a number to be called to extract further personal details.

4.2.4 Audiences Targeted in the $6400 Subsidy Scam

From our case study of the $6400 subsidy scam, we found a relatively even distribution of age and gender among audiences (see Figures 19 and 20). The younger age groups (25-34, 35-44, 45-54) made up a larger share of the people reached by the $6400 scam campaigns, while the portion of 55-64-year-olds decreased compared to the general audiences reached by typical US Facebook pages. However, this remained the biggest age group targeted by fake Facebook pages promoting the $6400 subsidy scam and other scam/misinformation programs in insurance, healthcare, and benefits. Meanwhile, there was only a slight difference between the portions of female and male audiences. These case-study findings align with our earlier findings about advertisers' focus on age rather than gender in developing problematic Facebook ad content in the US.

4.2.5 Websites Linked to Scam/Misinformation Facebook Ads

To identify whether the $6400 ads were related to each other in ways beyond the shared concept, we constructed a network in Gephi, a network visualization tool. As can be seen in Figure 21, some web pages were linked to by multiple different Facebook groups, signaling a connection between these pages. The network also gave a visual representation of the most-linked websites, most of which seemed to have since been taken offline.

5. Discussion

Based on the above findings, US Facebook ads with low amounts spent (≤ $100) primarily exploited older audiences' vulnerability to attractive offers regarding insurance, healthcare, and benefits. Through lead generation, advertisers leveraged Facebook's affordances, such as the audience targeting functions used when setting up ads, to customize problematic ad content based on users' interests. According to Skrmetti (n.d.), scammers start by finding publicly shared personal information on audiences' Facebook profiles before promoting misleading ads. This strategy makes them proactive in understanding targeted people's behavior and delivering misguided ad posts that address their needs, increasing the success of scam/misinformation activities on Facebook. Discussing lead generation, Hutchinson (2023) highlighted a decline in referral traffic from Facebook, pushing various businesses and organizations to find other ways to increase their ad traffic. In the US, this situation might explain referral companies' lax control over the quality of Facebook ad content: needing the profits from working with advertisers, they accepted referrals from all of them despite the risks of scam/misinformation promotion.

Our results about scam/misinformation Facebook ads in the US also suggest that Facebook's capabilities for detecting and deactivating these ads are still limited. Advertisers duplicated their ad content by creating similar versions, which slipped through Facebook's filters and gathered thousands of impressions among small audiences. In our case study, the $6400 subsidy scam garnered 23.7-34.6 million impressions and generated around 669,500-1,470,681 USD in ad revenue over the past three months. The proliferation of unauthorized business and government imagery in Facebook profiles blurred the boundary between real and fake accounts, and $6400 subsidy scam advertisers treated this as an opportunity to trick users into trusting their misguided ad content and develop their scam activities. In a similar discussion, Binder (2021) showed that scammers use information from reliable businesses and governmental organizations (e.g., customer service email addresses) alongside images on their Facebook profiles to avoid deactivation while running scam ads. They also leverage AI to generate fake faces for their Facebook pages, making up misleading personas that reflect details about target audiences. These issues point to insufficient verification of problematic Facebook ads before they are allowed to run.
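As a rough sanity check on these figures, the implied cost per thousand impressions (CPM) can be bracketed by pairing the extremes of the two reported ranges. This is our own back-of-the-envelope arithmetic, not Meta's actual pricing.

```python
# Figures reported above for the $6400 subsidy scam over three months.
impressions_low, impressions_high = 23_700_000, 34_600_000
revenue_low, revenue_high = 669_500, 1_470_681

# CPM = revenue per 1,000 impressions; pairing extremes gives a loose range.
cpm_low = revenue_low / impressions_high * 1000    # cheapest plausible CPM
cpm_high = revenue_high / impressions_low * 1000   # most expensive plausible CPM
print(f"Implied CPM range: ${cpm_low:.2f} - ${cpm_high:.2f}")
```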

Given advertisers' focus on audiences' age rather than gender, our findings from both the general analysis and the case study indicate that scam/misinformation Facebook ads were tailored to the preferences of each age group regardless of whether users were male or female. This explains why age was the highlighted field in the scam forms connected to these ads, and why it became the foundation for the standard templates of fake websites promoting misleading insurance, healthcare, and benefits campaigns. Beyond the templates, the connections between these websites in our network analysis emphasize the significance of shared links among Facebook pages: advertisers might collaborate to reuse these links across similar and different scam/misinformation ad posts, expanding the network of fake programs advertised on Facebook.

6. Conclusions

This research has shown that the ad library can be a valuable resource in identifying problematic practices on Facebook. Even though it only shows advertisements in a smaller subset of categories, the nature of these categories, such as housing, credit, elections, and employment, means deceitful ads can be especially harmful, as they often involve personal information and disadvantaged or emotional situations. It is therefore important to remain critical of the moderation in this space, especially in the niche of small spenders and limited-reach advertisements.

7. References

Adami, E., & Jewitt, C. (2016). Special Issue: Social media and the visual. Visual Communication, 15(3), 263–270.

Ankit. (2024, January 11). 6400 Dollars Subsidy Real or Fake? Let’s Know All About It. SarkariResult.

Binder, M. (2021, October 29). Facebook scammers are hacking accounts and running ads with stolen money. Mashable.

Federal Trade Commission. (2022, August 8). FTC Action Against Benefytt Results in $100 Million in Refunds for Consumers Tricked into Sham Health Plans and Charged Exorbitant Junk Fees.

Global Witness. (2022, August 15). Facebook fails to tackle election disinformation ads ahead of tense Brazilian election. Global Witness.

Gephi. (n.d.). Gephi - The Open Graph Viz Platform (Version 0.10).

Hamilton, E. (2023, March 8). The four factors that fuel disinformation among Facebook ads. University of Florida News.

Hutchinson, A. (2023, October 3). New Report Shows Referral Traffic From Facebook and X Continues to Decline. Social Media Today.

Affordable Care Act (ACA). (n.d.).

Inbound Medic. (2023, December 12). What Is Lead Generation In Healthcare? Inbound Medic.

Jacomy, M. (2013). Table2Net.

Jamdade, M. (2024, January 18). 6400 Dollar Subsidy Real or Fake - All you need to know about it! BScNewsPortal.

Korn, J. (2022, October 21). Facebook and TikTok are approving ads with “blatant” misinformation about voting in midterms, researchers say. CNN Business.

Lewis, S. C., Zamith, R., & Hermida, A. (2013). Content Analysis in an Era of Big Data: A Hybrid Approach to Computational and Manual Methods. Journal of Broadcasting & Electronic Media, 57(1), 34–52.

Michael, C. (2023, November 15). Meta allows Facebook and Instagram ads saying 2020 election was rigged. The Guardian.

Meta. (n.d.). Meta Ad Library Report.

Skrmetti, J. (n.d.). What You Need to Know about Health Care Scams. Tennessee State Government.

Webb, M., & John, B. (2021, March 31). We need to know more about political ads. But can transparency be a trap? Nieman Lab.

Wade, B. (2023, September 19). Pay-per-call: The ultimate matchmaker, pairing marketers with ambitious small business owners. Nimbata.

Topic revision: r2 - 04 Mar 2024, RichardRogers