What The ‘Facebook Papers’ Reveal About The Social Network’s Advertising Business

The “Facebook Papers” have been revelatory when it comes to how the company formerly known as Facebook deals with content moderation and a wide variety of other issues. However, they also shed light on the social network’s advertising business and how users respond to the ads they see. The thousands of pages of internal documents gathered and released by whistle-blower Frances Haugen provide myriad examples of how Facebook—which rebranded itself as Meta last month—and its subsidiaries develop ideas, roll out products and research users.

Much attention has focused on explosive details about how Facebook deals with misinformation and on the impact of its content on mental health, body image and political perceptions, effects that proved especially toxic for female politicians. But the documents also offer fascinating glimpses into Facebook’s ad products, how they’re perceived by users and marketers, and how the social network’s employees have sought to address a range of concerns.

One document from February 2016 raises questions about the impact of Facebook’s reaction emojis, which had debuted earlier that year. A post written by an unnamed Facebook employee detailed feedback from an advertising client at Joyable, the mental health services company, who asked that the “angry” emoji be removed from their ads. (The document also revealed that Joyable was spending $2,500 per day on Facebook ads, nearly $1 million a year.) Although just 5 of the 75 emoji reactions on the Joyable ad, which included the text “In 5 minutes, you could start overcoming social anxiety,” were “angry,” those 5 were too many. The client complained to the Facebook employee, explaining, “It’s bad for our brand to have people publicly panning our ads on Facebook” and “I’m sure it’s bad for our ROI also.”

“This is a particular problem for us because mental health is polarizing,” according to the comment from the Joyable rep to the Facebook employee. “It’s already a headwind for us to advertise on Facebook because of comments like ‘Social anxiety isn’t real,’ ‘Take your head out of your phone,’ or ‘Just drink alcohol.’ This is making it much worse.”

The revelations come from the hundreds of documents Haugen provided to the Securities and Exchange Commission, which her legal team also provided to Congress in redacted form. The redacted versions received by Congress were obtained last month by news organizations, including Forbes.

While speaking to the British Parliament last month, Haugen said that serving up “hateful, angry, divisive” ads was cheaper than running other kinds of ads. She added that Facebook’s ads were priced partly on the probability that users would interact with them.

“We have seen over and over again in Facebook’s research, it is easier to provoke people to anger than to empathy or compassion, and so we are literally subsidizing hate on these platforms.”

Facebook whistle-blower Frances Haugen, addressing British Parliament

“It is cheaper, substantially, to run an angry, hateful, divisive ad than it is to run a compassionate, empathetic ad,” Haugen said. “And I think there is a need for things even discussing disclosures of what rates people are paying for ads, having full transparency on the ad stream and understanding what are those biases that come and how ads are targeted.”
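Haugen’s description is consistent with a generic engagement-weighted auction, in which an ad’s effective bid is its monetary bid multiplied by a predicted probability of interaction. The sketch below illustrates that general mechanism with hypothetical numbers; the formula, figures and names are illustrative assumptions, not Facebook’s actual pricing model.

```python
# A minimal sketch of an engagement-weighted ad auction, assuming ads are
# ranked by score = bid * predicted probability of interaction. These
# numbers and names are hypothetical, not Facebook's real formula.

def bid_needed_to_win(rival_score: float, p_engagement: float) -> float:
    """Return the bid per impression needed to match a rival ad's score."""
    return rival_score / p_engagement

RIVAL_SCORE = 0.04  # hypothetical auction score of a competing ad

# An ad predicted to provoke reactions from 8% of viewers versus a
# neutral ad predicted to engage only 2% of them.
for label, p_engagement in [("provocative ad (8% engagement)", 0.08),
                            ("neutral ad (2% engagement)", 0.02)]:
    bid = bid_needed_to_win(RIVAL_SCORE, p_engagement)
    print(f"{label}: wins the impression with a bid of ${bid:.2f}")

# provocative ad (8% engagement): wins the impression with a bid of $0.50
# neutral ad (2% engagement): wins the impression with a bid of $2.00
```

Under this assumed scoring rule, an ad that provokes four times the engagement wins the same placement at a quarter of the bid, which is the dynamic Haugen describes as “subsidizing hate.”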

The Facebook Papers have already prompted at least a few brands to pull their advertising. Earlier this month, the egg-and-butter producer Vital Farms announced plans to pause all paid advertising on Facebook and Instagram “until we are confident that the content on their platforms is managed responsibly and not intentionally posing harm,” explaining that it’s “our small part to move a big conversation forward.”

Last week, Lush went a step further by deleting all of its social media accounts—going beyond Meta properties and also shutting down thousands of accounts across Snapchat and TikTok. The British cosmetics giant explained that it “wouldn’t ask our customers to meet us down a dark and dangerous alleyway” and called on regulators to pass laws to protect customers from “the harm and manipulation they may experience whilst trying to connect with us on social media.”

“I’ve spent all my life avoiding putting harmful ingredients in my products,” Lush cofounder and CEO Mark Constantine said in a statement. “There is now overwhelming evidence we are being put at risk when using social media. I’m not willing to expose my customers to this harm, so it’s time to take it out of the mix.”

How Political Ad Preferences Shape Facebook’s Perception

Perhaps one of the biggest revelations about Facebook’s advertising business has been how it has handled politics. Even before the November 2020 election, political ads were perceived as unappealing. Internal Facebook documents from March 2020 found that users who were flooded with political ads were less happy with their experience on the platform. And when comparing users’ negative reactions to political ads with their reactions to nonpolitical ads, Facebook researchers found that users closed out of political ads at rates similar to “sexually suggestive” or “scammy” ads.

In the report, titled “How do political ads impact user sentiment towards FB ads?,” researchers found that for one tenth of users, political ads accounted for more than 8% of the impressions in their feeds during the first two months of last year. Facebook’s research also found that users whose news feeds were more than 10% political ads reported being “somewhat or very dissatisfied” at a rate 1.3 times higher than users whose feeds were 0% to 1% political ads. In fact, users with high exposure to political ads were “significantly more likely to be dissatisfied with their Facebook ads experience.”

The findings also shed light on why people are so bothered by the ads. For example, some users had negative reactions to ads they perceived as “misleading, offensive or fake news content.” Other respondents said they didn’t like ads that they felt contained a “sensitive topic” or didn’t match their own political affiliation. (However, Facebook’s researchers said affiliation mismatch didn’t fully explain most of the ads that people closed.)

A survey of 3.6 million Facebook users conducted by the company earlier this year found that 30% of young adults in the U.S. reported “seeing too many ads.”

Facebook’s research also found that users weren’t very likely to see ads from opposing viewpoints. Conservative users saw less than 2% of their ads from pages with primarily liberal audiences and around 10% from Democratic civic graph pages. Meanwhile, liberal users saw “almost no ads” from Republican civic graph pages and less than 5% from pages with primarily conservative audiences. (The ads most likely to be x-ed out came from pages affiliated with moderates.) Researchers suggested addressing the issue by having Facebook more prominently show users how to change their ad preferences while also limiting the overall load of political ads.

In a separate report titled “Effects of Turning Off Political Ads,” dated August 25, 2020, the author wrote that users saw “slightly less civic content” after political ads were turned off for two weeks. However, researchers found that clicks on civic content stayed about the same, even as users saw more of that content from Groups instead of Pages.

Documents also show how Facebook has struggled with political ad transparency. In a post from November 9, 2018, a Facebook employee explained that there was “nothing we could do” when it came to some aspects of properly labeling advertisers or preventing manipulative actors. For example, the company found that a Facebook Page associated with a cluster of right-wing Pages had bought ads appearing to support liberal causes, without the ads being labeled as bought by advertisers with conservative ties. The tactic, known as astroturfing, is something Facebook has associated with voter suppression.

Over the past few years, Meta has taken additional steps to improve transparency of political ads and more accurately authenticate them. In 2019, the company began requiring political advertisers to provide more information about their organizations before ads could run. In 2020, it began preventing new ads on social issues, elections and politics from running between October 27 and November 3.

“Since the 2018 midterms, we have strengthened our policies related to election interference and political ad transparency,” a Meta spokesperson said in an emailed statement to Forbes. “We continue working to make political advertising more transparent on our platform and we welcome updated regulations and help from policymakers as we evolve our policies in this space.”

Other leaked internal files in the Facebook Papers detail Facebook’s war against vaccine and QAnon misinformation during the Covid-19 crisis across ads, posts and comments. For example, a document from March 2021 uses the analogy of a rock thrown into a pond to describe how misinformation spreads, in which the rock is “bad content entering our system” and the ripples are how the social network responds. While the company has sought to “stop as many rocks from being thrown,” “mute the ripples” and “fill the void with good content and conversations,” the document explains there is more to do to mitigate misinformation across ad frames, video ad breaks and Instant Articles. (Facebook has also considered using vaccine-hesitancy content as a possible case study for harsher enforcement against foreign ad farms.)

“The fact remains that anything near and dear to people’s hearts, such as their health, people will exploit for profit, and authoritative info sells less well than fear,” according to the document.


The Impact Of Whistle-Blowing So Far

Revelations from the Facebook Papers have also given politicians around the world more fodder for investigating Meta on a number of fronts. While Haugen has already testified in front of Congress, members of the U.S. Senate will speak with Adam Mosseri—the head of Meta-owned Instagram—on December 6 as part of a series of hearings about how to protect children online. (Meta’s head of safety, Antigone Davis, also met with lawmakers back in September and disagreed with allegations that the company’s platforms are harmful for teens.) Meanwhile, lawmakers in the U.S. House and Senate are considering several pieces of legislation related to data privacy and antitrust.

European Union leaders have also cited the Facebook Papers as another reason to move forward with a proposal to regulate Big Tech’s political advertising and other parts of business. Earlier this month at the Web Summit tech conference in Lisbon, Věra Jourová, European Commission vice president for values and transparency, said lawmakers “would not be able to convince the people that regulation is needed” if whistle-blowers like Haugen and others had not shed light on the company’s internal processes. 

“If we want to be sure that people are free to choose, we need to make sure the information they see online is not fueled by obscure functioning of platforms, algorithmic systems and an army of undetected bots.”

Věra Jourová, European Commission vice president for values and transparency

In the weeks following the Facebook Papers release, Meta made a number of changes, including to its advertising business and its data privacy policies. On November 2, the company announced it was shutting down facial recognition software that had been criticized by consumer advocates. One week later, it announced it would no longer allow advertisers to buy ads based on data related to users’ race, political affiliation, sexual orientation, religion and health—information deemed too sensitive to be used in targeted messaging.

While speaking onstage with Forbes at Web Summit, Christopher Wylie—a former employee of Cambridge Analytica who shed light on Facebook’s data privacy issues back in 2018, when he came forward as a whistle-blower—said the Facebook Papers and the discussions around them feel to him like “déjà vu.”

“Really déjà vu for me with the Senate hearings, and all this,” Wylie said. “We’re just talking about the same thing over and over and over again. We’re sort of stuck in this loop, and I think one of the problems is it’s clear there are a lot of problems, and those are constantly being discussed, but we’re sort of missing the conversation around solutions and frameworks for regulation.”

Brand Safety Issues Are Déjà Vu, Too

Facebook employees also expressed concern about the company allowing right-wing websites to be part of its broader network of publishers. In a post dated June 4, 2020, a Facebook employee wrote “Do I need to explain this one” alongside a photo of a number of Breitbart News headlines related to the George Floyd protests. An earlier post, from October 2018, was written by a Facebook employee working on the Facebook Audience Network, a group of more than 50,000 publisher websites that Facebook advertisers can reach off the social network. The post, titled “We need to talk about Breitbart (again),” argued that while Facebook claimed to be politically neutral and Breitbart had not yet seemed to violate any Facebook policies, allowing the website to monetize through Facebook was “a political statement.”

The Facebook employee, whose name was redacted from the document, said that 11,000 advertisers had added Breitbart to their list of websites on which they wanted to avoid advertising, adding that there were 30,000 block lists in existence, and nearly every advertiser with a block list included Breitbart. (Facebook finally removed Breitbart from its Audience Network last year.)

“When talking about brand safety, which is a huge deal for most of our advertisers and the second-most likely reason for advertiser churn, we also hear about Breitbart,” the Facebook employee wrote. “When they talk about which publishers they want to block, it’s often them…This isn’t right-wing news, which I agree should be allowed just as much as left-wing news, it’s vitriol. It’s losing us advertisers, trust, money, and moral integrity on a daily basis. We need to reconsider and act.”

When asked about the decision to remove Breitbart from the Facebook Audience Network, a Meta spokesperson told Forbes that the company defers to third-party fact-checkers, who rate specific pieces of content and manage internal systems for repeat offenders. Those systems can apply penalties when a page’s content receives multiple false ratings and block them from receiving monetary incentives or advertising.
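As described, that resembles a standard strike-based repeat-offender policy. The following sketch is a hypothetical illustration of such a system; the threshold, field names and penalties are assumptions for illustration, not Meta’s documented rules.

```python
from dataclasses import dataclass

# Hypothetical sketch of a strike-based repeat-offender system of the kind
# the spokesperson describes: fact-checker "false" ratings accumulate
# against a page, and crossing a threshold cuts off monetization and ads.
# The threshold and penalty details are assumptions, not Meta's rules.

STRIKE_THRESHOLD = 3  # assumed number of false ratings before penalties

@dataclass
class Page:
    name: str
    false_ratings: int = 0
    can_monetize: bool = True   # eligible for monetary incentives
    can_advertise: bool = True  # eligible to buy ads

def record_false_rating(page: Page) -> None:
    """Log a fact-checker 'false' rating and apply repeat-offender
    penalties once the page crosses the threshold."""
    page.false_ratings += 1
    if page.false_ratings >= STRIKE_THRESHOLD:
        page.can_monetize = False
        page.can_advertise = False

page = Page("example-publisher")
for _ in range(STRIKE_THRESHOLD):
    record_false_rating(page)
print(page.can_monetize, page.can_advertise)  # False False
```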

“We make changes to reduce problematic or low-quality content to improve people’s experiences on the platform, not because of a Page’s political point of view,” the Meta spokesperson said. “When it comes to changes that will impact public Pages like publishers, of course we analyze the effect of the proposed change before we make it.”

Other documents show how Facebook directed its global sales team to articulate its response after the January 6 attack on the U.S. Capitol. In an internal document labeled high priority, the company encouraged employees to “respond reactively” to questions from clients, even offering a script. In response to a question about whether advertisers should pause spending, the script outlined how Facebook was reviewing and removing content that broke its rules against inciting violence on the platform. Another answer explained why the company removed a video posted by then-U.S. President Donald Trump, suggesting that his posts “contribute to, rather than diminish, the risk of ongoing violence.”

An update on January 7 included answers to questions about Facebook’s decision to block Trump from posting, including why it had decided that now was the time to take action. On January 15, it updated the post to include responses regarding whether the company saw violence coming ahead of time and, if so, why it didn’t act. The post noted that prior to the attack, Facebook had removed more than 600 “militarized social movements from our platform,” as well as the original “Stop The Steal” group and various hate groups.

“Demand Side Problems”

In an internal note to staff dated August 2020, longtime Facebook veteran Andrew Bosworth, who joined the social network in 2006, built Facebook’s mobile advertising business and took on a number of other high-profile roles before being named chief technology officer in September, described Facebook’s problem with moderating hate speech as one of supply and demand. (The post was also published to Bosworth’s public blog in January 2021.) He said the more Facebook invests in ways to improve its content quality controls, “the harder people work to circumvent those tools.”

“As a society we don’t have a hate speech supply problem, we have a hate speech demand problem.”

Meta CTO Andrew Bosworth

“Online platforms work on the supply side because they don’t control the demand side, and they will continue to invest huge amounts there to keep people safe,” he wrote. “It is a key component of our responsibility as a platform (and I think we do it better than any of our competitors here at Facebook). But until we make more social progress as a society, we should temper our expectations for results.”

Bosworth also compared overcoming the issue to building Facebook’s ads business between 2012 and 2017, saying it’s “very tempting to focus on growing the supply of spaces to show ads because that gives a predictable return on investment.”

“However, if demand remains fixed you are just buying your way down the inventory in terms of quality and into diminishing marginal returns,” he wrote. “Instead we focused on the demand side, which is slower to move and harder to measure, but increases the value of existing inventory.”

A “Medieval” Meta City

Perhaps one of the most interesting metaphors for Facebook and the Facebook Papers comes from an October 2018 post titled “A note about plagues.” The author wrote that Facebook is “currently a medieval city,” and while such a city may have marketplaces, art galleries, universities and inventions, there are also plagues.

“Before you even realize what is happening, it sweeps through the city like a fire,” the author wrote. “Its virulence is fearsome. You have never seen anything like it. You are trying to contain it, but nothing that you can do seems to have any effect. The plague eventually ends. But the city lost many inhabitants. The ones that survived are scared.”

The employee notes that some people in the medieval city might say “this was a consequence of some simple error” that can be fixed to prevent future plagues, while others might suggest it’s “not our problem” if disease is inevitable:

“But that would also be a mistake. In fact, this is largely our doing. By creating this city we opened huge opportunities for the people, but we have also opened great opportunities for the germs. We have put people closer together than they were ever. We squeezed them so that waste produced by one infects others. We increased numbers of contacts everyone has. By doing this we broke some thresholds that prevented local outbreaks from turning into global ones. This is our responsibility.
Some might say that this is just how the world works. Those who want to live in a city and profit from it, have also to accept the risk of catching a disease. But that would also be a mistake. This problem can be solved.”

The post also notes that cities have built sewers for waste, filters for drinking water, insecticides for fleas and antibiotics and vaccines for disease—suggesting that problems have been solved to mitigate major issues in the past.

“People come here and get value from it, but they face new dangers that they are not used to,” the author wrote. “It is something that never existed in the history of the world, so it is entirely reasonable that we do not understand its consequences yet. But it doesn’t mean that we should accept them. We have a unique opportunity to study them and find solutions.”

Whether Meta can emerge from its “medieval” Facebook era and enter a renaissance period is yet to be determined. But on the company’s third-quarter earnings call in October, Mark Zuckerberg, Facebook’s cofounder and Meta’s CEO, dismissed the Facebook Papers as a way to “paint a false picture of our company” while noting that “good faith criticism helps us get better.”

“I also think that any honest account should be clear that these issues aren't primarily about social media,” Zuckerberg said. “That means that no matter what Facebook does, we're never going to solve them on our own. For example, polarization started rising in the U.S. before I was born. . . . The reality is, if social media is not the main driver of these issues, then it probably can’t fix them by itself either.”
