Should Social Media Be Responsible For Illegal Ads On Their Platforms?


Ads are the lifeblood of the modern web, transforming the information economy from one of monetary payments to one of personal data bartering. Powering this ad revolution is the ability to precisely target advertisements to the pinpoint demographics an advertiser wants to reach. That immense precision, however, is increasingly coming into conflict with laws designed to prevent discrimination, as advertisers are permitted to exclude their ads from being shown to people of certain genders, races or age ranges, in violation of the law. The social media companies, in turn, have argued that they are merely neutral platforms and that it is up to advertisers not to break the law, even though the platforms they have designed directly facilitate running discriminatory ads. All of this raises a question: should social media platforms be held responsible for illegal advertisements run on their services?

The modern digital advertising economy is built upon precision demographic and interest targeting. Platforms like Facebook allow advertisers to request that an ad for an apartment building not be shown to minorities or to those with families or disabilities. They can request that a job ad not be shown to women or to those over a certain age. The platforms make placing such illegal advertisements as easy as possible, offering one-click discrimination while profiting when their users break the law.
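To make the mechanics concrete, consider a hypothetical sketch, in Python, of what such a targeting specification boils down to. The field names here are illustrative inventions, not Facebook's actual Ads API:

    # Hypothetical targeting spec for a housing ad; the field names are
    # illustrative, not Facebook's actual Ads API.
    ad_targeting = {
        "ad_category": "housing",  # a legally protected ad category in the US
        "include": {
            "location": "Seattle, WA",
            "age_range": (25, 45),  # shuts out older renters by omission
        },
        "exclude": {
            # interest segments that can act as proxies for protected classes
            "interests": ["wheelchair accessibility", "parenting"],
        },
    }

A single structured filter like this, assembled in a few clicks, is all it takes to hide a housing ad from entire protected classes.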

From the standpoint of advertisers, many argue they are merely maximizing the return on their advertising investment by targeting the demographics they believe will be most responsive. Much as product manufacturers market their wares to the demographics that buy them most, while avoiding heavy ad spending on those who almost never buy their products, so too, many advertisers argue, should they be allowed to target their housing, employment, credit and other offerings to those they believe will respond. Why should they be forced to waste large sums of money showing ads to people who historically have had no interest in the products or services being offered?

The problem with this mindset is that it reinforces stereotypes and encourages discrimination, and for that reason it is in fact illegal under US law for categories like housing, employment and credit.

Companies counter that in the pre-digital era it was common practice to run ads in particular print and broadcast outlets, or during specific shows in a given city, to reach differing demographics, and that this practice is completely legal. Facebook itself defends its ad targeting by arguing that “simply showing certain job ads to different age groups … may not in itself be discriminatory - just as it can be OK to run employment ads in magazines and on TV shows targeted at younger or older people.”

What this argument misses is that while such demographic targeting ensures an ad is mostly seen by the target demographic, anyone watching that show or reading that newspaper or magazine will also see the ad. In short, a job ad for carpenters placed in a woodworking magazine might be presumed to reach mostly men, but women woodworkers will see it too; while the advertiser may intend to target men, in practice the ad reaches everyone interested in woodworking, regardless of gender.

In contrast, an employment ad run on Facebook that is filtered to men only will never be seen by women, regardless of whether they have exactly the qualifications the job requires. A Facebook spokesperson emphasized that in such a case, a woman who knows about a specific brand's Page can use Facebook’s “View Ads” feature to see all of its ads, regardless of what demographics those ads target. Thus, a woman interested in a specific company can manually wade through its entire list of ads to find the employment-related ones she would otherwise never have seen. But this places an inordinate burden on her and puts her at a considerable disadvantage relative to her male peers, who receive men-only targeted ads from a range of brands they may never have heard of or known were hiring.

The platforms argue that discriminatory ads are not their responsibility: despite making it as easy as possible to create them, including one-click options for the illegal targeting categories, they should be immune because they merely run the ads rather than create them. Further, they point out that they remind ad creators that the law prohibits certain kinds of targeting and ask them to click a button confirming that their ad does not discriminate. In short, even though they have customized their platforms to explicitly make illegal and discriminatory ads possible, it is not their responsibility when those ads run, because the advertiser should have known better and, besides, clicked a button promising the ad was OK.

The US Government seems to disagree. In August 2018, the Department of Housing and Urban Development filed a formal complaint against Facebook for its role in facilitating and publishing discriminatory housing ads.

Shortly after the Government’s complaint, Facebook announced that it was in the process of removing more than 5,000 targeting options to “help prevent misuse,” among them options for targeting by ethnicity and religion.

Yet this raises the question of why Facebook is only now, in late 2018, taking action to restrict some of these targeting options, when discriminatory ads on its platform have been under discussion for years.

Most importantly, it raises the question of why Facebook does not simply modify its advertising tools to prevent ad targeting that would violate the laws of the countries in which an ad runs. After all, what legitimate purpose could an advertiser have for running a housing ad that explicitly excludes minorities, or a job ad that is never shown to women? Limiting regulated ads to the same interest-based selectors of the print and broadcast era would prevent advertisers from breaking the law and would have transformative benefits for society, eliminating common pathways for reinforcing stereotypes and discrimination.
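What such a guardrail might look like in code: the following is a minimal sketch, assuming regulated ad categories and protected attributes can be enumerated per jurisdiction, and it is not a description of any platform's actual safeguards:

    # Minimal sketch of a pre-flight check that rejects protected-class
    # targeting for regulated ad categories. The category and attribute
    # lists are illustrative; real rules vary by jurisdiction.
    REGULATED_CATEGORIES = {"housing", "employment", "credit"}
    PROTECTED_ATTRIBUTES = {"gender", "age", "ethnicity", "religion",
                            "disability", "family_status"}

    def validate_targeting(ad_category, targeting):
        """Return a list of violations; an empty list means the ad may run."""
        violations = []
        if ad_category in REGULATED_CATEGORIES:
            used = set(targeting.get("include", {})) | set(targeting.get("exclude", {}))
            for attribute in sorted(used & PROTECTED_ATTRIBUTES):
                violations.append(
                    f"{ad_category} ads may not target or exclude by {attribute}")
        return violations

    print(validate_targeting("employment", {"exclude": {"gender": ["female"]}}))
    # ['employment ads may not target or exclude by gender']

A check of this kind would run at ad-creation time, before an ad is ever served, which suggests the barrier is one of policy rather than technology.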

Of course, bad actors will always find ways of gaming the system, which in turn raises the question of why Facebook doesn’t devote greater resources towards policing its advertisements. If it can employ a small army of content moderators to manually review the billions of posts published by its users each day, surely it can manually review a larger random sampling of all the ads run on its system each day. While the total number of brand new ads uploaded each day is likely beyond the scope of total human review, the company could certainly spot check a random sample each day, especially those with specific targeting options selected and those run by new advertisers who do not have a long advertising history on the platform. Perhaps Facebook could even leverage its work on building AI models that can recognize "fake news" to flag potentially illegal ads for human review.
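A spot-checking regime along those lines could be as simple as weighted random sampling that over-samples new advertisers and demographically targeted ads. The following is a hypothetical sketch, not a description of Facebook's actual review pipeline:

    import random

    # Hypothetical review sampler: weight each ad so that new advertisers
    # and ads using demographic targeting are far more likely to be
    # pulled for human review.
    def review_weight(ad):
        weight = 1.0
        if ad["advertiser_age_days"] < 30:    # little advertising history
            weight *= 5.0
        if ad["uses_demographic_targeting"]:  # gender, age, ethnicity, etc.
            weight *= 10.0
        return weight

    def sample_for_review(ads, budget):
        """Draw a weighted random sample (with replacement, for simplicity)."""
        weights = [review_weight(ad) for ad in ads]
        return random.choices(ads, weights=weights, k=min(budget, len(ads)))

Even a modest daily review budget, allocated this way, would concentrate human attention on exactly the ads most likely to be discriminatory.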

A company spokesperson did not respond to these questions.

Putting this all together, what responsibility do the major social platforms have for stopping illegal advertising? Given how easy they have made it to run discriminatory ads, reducing it to a one-click affair, should they modify their interfaces to make ads that break the law harder to run? Should the companies moderate their ad platforms as heavily as they do user posts? In the end, only time will tell whether governments rein the platforms in, or whether the platforms redefine our laws and protections for the digital era.