Is It Time To Regulate Biased And Harmful Stereotypes In Social Media Ads?

Society has become increasingly aware of the harmful stereotypes and biases perpetuated by the behaviorally targeted, interest-based advertisements that power the modern digital age. Built on narrow slices of our lives and on faulty data that can be upwards of three-quarters wrong, the ads that ceaselessly bombard us online tend to reinforce outdated stereotypes about the kinds of interests and careers different demographics should have. Is it time for government regulation?

The modern Web is powered by personalized advertisements precision-targeted to each user individually, based on rich dossiers compiled from their online and offline behavior. These exquisitely detailed catalogs of daily life supposedly allow precisely the right advertisement to be shown at precisely the moment when a user is most susceptible to manipulation.

Unfortunately for this popular myth, the truth is that these vaunted dossiers that supposedly know our every interest can be more than three-quarters wrong. With data this wildly inaccurate, it is clear that the ads we see are not the hyper-personalized magic bullets they are portrayed as.

Worse, since social media captures only a small slice of our lives and data brokers frequently fail to accurately capture the rest, the interest selectors used to show us ads reflect an incredibly small and massively biased view of our lives.

Making matters worse still, many ads rely on external lead lists, audience selection and coarse demographic selectors. Women in STEM fields are far less likely to see many kinds of ads related to their careers, given that such ads are often explicitly targeted only to men. Gender-specific advertising disadvantages the LGBTQ and gender-fluid communities. Workers over a particular age are unlikely to be notified of opportunities frequently aimed at their younger peers. Audience selectors and lead databases inadvertently built from homogeneous sample groups mean that minorities of all genders may be less likely to see such ads.

Despite strict laws governing bias in regulated areas like housing, ads that illegally discriminate proliferate freely.

Yet most biased ads are not illegal.

Showing ads about technology products, short courses, books and other STEM products only to men does not necessarily violate any laws, just as showing ads for makeup and fashion only to women is unlikely to cross any legal lines.

The problem is that, while such targeting may be entirely legal, aiming these topics directly or indirectly at one gender reinforces historical stereotypes that society has increasingly recognized as extremely harmful and toxic.

Advertisers might argue that the success rate of showing makeup ads to women is far higher than showing them to men and that they similarly receive a vastly higher rate of return showing ads related to science, technology, engineering and mathematics to men than to women. Hard data might show that pink glittery t-shirts that proclaim “math is hard” sell well to women while t-shirts with technology company logos sell well to men.

It may very well be that the numbers bear out these stereotypes. After all, targeted advertisements are designed to eliminate human intuition and societal values, using cold, calculating mathematics to show ads to those who click on and engage with them the most.

The problem is that such targeting becomes self-perpetuating. If women in STEM are never shown STEM ads, they can never engage with them to demonstrate their interest.

Similarly, an LGBTQ individual to whom social platforms have assigned “male” gender selectors, but who is avidly interested in makeup, lingerie and other products advertised on social platforms only to “females,” will not see the ads they are interested in, again perpetuating these stereotypes.

This raises the question: should the same kinds of regulations and protections that govern areas like employment and housing be extended to other fields?

What if social media companies simply eliminated demographic selectors entirely? Explicit selectors that allow targeting based on race and gender could be removed today by the platforms with a few mouse clicks.

Yet many other selectors are highly correlated with these, especially those that originate outside social platforms from data brokers that rely on opaque data sources and assignment formulas.

Imagine if social media companies measured the demographic breakdown of every advertising selector on their platforms and deleted every selector with a high imbalance along racial, gender or other demographic lines. For example, if a particular “interested in programming” selector matched 90% to users identified as “male” or as members of the racial majority, that selector could be deleted.
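
As a rough illustration of how such an audit might work, the sketch below assumes a platform can report the demographic breakdown of each selector's matched audience. Every name, number and threshold here (audit_selectors, audience_breakdown, the 80% cutoff) is a hypothetical stand-in rather than any platform's actual API:

```python
# Hypothetical audit: flag ad-targeting selectors whose matched audience
# skews heavily toward a single demographic group. All names, data and
# thresholds below are illustrative; no real platform API is assumed.

IMBALANCE_THRESHOLD = 0.80  # flag selectors where one group exceeds 80%

def audit_selectors(selectors, audience_breakdown):
    """Return (selector, group, share) for every dominated selector.

    audience_breakdown(selector) is assumed to return a dict mapping
    demographic group -> share of that selector's matched audience.
    """
    flagged = []
    for selector in selectors:
        shares = audience_breakdown(selector)
        group, share = max(shares.items(), key=lambda kv: kv[1])
        if share >= IMBALANCE_THRESHOLD:
            flagged.append((selector, group, share))
    return flagged

# Made-up numbers mirroring the 90% "interested in programming" scenario.
breakdowns = {
    "interested in programming": {"male": 0.90, "female": 0.10},
    "coffee drinker": {"male": 0.52, "female": 0.48},
}

for selector, group, share in audit_selectors(breakdowns, breakdowns.get):
    print(f"Delete {selector!r}: {share:.0%} of its audience is {group!r}")
```

The arithmetic is trivial; the genuinely hard choices are where to set the cutoff and which demographic dimensions to measure.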

Similarly, every new ad campaign could be checked to see if it would result in a significant demographic imbalance in those seeing the ad and automatically rejected.

Thus, an ad for coffee might target men and women roughly equally and be permitted. An ad for a high school math camp whose combination of selectors yields a 90% bias towards users tagged as “male” could be rejected on bias grounds. The ad system could even suggest which selectors to remove or add to make the ad more equitable.
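
In the same illustrative spirit, a pre-flight check on a new campaign might estimate the demographic split of its combined selectors and, when the skew is too large, point to the selector whose removal best rebalances the audience. The function names, the crude averaging of per-selector statistics and the 80% limit below are all assumptions made for the sake of the sketch:

```python
# Hypothetical pre-flight check for a new ad campaign: estimate the
# gender split of its combined selectors, reject it if the skew exceeds
# a limit, and suggest the selector whose removal best rebalances it.
# The averaging below is a deliberate simplification; a real system
# would estimate reach against the platform's actual user base.

BIAS_LIMIT = 0.80

def estimated_male_share(breakdowns, selectors):
    """Crude estimate: average the 'male' share across the selectors."""
    return sum(breakdowns[s]["male"] for s in selectors) / len(selectors)

def review_campaign(breakdowns, selectors):
    share = estimated_male_share(breakdowns, selectors)
    skew = max(share, 1.0 - share)
    if skew < BIAS_LIMIT:
        return "approved", None
    # Greedy suggestion: removing which selector leaves the most
    # balanced remainder? (Assumes the campaign uses 2+ selectors.)
    suggestion = min(
        selectors,
        key=lambda s: abs(0.5 - estimated_male_share(
            breakdowns, [t for t in selectors if t != s])),
    )
    return "rejected", suggestion

# Made-up selectors loosely modeled on the math camp example above.
breakdowns = {
    "parent of teenager": {"male": 0.55},
    "interested in mathematics": {"male": 0.88},
    "follows tech pages": {"male": 0.97},
}

status, suggestion = review_campaign(breakdowns, list(breakdowns))
print(status, "- consider removing:", suggestion)
```

A production system would estimate reach against the platform's actual user base rather than averaging selector statistics, but the basic loop of estimating the split, comparing it against a bias limit and suggesting a fix would look much the same.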

Of course, the unfortunate reality is that, absent a regulatory sea change, it is almost inconceivable that social media platforms would support changes that restrict in any way their advertisers' ability to precisely target ads. Targeted ads are simply too lucrative for platforms to risk any disruption to their economic engines, even to address issues as critical as bias, discrimination, and harmful and toxic stereotypes.

Asked whether it feels a responsibility to address discriminatory and biased advertising that reinforces harmful and toxic stereotypes, Facebook did not respond to a request for comment.

Putting this all together, our modern hyper-personalized ad system that has become the economic engine and lifeblood of the modern digital world is built upon a foundation of bias and enforced discrimination based on outdated and harmful stereotypes.

In the end, however, biased ads are simply too valuable to social platforms to risk fixing.