

What Does It Mean To "Consent" To The Use Of Our Data In The Facebook Era?

This article is more than 5 years old.


As Facebook has struggled to contain the fallout from its latest privacy fiasco this week, the company has repeatedly emphasized that participants “consented” and granted their “permission” for what amounted to spyware being installed on their phones to convert them into highly intimate surveillance systems. Indeed, the company’s response to nearly every one of its privacy stories over the past year has been to argue that its users legally consented to its practices, so they have nothing to complain about. All of this raises the question: what does it mean to “consent” in the Facebook era?

Once upon a time, researchers wishing to collect intimate information from individuals were largely required to adhere to a standard known as “informed consent.” Informed consent reflects the idea that signing your name on the dotted line of a 50-page legal contract filled to the brim with arcane legal technicalities and unenforceable clauses may give the company the legal right to do as it pleases with you, but that such practices are neither ethical nor moral, because the individual likely never truly understood what would happen to them.

Instead, obtaining informed consent means carefully explaining to a research subject the complete inventory of every piece of information that will be obtained from them, the reason for its collection, how it will be collected, how long it will be stored, what it will be used for and who will have access to it. Most importantly, it entails explaining to the subject the pros and cons of the research, especially the unintended consequences and dangers they might face through their participation.

Informed consent is about the idea that research subjects aren’t merely rows in a spreadsheet or records in a database. They are living human beings whose lives may be irreversibly impacted by the research being conducted on them.

For many decades this was the standard that prevailed in most human behavioral research.

The rise of the digital era, with its massive repositories of datasets free for the taking, has upended all of the ethical practices that developed over the previous half century. Once researchers were no longer collecting data themselves, but rather making use of data that someone else had collected for an entirely different purpose, perhaps without the knowledge of the subject or even against the subject’s explicit demands, those researchers argued they should no longer be held responsible for ethical data use, since they were merely using someone else’s data.

Even medical records and data stolen through computer breaches were suddenly fair game.

In essence, researchers were able to “launder” issues of data ethics by using data that others had collected.

It was the rise of commercial social media platforms that truly accelerated this trend. As the companies began to publish in the literature using their exquisitely intimate and breathtakingly large datasets, the journals that in the past would have acted as ethical stewards and refused publication suddenly could not resist the temptation to be the first to publish these large studies. They set aside their decades of ethical practice and welcomed the massive studies with open arms.

Funders rushed to support this new wave of informed consent-free research and academics raced to get their hands on large social datasets through any means possible, terms of service be darned.

As journals and funders normalized the idea that informed consent was no longer required and that mere legal consent via website click-through agreements was sufficient, informed consent rapidly faded from behavioral research.

In today’s world no academic can afford to work only with data that has been obtained through informed consent. That would place them at such a disadvantage that they would lose all possibility of obtaining tenure or promotions. Or at least that seems to be the unfortunate view that has become so prevalent in academia of late.

In turn, the rise of the microtasking gig economy through services like Amazon Mechanical Turk brought this disdain for informed consent to academic data collection. An academic researcher who pays a group of community members to sit for a 15-minute interview in a university lab must undergo extensive ethical review. Recruit those same subjects on Amazon Mechanical Turk and, in practice, most institutions require no ethical review at all, considering the work exempt because the data is collected online.

In essence, as our interactions with other humans are increasingly mediated through digital channels, we have dehumanized them from living breathing people to mere datapoints devoid of any rights, protections or considerations.

Even children, once an extraordinarily tightly protected research class, can now be mass harvested and manipulated at will without so much as the most cursory ethical review, so long as they are accessed online, rather than in person.

Academia has wholeheartedly embraced this retrenchment from the traditional norms of data ethics, which in turn has produced a steady stream of researchers headed for the big technology companies who arrive viewing ethics as a quaint, outdated relic of history. As these researchers push ethical boundaries ever further and publish those studies in the literature, academia rushes to match the newly lowered ethical bar, producing a new cohort of researchers who head to the companies to push the bar lower still, in an endless downward cycle.

The journals that once firmly condemned or refused to publish even the most minimally ethically questionable research now warmly welcome studies that lack informed consent or the right of subjects to opt out.

Ironically, even the new data ethics initiatives refuse to actually talk about data ethics.

Legal standards continue to be lowered as companies find new ways to redefine outside entities as “partners” and “providers” so that they can freely share our data with them without violating their legal agreements.

In the case of Facebook’s latest fiasco, the company has repeatedly emphasized that it had “parental consent” for teenagers as young as 13, which granted it the right to harvest their data.

Yet, when I asked how precisely it ensured that teenagers didn’t simply sign the consent forms on behalf of their parents, and how it controlled who had access to the intimate details of children so young, the company never responded.

It turns out the reason for Facebook’s silence is that the company’s “parental consent” was merely a sadly comical click-through form that did not require even the most rudimentary verification to ensure that teens didn’t simply sign for themselves.

It is a truly extraordinary commentary that we’ve reached a point in data ethics where companies now see it as entirely acceptable to treat simple click-through forms that teens can fill in themselves as proof of “parental consent” for a child as young as 13.

Moreover, even for those teens whose parents actually did read through all of the forms and did actually approve of their child granting Facebook the right to harvest their most intimate data, how many of those children or their parents were thinking of what might happen if that data was exposed years later and what harm it might do to them?

Who had access to all of this data and what will happen to the data that has been collected? The company did not respond to a request for comment.

How long will it be before academic researchers gain access to the data of these children to use for their own ends? Given current academic IRB ethical trends, it is unlikely that any major US university ethical review board would raise even the slightest objection to treating this data as exempt “preexisting public data” and permitting academic researchers to do as they please with the digital lives of these children.

Putting this all together, the digital world has undone decades of progress in building ethical standards around how the public is treated in research. It took a series of atrocities for society to agree to the ethical standards that governed research up through the digital era. What will it take for us to reconsider the rapid rescinding of all those once-sacrosanct rules that has occurred over the last few years? It seems breach after breach, scandal after scandal has done nothing but accelerate this trend, with companies treating privacy scandals not as evidence that we need more ethical protections, but rather as evidence that ethical protections are outdated and irrelevant in the digital age.

In the end, perhaps we’ve been so programmed by the digital world that we don’t care about our privacy anymore and welcome the idea of being reduced from being human to being just a number. If so, perhaps the AI revolution is closer than we thought.