Facebook's Disaster Maps And The Limits Of Privacy Preservation

Last week Facebook took some time out of its F8 conference to tout its Data for Good efforts, especially its Facebook Disaster Maps. The maps themselves are designed to surface only aggregated, societal-scale patterns and rely on partnerships with NGOs to insulate the mapping platform from direct government access. However, one of the great concerns is that as governments see Facebook's vast data archives in this new light, however positive that light may be, they will inevitably turn to the courts to repurpose this vision of mapping Facebook users for good into the ultimate Orwellian surveillance machine. As the company provides additional details about the program's relationship with governments, it reminds us how Facebook sees our personal data and the limitations of even the most advanced privacy techniques.

While Facebook did not respond to my original request for comment on its Disaster Maps program, the company subsequently agreed to provide additional detail on the program’s relationship with governments in the complex and chaotic environment of a major disaster’s aftermath and how it attempts to protect the privacy of its users’ data.

No technology can be designed such that it can be used exclusively for good. No matter how positive and society-improving a technology is, there will always be bad actors that seek to repurpose it for evil.

Almost from its inception, Facebook's Data for Good efforts have raised concerns in the data ethics and privacy communities, as well as among the very aid organizations they are designed to support. Those concerns have been amplified over the past year by the steady drumbeat of privacy revelations pouring out of the company.

One of the concerns raised by Facebook’s release of the Disaster Maps is that it has given governments across the world a blueprint for how to transform its global data harvesting platform into a realtime surveillance map.

Once governments see how their partners in the humanitarian sector are using Facebook data to create privacy-protected aggregate maps for disaster response, it is simply inevitable that those same governments will themselves approach Facebook using lawful court orders to force it to create similar mapping platforms, but ones that can map the precise locations of individuals and vulnerable communities in realtime, rather than the Disaster Maps' aggregate society-level views.

While Disaster Maps are designed with a number of privacy protections in place to ensure they cannot be used to observe the movements of individuals, once governments become accustomed to their partners mapping civil society in motion, it will only be a matter of time until Facebook finds itself under court order to provide realtime maps for far more nefarious purposes.

This raises the question of how Facebook originally evaluated the idea of creating Disaster Maps. In particular, how did it weigh the benefits of additional insights for aid organizations against the risks that such maps would prompt governments to place greater legal pressure on Facebook to create similar tools for surveillance purposes?

While no system is entirely safe, often there are design considerations and policy and legal safeguards that can help mitigate certain outcomes, if they are baked into the process from the very beginning.

When asked about this, the company emphasized that groups from across Facebook were involved in the original planning and throughout the design and implementation process, including its legal, privacy and data science teams.

When asked which outside data ethics and privacy experts and organizations were involved to play devil's advocate, probing how the system might be misused and what design considerations might mitigate those concerns, the company again pointed to its corporate legal and privacy teams as an integral part of the design process.

Yet, when asked again for a list of the experts and organizations outside of Facebook that were involved, and how they viewed the risk-reward tradeoff, the company said it would have to investigate that question in more detail; this article will be updated if the company eventually provides such a list of its external partners.

It will be interesting to see how many privacy and ethics organizations Facebook engaged in the design of its Data for Good offerings and what their perspectives were on the tradeoffs posed by Facebook's uniquely intimate data.

It is troubling, however, that Facebook could not immediately name a single outside expert or organization, nor even confirm that it had worked with any external data ethics and privacy organizations in the design of its system. It also notably does not prominently tout any privacy and ethics advisors in its public materials.

When speaking with privacy-forward companies, any discussion of privacy safeguards typically involves a rapid-fire list of all of the external experts and organizations that were consulted in the system's design and their arguments for why the final implementation prevents misuse. Such companies are typically overjoyed to connect inquiries with these external experts and list them prominently in their public materials in the form of a standing ethics and privacy advisory board.

In contrast, Facebook's response was in keeping with the company's response to all of its privacy stories over the past year: an emphasis on internal review processes that, as the past year has reminded us, have failed it again and again, rather than on external collaborations that could bring fresh perspectives to its use of its two billion users' most sensitive and intimate information.

More interesting, however, is the company's perspective on the ability of its privacy safeguards to truly protect its users against bad actors, a reminder that even the most advanced privacy safeguards can only do so much.

Facebook spends considerable effort emphasizing the privacy-protecting features of its Disaster Maps, including their use of spatial aggregation, random noise, minimum population count thresholds and other safeguards. It argues that the combination of these protections ensures the privacy and safety of its users and that no individual user's data can be surfaced through its system.
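Facebook has not published the exact mechanics of these safeguards, but a minimal sketch of how the three techniques typically compose, using entirely hypothetical grid sizes, noise scales and thresholds, might look something like this:

```python
import random
from collections import Counter

# All parameters here are hypothetical illustrations, not Facebook's values.
CELL_SIZE_DEG = 0.1       # spatial aggregation: snap users to a coarse grid
NOISE_SCALE = 2.0         # scale of the random noise added to each count
MIN_COUNT_THRESHOLD = 10  # cells with fewer (noisy) users are suppressed

def build_disaster_map(user_locations):
    """Aggregate (lat, lon) points into noisy, thresholded grid-cell counts."""
    # 1. Spatial aggregation: individual coordinates are never released,
    #    only how many users fall into each coarse grid cell.
    cells = Counter(
        (round(lat / CELL_SIZE_DEG), round(lon / CELL_SIZE_DEG))
        for lat, lon in user_locations
    )
    released = {}
    for cell, true_count in cells.items():
        # 2. Random noise: perturb each count so the exact value is hidden
        #    (Gaussian here for simplicity; Laplace noise is also common).
        noisy_count = true_count + random.gauss(0, NOISE_SCALE)
        # 3. Minimum count threshold: drop sparse cells outright, since a
        #    count of one or two is trivially linkable to individuals.
        if noisy_count >= MIN_COUNT_THRESHOLD:
            released[cell] = round(noisy_count)
    return released

# 40 users clustered in one city survive; a lone user elsewhere is suppressed.
points = [(37.77, -122.42)] * 40 + [(40.71, -74.01)]
print(build_disaster_map(points))
```

Each safeguard covers a different leak: aggregation hides individual coordinates and trajectories, noise hides exact counts, and thresholding suppresses the sparse cells where a tiny count would point directly at a person.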

At the same time, Facebook goes to great lengths to note that government officials are not granted direct access to its Disaster Maps portal and that it only works with “trusted organizations that … respect our privacy standards.”

In its original announcement of its Disaster Maps program in June 2017, Facebook said that “over time, we intend to make it possible for … governments to participate in this program.”

However, according to the company, government agencies across the world are not permitted to have direct logins to its mapping portal and instead receive maps from the portal through Facebook’s NGO partners.

In November 2017 the Indian Government’s National Disaster Management Authority (NDMA) co-organized a conference with Facebook at which it announced that “Facebook will now share its disaster maps, developed using aggregated, de-identified data, with NDMA” and repeated this announcement two months later. Media reports from the day of the conference summarized that “Facebook has partnered with the National Disaster Management Authority (NDMA) and a non-profit body, SEEDS in India, to offer tools to these agencies to help them respond more effectively to natural disasters. The United States-based firm will offer ‘disaster maps data’ that illustrate aspects like people’s movement and concentration of Facebook users in the given area before and after a calamity.” Reports noted that “As part of the effort, Facebook will make data from 'Disaster Maps' available to the National Disaster Management Authority (NDMA).” Facebook itself linked to the NDMA’s page the same day announcing that “working closely with the National Disaster Management Authority, India (NDMA), these initiatives include the roll out of Facebook Disaster Maps, the first Disaster Response Summit in India, and the Disaster Information Volunteers initiative implemented by SEEDS India.”

Neither NDMA nor SEEDS responded to requests for comment on how they use the Facebook maps and how the two organizations collaborate around Disaster Maps.

Asked to comment on this collaboration, Facebook took great pains to emphasize that the NDMA, as a government agency, does not have direct access to Facebook's mapping portal and instead receives maps and other insights from SEEDS, which does have a login. Asked to clarify this arrangement, the company likened it to a service-provider collaboration: NDMA can request maps from SEEDS, which logs in to Facebook's portal, generates the map and provides it back to NDMA, ensuring that no government employee has a login to Facebook's mapping portal or the ability to create their own maps unsupervised.

The company confirmed that it uses a similar arrangement in the United States: during Hurricanes Florence and Michael, Humanity Road shared the maps and other insights it created from Facebook's portal with “FEMA, the US Coast Guard and state-response agencies.” It emphasized that many maps are shared with the general public as well.

Once again, Facebook emphasized that no FEMA, Coast Guard, state response or other government employee has a login to its Disaster Maps site and all of them must consume content from the site via an NGO partner.

Asked whether the information provided to government first responders was in any way downgraded, degraded, redacted, limited or otherwise modified to remove sensitive details, Facebook noted that it was not.

Asked whether there were certain kinds of maps that NGOs were prohibited from sharing with government collaborators, once again Facebook noted that there were not.

Asked what the difference was between having an NGO log into Facebook’s portal, download a map and email it to FEMA versus just having FEMA log in and create the map directly, the company noted that there was no difference in the actual map itself, but that the critical distinction was that a trusted NGO was acting as a gatekeeper between the government and Facebook’s mapping platform.

This raises the question of why Facebook believes this distinction is so important.

Given that the company goes to great lengths to tout how extensive its privacy safeguards are, how it leverages every major data privacy technique and how the resulting Disaster Maps platform is fully privacy-protecting, why does it believe it is absolutely imperative to have an NGO acting as gatekeeper for governments?

What is the difference between an NGO creating a map for FEMA and FEMA creating a map itself if those vaunted privacy protections are so absolute?

After all, Facebook itself proudly touted at the program's launch that it intended to extend Disaster Maps access to governments.

Asked why governments have not been granted direct access in the nearly two years since the initiative's launch, the company offered that it viewed it as imperative to have a trusted partner acting as a gatekeeper to ensure that governments could not create maps on their own.

Asked what harm it could foresee a government doing given all of the privacy safeguards Facebook had in place, the company acknowledged that, despite all of its protections, there was still a non-negligible potential for bad actors to surface sensitive trends no matter how many safeguards it built, and that having trusted NGOs as intermediaries protects against this.

Putting this all together, Facebook's insular privacy review process, which has been at the root of so many of its privacy stories over the past year, appears to extend even to its public-good efforts, with the company unable to immediately name a single outside data ethics or privacy expert or organization integrally involved in the creation of its Disaster Maps platform.

More troubling, however, is the company's admission that, despite all of its vaunted safeguards, granting governments direct unsupervised access to its Disaster Maps platform could still potentially harm vulnerable populations, a reminder that even the most extensive privacy protections cannot guarantee absolute safety against bad actors.

This is especially concerning given the company's aggressive efforts to make similarly privacy-protected intimate user data available for academics around the world to mine. Even differential privacy has very real limitations, and Facebook's privacy concerns over its Disaster Maps platform remind us that even aggregation, noise introduction and minimum thresholding are not absolute guarantees of perfect privacy.
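One well-known failure mode is the averaging attack. If the same aggregate statistic is re-released repeatedly with fresh random noise each time, and no cumulative privacy budget is tracked across releases, an observer who collects enough releases can simply average them until the noise cancels out. The toy sketch below, with hypothetical numbers and no claim to describe Facebook's actual release schedule or noise mechanism, illustrates the effect:

```python
import random

TRUE_COUNT = 12    # a hypothetical sensitive cell count
NOISE_SCALE = 5.0  # a hypothetical noise scale

def noisy_release():
    # Each map refresh draws fresh random noise around the same true count.
    return TRUE_COUNT + random.gauss(0, NOISE_SCALE)

# Averaging many releases of a static statistic cancels the independent
# noise, and the protected value re-emerges.
for n in (1, 10, 100, 10_000):
    estimate = sum(noisy_release() for _ in range(n)) / n
    print(f"{n:>6} releases -> estimate {estimate:.2f} (true value {TRUE_COUNT})")
```

Formal differential privacy counters exactly this by accounting for a cumulative privacy budget across all releases, but that budget is finite, which is why no amount of noise and thresholding adds up to an absolute guarantee.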

Given that Facebook is rushing ahead to grant academics across the world the right to mine our personal data, even with privacy safeguards, without any right for us to opt out or to request that our most intimate and personal information not be exploited by yet another researcher, our privacy has already been swept away.

At least Disaster Maps, despite giving governments a blueprint for ever more intrusive surveillance, are benefiting trusted NGOs with long histories of successfully managing extraordinarily sensitive information and engaging in life-saving first response.

In fact, by placing NGOs that deeply understand how to preserve the privacy and safety of vulnerable populations at their most sensitive between governments and user data, Facebook offers a powerful review model not found in its academic research initiative, which relies on university IRBs, Facebook itself and others drawn primarily from the scholarly community.

Perhaps in the end, Facebook should partner with those same NGOs to act as intermediaries and gatekeepers between its data archives and the academics rushing to exploit our most intimate and sensitive data for academic glory.

At least then we would have someone on our side whose goal is to protect us as human beings, not dehumanize and exploit us as mere "data."