

Zuckerberg's Privacy Vision Is Nothing More Than Misdirection

This article is more than 5 years old.


Facebook’s founder released his latest vision statement for the social network yesterday, presenting a utopian ideal of a privacy-first global communications platform. While some commentators lauded Zuckerberg’s vision as offering a much-needed “pivot to privacy,” reading between the lines of his statement suggests there will be little change to the company’s privacy stance. Instead, his grand vision appears to be nothing more than a masterful work of marketing misdirection at a time when governments are taking an increasingly skeptical view of Facebook’s handling of user data.

At first glance, Zuckerberg’s privacy manifesto appears to lay out a number of consequential changes to the way Facebook handles user data, especially regarding access to and retention of sensitive communications and content.

Read his words more carefully and what is far more interesting is what is missing from all of his promises.

Zuckerberg’s privacy promise revolves nearly exclusively around protecting users from unauthorized or undesired external access to their communications. Whether “embarrassing” photos from “when they were younger” suddenly “resurfacing” later in life or repressive governments snooping on their communications, Zuckerberg focuses the majority of his commentary on the ways in which outside parties have or could misuse Facebook.

Largely missing, however, is any restriction on how Facebook itself will be permitted to use our data.

In building the rich behavioral profiles of its two billion users that have made Facebook so valuable, the company actually relies more on metadata than message content.

Zuckerberg makes no promise to restrict access to such metadata in yesterday’s privacy vision.

In fact, even as he promoted the idea of Snapchat-like ephemeral messaging, Zuckerberg noticeably did not promise that the metadata associated with that deleted content would be deleted as well. Remember that it was precisely such metadata that was at the heart of the NSA program unmasked by Edward Snowden.

If Facebook runs advertisements alongside ephemeral content, those advertisers will have a record that their ad was shown to a particular fine-grained demographic and Facebook will have a record of each person that saw those ads. Even after the post itself has self-destructed, Facebook’s metadata archives will contain a record that a particular person saw a given ad alongside a given piece of content and that that ad view would only have occurred if that content matched specific advertising selectors.

In other words, a user who produces and consumes only ephemeral content might at first glance appear to leave no digital trace of their interests. However, a rich profile of those interests can be instantly reconstructed, even in the absence of the original content, simply by examining the list of ads they saw over time and the interest selectors attached to each of those ads.
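The reconstruction described above can be illustrated with a toy sketch. All of the data, field names, and selector labels here are hypothetical, invented for illustration; they are not Facebook's actual schema. The point is only that an ad-view log which names the targeting selectors each ad matched is itself an interest profile, whether or not the underlying posts still exist:

```python
from collections import Counter

# Hypothetical surviving ad-view metadata: the ephemeral posts are gone,
# but each record still names the targeting selectors the ad matched on.
ad_view_log = [
    {"user": "alice", "ad_id": 101, "selectors": ["hiking", "camping"]},
    {"user": "alice", "ad_id": 102, "selectors": ["hiking", "travel"]},
    {"user": "alice", "ad_id": 103, "selectors": ["photography"]},
]

def reconstruct_interests(log, user):
    """Rebuild an interest profile purely from ad-view metadata."""
    counts = Counter()
    for view in log:
        if view["user"] == user:
            counts.update(view["selectors"])
    return counts

profile = reconstruct_interests(ad_view_log, "alice")
print(profile.most_common())  # "hiking" ranks first: it matched two ads
```

No message content appears anywhere in this log, yet the ranked selector counts it yields are exactly the kind of behavioral profile advertisers pay for.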

It is this advertising metadata that is noticeably absent from Zuckerberg’s promise of ephemerality.

While offering that Facebook might “limit” how long it retains “messaging metadata,” Zuckerberg makes no mention of curtailing its collection of advertising metadata.

In fact, given the company’s history of creatively defining terms, one could easily see Facebook deleting “messaging metadata” but retaining exactly the same information as “advertising metadata.”

When asked whether Facebook planned to run ads in encrypted conversations or beside ephemeral content, a company spokesperson said the company would not be commenting.

When asked whether the company would be deleting this kind of advertising metadata as it deletes “messaging metadata” or whether it would be collecting or retaining less behavioral metadata for advertising, the company similarly declined to comment.

Notably, while touting encryption as a way to protect communications in such a way that even the company itself cannot read them, Zuckerberg’s statement allows for “identif[ying] and stop[ping] bad actors across our apps by detecting patterns of activity or through other means.” When asked to clarify what he meant by this and especially what “through other means” might refer to, the company declined to comment.

Asked whether “through other means” might involve law enforcement providing the company a watchlist of individuals and Facebook using their message or advertising metadata to see who they have communicated with, how often and for how long, a spokesperson again declined to comment.
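The kind of watchlist query posed to the company can be sketched in a few lines. The records and names below are hypothetical, and nothing here reflects any confirmed Facebook system; the sketch only shows that end-to-end encryption of content does not prevent this analysis, because it runs entirely on who talked to whom and for how long:

```python
from collections import defaultdict

# Hypothetical message metadata: the content is end-to-end encrypted
# and unreadable, but sender, recipient, and duration survive.
metadata = [
    {"from": "target1", "to": "bob",     "seconds": 120},
    {"from": "carol",   "to": "target1", "seconds": 45},
    {"from": "bob",     "to": "dave",    "seconds": 300},
]

def contacts_of(records, watchlist):
    """For each watchlisted user, tally their contacts and total talk time."""
    summary = defaultdict(lambda: defaultdict(int))
    for r in records:
        # Count the conversation in both directions so that being
        # either the sender or the recipient links the two parties.
        for a, b in ((r["from"], r["to"]), (r["to"], r["from"])):
            if a in watchlist:
                summary[a][b] += r["seconds"]
    return summary

report = contacts_of(metadata, {"target1"})
# report shows target1 spoke with bob and carol, with durations,
# without a single message ever being decrypted.
```

Swapping the message log for the ad-view log from earlier would answer the same query in terms of shared interests rather than shared conversations.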

Facebook could also use advertising metadata to identify users who fall into categories deemed by law enforcement to be at-risk, creating the ultimate "precrime" system while relying only on metadata.

The company also declined to comment on how it planned to address the increasing number of countries around the world demanding encryption backdoors. In particular, how does Facebook plan to build a centralized communications platform that would permit countries across the world to decrypt the communications of their citizens while somehow protecting its users in other countries? Maintaining dozens or even hundreds of different encryption algorithms, each with a different backdoor keyed to a specific country’s intelligence service, would be unmanageable to say the least.

More likely, if enough countries demanded backdoors, Facebook would face technological and legal pressure to develop a single master backdoor that it would wield on behalf of each government demanding backdoor access.

Either way, by centralizing the web’s encrypted communications in the hands of a single international company that is subject to the laws of governments across the world, it is almost a foregone conclusion that it is only a matter of time until Facebook is forced to silently build backdoors into that protection.

Again, the company declined to comment.

Given Zuckerberg’s emphasis on ephemeral messaging and the importance of being able to delete our content, one would expect the company to readily confirm that “deleted” messages will actually be deleted from Facebook’s servers.

However, when asked whether Facebook would truly delete self-deleting messages from its servers or whether it would merely remove them from public access but retain a copy to provide to its research staff, its collaborating researchers and external researchers through initiatives like Social Science One, the company again declined to comment. Asked whether it would permit external independent verification that it is deleting the private messages it claims to be deleting, the company again declined to comment.

This is especially noteworthy given the company’s contradictory statements regarding how it currently handles deletion of user data vis-à-vis internal and external research initiatives like Social Science One.

Putting this all together, read between the lines of Zuckerberg’s grand privacy vision and one sees nothing more than masterful misdirection, remarketing the company’s massive surveillance machine as a bulwark against government surveillance and the next Cambridge Analytica, while entrenching Facebook’s own rights to continue doing as it pleases with our data.

In the end, it seems last year’s never-ending string of privacy scandals taught Facebook absolutely nothing. Alternatively, perhaps the fact that the company's profitability and user numbers increased over that year of scandals actually did teach Facebook something existentially important: that we just don’t care about our privacy anymore.