Is Internet Privacy Post-GDPR Merely A Theater?

Internet privacy in the European Union reached new levels with last year's adoption of the GDPR, or General Data Protection Regulation, which imposed new standards of consent for internet use: harmonizing data privacy laws across Europe, protecting and empowering all EU citizens' data privacy, and reshaping the way organizations across the region approach data privacy.

In an excellent piece by Stephanie Hare, she states, “While it was an opportunity for a digital spring clean, informing people that their data is being collected is not the same as preventing it from being collected in the first place. That continues and is even increasing. The only difference is that now we are forced to participate in our own privacy violation in a grotesque game of ‘consent’.” Hare’s piece is a damning condemnation of what amounts to a theater of privacy: one in which data is ostensibly protected at our say-so but is, in fact, bullied out of the user through a vast array of consent windows to click through, which most people have neither the time to read nor the ability to truly grasp.

Indeed, those in the EU treat blocks on their ability to access certain sites in the US and elsewhere as a mild inconvenience, when in fact they are being fed the false narrative that their data is actually being safeguarded. Hare maintains that the GDPR is only adding to the surveillance state, not limiting its reach or impact. And I, for one, tend to agree with Hare’s assertions.

Given the onslaught of legal cases around the world that treat facial and other biometric data as mere legal formalities, we must begin to ask ourselves whether the GDPR and similar laws in places like California and Vermont are setting users up for a false sense of security, in which data is presumed protected a priori rather than actually being protected in fact.

Recent discussions have also emerged about how the internet of things (IoT) has contributed to this framing of a choice between comfort and privacy, but we must wonder whether this is the only choice we are left with. Indeed, is it a false dichotomy that we can have convenience only at the expense of our privacy, or must accept a lack of services and information if we wish to protect it? Or might this be a case of media-led bias implying that we have only these two choices: privacy or information?

Yet when we examine what is happening in countries like India, with far less restrictive data privacy laws, the same problems mire users there as within the EU. The 4th Industrial Revolution is particularly pertinent to this discussion, as internet use is now characterized by new technologies that are “fusing the physical, digital and biological worlds, impacting all disciplines, economies and industries, and even challenging ideas about what it means to be human.” Where jobs are being taken over by machines and AI, houses are controlled by smart devices and apps, and our private information, from online credit card applications to banking details, sits behind rather easily breakable firewalls, the line between our private, somatic lives and the market interests that seek to cash in on our information is steadily weakening, as we allow laws like the GDPR to serve as placebos for actual, verifiable privacy.

Salesforce Chairman and Co-Chief Executive Officer Marc Benioff argues that what is needed to combat the distrust between internet users and business is to establish what he calls a “trust revolution.” Writing for the World Economic Forum on this subject in 2016, Benioff described the ways in which privacy is ostensibly accounted for by businesses:

Deploying AI will require a kind of reboot in the way companies think about privacy and security. AI is fueled by data. The more the machine learns about you, the better it can predict your needs and act on your behalf. But as data becomes the currency of our digital lives, companies must ensure the privacy and security of customer information. And, there is no trust without transparency — companies must give customers clarity on how their personal data is used.

To date, however, it is the private citizen who is asked to trust business and government, without the latter being asked to account in practical ways for how information is safeguarded and used. Consider the UK’s data protection authority, the Information Commissioner’s Office (ICO), and its statement last month on the recent judgment from the High Court in R (Bridges) v The Chief Constable of South Wales, in which a man’s victory over facial recognition data collection did not actually translate into an ultimate legal victory over LFR (live facial recognition). The ICO stated:

We found that the current combination of laws, codes and practices relating to LFR will not drive the ethical and legal approach that’s needed to truly manage the risk that this technology presents. The absence of a statutory code that speaks to the specific challenges posed by LFR will increase the likelihood of legal failures and undermine public confidence in its use.

If anything, the ICO confirms that citizens must fight tooth and nail for their privacy to be maintained, and that laws need to be put in place that affirm the rights of private citizens. Most people simply haven’t the time or the means to fight legal challenges of the sort that Edward Bridges fought against the police in Wales, and the GDPR places the onus of responsibility upon the citizen to contest overreaching abuses of surveillance and privacy breaches rather than, inversely, placing such actions within the mainstay of clear legal limitations.
