One Simple Thing Tech Companies Could Do To Avoid The UK Needing An Internet Regulator


Yesterday, BuzzFeed reported that they had obtained details of plans being drawn up by ministers to regulate Internet companies.

It was reported that the Department for Digital, Culture, Media and Sport (DCMS) would announce later this year a regulatory framework for online “social harms”. This would create powers to force internet companies such as Facebook, Twitter, and Instagram to take down illegal hate speech within a set timeframe.

Many social media platforms such as Facebook already have their own codes of conduct as to what images they allow onto the platform.

As Simon Adler and Tracie Hunte recently reported on the podcast RadioLab, in their episode Post No Evil, Facebook has spent the last 10 years struggling to write a universal ‘constitution’ for what is and is not acceptable on its platform.

While certainly well-intentioned, the spectacle of a company (whether publicly traded or privately owned) essentially legislating its own rulebook on freedom of speech and artistic expression, outside of a transparent democratic process, is a troubling one.

While it is clear that child abuse imagery or terrorist content is unacceptable, these companies would have done better to invite public debate on the subject, or to seek to establish an industry code of conduct for others to subscribe to.

It seems that, with regard to the worst depravities of online content, time is up and internet companies will be forced to meet new regulatory standards; with regard to the pervasiveness of ‘fake news’, particularly doctored images and video, it is likely they will escape responsibility a little while longer.

I wrote about a simple potential solution to the problem of image and video doctoring in a recent article here on Forbes.

While questions about the decency of content are probably best left to regulation, and also need to factor in cultural sensibilities, the question of the authenticity of content is one that can readily be solved technically.

Websites like TinEye already offer ‘reverse image search’ services, which enable a user to visually compare an image against similar images found online to see whether features have been added or edited.

Other websites, such as www.imageedited.com analyse the ‘metadata’ and pixel composition of an image to risk-score its likelihood of having been doctored.
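How such a risk score might be computed is easy to sketch. The heuristics and tag names below are illustrative assumptions, not the actual algorithm used by www.imageedited.com; the function assumes the image’s metadata has already been extracted into a dictionary by an EXIF library.

```python
# Hypothetical illustration of metadata-based doctoring detection.
# Field names follow common EXIF tags; thresholds and weights are assumptions.

EDITING_SOFTWARE = ("photoshop", "gimp", "affinity", "lightroom")

def editing_risk(metadata: dict) -> int:
    """Return a 0-100 risk score from already-extracted EXIF metadata."""
    score = 0
    # The 'Software' tag often names the last program to save the file.
    software = metadata.get("Software", "").lower()
    if any(name in software for name in EDITING_SOFTWARE):
        score += 50  # saved by a known image editor
    # A modification date later than the capture date suggests re-saving.
    if metadata.get("ModifyDate", "") > metadata.get("DateTimeOriginal", ""):
        score += 30
    # Stripped camera fields can indicate the file was re-exported.
    if "Make" not in metadata or "Model" not in metadata:
        score += 20
    return min(score, 100)
```

An unedited photo straight from a camera would score 0 here, while one re-saved from an image editor would score highly; real services combine many more signals, including pixel-level analysis.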

A simple traffic light system would help users differentiate fake content without unduly burdening the platforms with the additional curation and editorial responsibilities they have so far been loath to accept: green might indicate an undoctored image whose provenance could be established, yellow one that was likely to have been edited, and red one that was mostly or fully computer-generated.
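The traffic light idea itself is trivially small to implement once a risk score exists. The thresholds below are illustrative assumptions, not a proposed standard:

```python
# Hypothetical sketch: map an authenticity risk score (0-100, from whatever
# analysis backend a platform uses) to a traffic-light label shown to users.

def traffic_light(risk_score: int) -> str:
    """Map a doctoring risk score to a display colour (thresholds assumed)."""
    if risk_score < 25:
        return "green"   # undoctored, provenance established
    if risk_score < 75:
        return "yellow"  # likely to have been edited
    return "red"         # mostly or fully computer-generated
```

The hard part is not this mapping but producing a trustworthy score behind it, and agreeing thresholds across the industry.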

If internet companies want to avoid regulation, which they do, then now is the time to be thinking creatively about what is within their power to control, and about how they can encourage others in the industry to sign up to best practice before it is imposed on them.
