
Deep Fakes Are Merely Today's Photoshopped Scientific Images


As society increasingly wrestles with the impact of “deep fakes,” it is important to acknowledge that much of the current hype is overstated. While it is true that deep fakes constitute an important future issue, today’s tools are far less developed and user-friendly than the breathless media coverage and public fears might suggest. Most importantly, in many ways the rise of deep fakes mirrors the rise of Photoshop nearly 30 years ago, when a surge in falsified images was met with increasing scrutiny of image-based evidence. While deep learning approaches promise higher-quality falsifications with greater ease, if Photoshop did not lead us to abandon our trust in images, do deep fakes really represent the threat we believe they do?

For as long as there has been film, there have been efforts to manipulate it. Long before the digital revolution, governments “airbrushed” inconvenient figures out of their photographs and movies.

The introduction of Photoshop almost 30 years ago fundamentally transformed this landscape, democratizing image manipulation by placing easy-to-use high-quality editing tools in the hands of every computer user.

While there were digital manipulation tools prior to Photoshop, they lacked its point-and-click ease of use and its advanced editing features that made it relatively straightforward to alter or refine photographs.

Some scientists were quick to discover Photoshop’s potential for maliciously editing their scientific imagery, adjusting it to reflect the findings they wanted. In 1989-1990, just 5% of research misconduct cases opened by the US Department of Health and Human Services’ Office of Research Integrity involved manipulated images. By 2007-2008 that figure stood at 68%.

In fact, plotting the number of ORI cases involving questionable images by year, the rise of digitally manipulated images almost perfectly aligns with the widespread introduction of Photoshop.

Yet despite Photoshop making scientific image manipulation a point-and-click affair, the scientific world has not abandoned imagery. Instead, it has adopted more extensive and rigorous review of published imagery, requiring additional evidence and data to back submitted images. It has also invested heavily in automated filtering and encouraged the peer review process to continue after publication through scholars searching papers looking for suspect images. While such efforts cannot flag every manipulated image, they can over time ferret out many of the most obvious cases.
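One simple signal that automated prefiltering can use is flagging suspiciously similar images, since fabricated figures are sometimes reused or lightly altered copies of other images. The sketch below is a hypothetical, minimal illustration using a perceptual “average hash”; real screening tools used by journals are far more sophisticated, and the function names and threshold here are illustrative assumptions, not any specific tool’s API.

```python
# Toy sketch of one automated-prefiltering signal: flag near-duplicate
# images via a simple perceptual "average hash". Images are represented
# as 2D lists of grayscale pixel values (0-255). Hypothetical example,
# not a real journal screening tool.

def average_hash(pixels):
    """One bit per pixel: is the pixel brighter than the image's mean?"""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def likely_duplicate(img_a, img_b, threshold=3):
    """Flag two same-sized images whose hashes differ in at most
    `threshold` bits as suspiciously similar (threshold is arbitrary)."""
    return hamming(average_hash(img_a), average_hash(img_b)) <= threshold
```

A lightly re-contrasted copy of an image hashes almost identically and gets flagged, while an unrelated image does not; this kind of coarse filter can only surface candidates for the human review the article describes.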

What about deep fakes?

Contrary to the imaginations of the public and press, today’s deep fake creation software is far from Photoshop’s simplicity and accuracy. The technology to manipulate or entirely falsify imagery and video using deep learning techniques can yield uncannily accurate results, but it remains extremely limited and typically produces output that is readily distinguishable from genuine footage. However, as the technology improves and becomes more widespread, it is almost certain to reach a point where modifications are not readily visible to the human eye.

Yet this is little different from the Photoshop manipulation techniques used in scientific papers, in which subtle changes can be extremely difficult to spot. Such edits often surface only when compared against large numbers of images, or through sophisticated comparisons that check whether the results presented in one image are consistent with a related image in the same paper, if the two are indeed based on the same genuine sample.

Discovering forgeries in scientific papers is a laborious process and today still involves considerable human insight despite the use of automated prefiltering.

So too will be efforts to combat deep fakes as they evolve.

Much as there is no magical tool that can instantly identify all manipulated scientific imagery, despite decades of work, it is unlikely that any single tool will be able to magically verify videos as being free of deep fakery. Instead, just as Photoshopped images must be identified through a combination of filtering algorithms and human inspection and research, so too will deep fakes be identified through subtle algorithmic artifacts of specific implementations and non-digital research to confirm details such as subtle characteristics of the purported background and presentation.

The biggest difference between scientific imagery and deep fakes is that the scientific world operates on a relatively slow timescale, with peer review able to take the time it needs to identify falsified imagery. In contrast, deep fakes are likely to circulate on social media, where a single falsified video posted hours before an election could sway the entire outcome long before reviewers are even aware it exists. The rapid velocity with which such content may spread places special emphasis on approaches that can debunk video quickly.

In the end, far from a hopeless world of undetectable falsification, the rise of deep fakes is likely to mirror the rise of Photoshopped scientific imagery: troublesome, but with an ecosystem of automated and human approaches rising to counter it.