The value of scepticism in the age of deep-fake videos
With the rise of hyper-realistic deepfakes, discerning truth becomes harder. We need to learn to be more skeptical of the content we receive and constantly question its authenticity. It's not hard to do; we have done this before.
This article was first published in The Mint. You can read the original at this link.
Last week, when trouble erupted in the North-East, someone in my friends’ circle sent our group a video that showed the police opening fire on a crowd of peaceful protesters—instantly felling two of them in cold blood. As the video played on, the policemen casually marched towards the dispersing protesters, talking loudly with callous indifference about what they had done, while the fallen bodies were picked up on stretchers and moved out of the way as if they were extras in a movie scene.
The reactions on the group were swift and extreme; the condemnation, near unanimous. Before long, parallels were being drawn between the mindlessness of the police response in the video and the relative restraint demonstrated during the Hong Kong protests. It was only when one of us remarked that they didn’t seem to be speaking Assamese that we thought to check if this video was even real. All it took was a quick check on Alt News to figure out that what we were agonizing over was an old mock drill carried out in Jharkhand that was being passed off as police firing in Assam. But by the time it reached us, the video had gone viral and our little island of disbelief was hardly going to curb its momentum.
Fake news is now an inseparable part of our uber-connected world. But as dangerous as re-purposed videos like this might be, it is still technically possible to discern the real from the fake. Fake videos are usually created by finding footage and editing it creatively, splicing content together so that its context is altered to convey something entirely different from what actually happened. Content generated this way can be fact-checked by comparing the fake videos with original footage and parsing the edits to demonstrate how the truth was manipulated. This is the job that Alt News and other fact checkers perform so admirably.
But all that is about to change.
With the increasingly widespread use of neural networks in video creation, the realism of computer-generated footage has improved to the point where it is impossible to tell what’s true and what’s not. Over the last couple of years, the use of generative adversarial networks (GANs) has brought these techniques into the mainstream. There are now apps that allow users with even passable technical skills to insert faces and voices into videos of actual events in a hyper-realistic manner. Researchers at the University of Washington recently demonstrated a tool that generated a completely fake video of former US president Barack Obama, showing him saying things he never said, his mouth moving in sync with the words of an actor behind the scenes, delivered in a distinct Barack Obama voice.
This ability to create hyper-real but utterly fake videos that are impossible to distinguish from the truth has come at a time when we no longer rely on trusted media companies for the news, depending instead on online social networks that allow content to reach an exponentially larger audience than that of traditional media. As a result, the information we consume today no longer passes the scrutiny of gatekeepers bound by journalistic ethics to ensure that falsehoods do not go out. Instead, we are served up content by algorithms primed to favour the content that keeps viewers engaged, resulting, more often than not, in a selection for outrage regardless of whether or not the underlying content is true. When we can no longer trust what we see and hear, truth belongs to those who can express it most outrageously.
So what do we do?
Our government seems to believe that the answer lies in identifying who created these videos in the first place and bringing them to book. With that in mind, the government is working on improving the traceability of communications across networks to identify those who originate such content. However, this is easier said than done. Our communication networks are so interconnected and diverse that tracking down perpetrators across the many platforms on which they operate is an exercise in futility.
If you cast your mind back, there was a time when a photograph was taken to be the gospel truth. It was a technology whose verisimilitude was never questioned because, by design, it captured a moment in time, preserving the present in a manner that could not be changed. We believed in the truth of photographs because we knew they could not portray anything else.
With the advent of photo manipulation software, we were exposed to technologies that, in the hands of skilled professionals, were capable of manipulating photographs to generate fabricated scenes that were impossible to distinguish from the real thing. In the early days of photo manipulation, sensationalist publications used to carry doctored images to titillate their audiences. Today, after over a decade of being exposed to manipulated images, we no longer bestow on photographs the sanctity they used to possess.
I believe that, in time, videos will be viewed with the same sense of scepticism we currently bestow on the images we are shown to convince us of a point of view. Eventually, our first reaction to an outrageous video will be to question whether or not it is a deepfake.
It will take time for our societal consciousness to develop this instinct for scepticism. But until then, we will be buffeted by content that seems unreal but which we simply cannot tell from the truth.