Liar's Dividend
There is widespread consternation about the impact deepfakes will have on society this year. But most legislative counter-measures are oriented towards shooting the messenger. We need a different path. Thankfully, we have been here before.
This article was first published in The Mint. You can read the original at this link.
A couple of weeks ago, a video featuring cricket legend Sachin Tendulkar began circulating on social media. In it, he was talking up a mobile game called Skyward Aviator Quest, marvelling at how his daughter was able to make ₹180,000 on it every day, and at how all this was possible on an app that was essentially free.
While it soon becomes obvious that the video is fake—the words he says don’t always match the movement of his lips, and, given Sachin’s carefully curated persona, these are not the sort of things anyone would expect him to talk about—I can see how a casual viewer might get taken in.
But Tendulkar is only the latest celebrity to have unwittingly starred in a video he did not consent to make. A similar fate has befallen others — cricketer Virat Kohli, actor Shah Rukh Khan, journalist Ravish Kumar and Infosys founder N.R. Narayana Murthy. Last year, south Indian actor Rashmika Mandanna and British-Indian influencer Zara Patel had to suffer the ignominy of having their faces swapped in viral video clips that had clocked over 20 million views. Even Prime Minister Narendra Modi spoke of a video that, he said, featured what seemed like him dancing the Garba.
Easy to Create
Deepfakes are not new. But thanks to rapid advances in generative artificial intelligence (AI), they have become much easier to create over the past year or so. What was once a niche capability, available only to teams with massive training datasets and advanced programming skills, is now within the reach of anyone using one of a number of off-the-shelf AI services. What just a year ago was an expensive exercise requiring specialized hardware and considerable technical expertise can now be accomplished in an hour with the help of a few YouTube tutorials.
The real worry, of course, is the effect all this will have on society. Given how easy it is to generate videos that portray political candidates in an unflattering light, it seems inevitable that we will see them deployed at scale during elections — both by political opponents and by unfriendly countries that will have no problem fielding teams of hackers to destabilize their rivals. With over half the world voting in elections this year, there is serious concern about the effect deepfake proliferation could have on democracy.
Legislative Counter-Measures
In anticipation, countries around the world have already begun developing legislative counter-measures. In India, the Ministry of Electronics and Information Technology has said it will soon release new regulations aimed at ensuring that the social media platforms through which these videos are disseminated proactively identify and take them down before they spread. But simply getting platforms to combat the spread of fake videos more effectively amounts to shooting the messenger. If we want a truly effective solution, we have to get to the heart of the problem—we must find a way to strike at the source from which these videos are generated.
This is easier said than done. With every passing day, it is becoming easier to create believable videos using highly accessible technology. We have already reached a point where all that stops you from creating a deepfake indistinguishable from the real thing is your imagination. And perhaps your conscience.
So what is the solution?
We have been here before…
When photography was first invented, photographs were believed to be incontrovertible. They were mechanical representations of reality and, as such, were trusted as irrefutable evidence. But, in time, darkroom technicians realized that photographs could be manipulated so that the truth could be creatively distorted. Using processing techniques like dodging and burning, and elaborate workflows such as double exposures, they were able to create photographs that deviated from reality. And once image-manipulation software like Photoshop and GIMP became available, nothing was sacred any more.
Today, we no longer trust photographs the way we used to. We have learnt to identify tell-tale signs of manipulation, such as artefacts in the image and the barely perceptible ghosting around objects that have been cut and pasted into the frame. So we have something to go by when checking whether an image has been tampered with. As a result, when presented with an image that portrays someone in an unusual light, our instinct is to question its veracity, because we know how easy it is to manipulate.
The Sceptic’s Solution
I believe we will inevitably extend the same mistrust to the videos we are shown. When presented with a clip of someone saying or doing something out of character, rather than blindly believing the evidence of our eyes, we will wonder whether it is fake. This, to me, is the only way we can even hope to combat the avalanche of deepfakes coming our way. Our only inoculation against believable falsehood is healthy scepticism.
But my real worry is what happens after we reach this point. Once we doubt all the video evidence we are presented with, anyone caught on camera doing something wrong will be able to dismiss that evidence by claiming it is just another deepfake. This is what law professors Bobby Chesney and Danielle Citron call the Liar’s Dividend: the point at which evidence can be so easily falsified that nothing can be relied upon as legitimate proof of wrongdoing.
And this is when our real problems will start.