Forging The Narrative

To withstand scrutiny, a forgery must extend beyond the artefact in question to the entire back-story within which it resides. While these narratives once had to be concocted painstakingly, one instance at a time, modern technology makes it possible to produce them on an industrial scale.

This is a link-enhanced version of an article that first appeared in the Mint. You can read the original here. If you would like to receive these articles in your inbox every week please consider subscribing by clicking on this link.


When Wolfgang Beltracchi was finally apprehended in 2010, he had been fooling the art world for nearly four decades. The secret of his success was not in creating perfect replicas of existing works of art, but in convincing buyers that what they were purchasing was real.

Beltracchi was, first and foremost, a storyteller. Even before he painted the first stroke, he concocted elaborate narratives about the work he was about to create. He focused on artists with gaps in their catalogues to make what he sold all the more plausible, and created artificially aged photographs to corroborate its provenance. As a result, he could create (and sell) works by famous artists that he convinced his buyers were real. La Forêt, the painting he most famously sold, wasn’t a replica of a Max Ernst; it was a Max Ernst that Max Ernst had never painted.

The Narrative

Forgery is most effective when not just the artefact but the entire backstory has been carefully constructed to support its authenticity. Beltracchi did this painstakingly, one artwork at a time. Today, criminals can automate the process using AI, spinning out thousands of plausible backstories in minutes and deploying networks of corroborative ‘evidence’ across multiple platforms simultaneously. This ability to mass-produce alternate realities has given birth to new forms of criminal enterprise that are proving extraordinarily hard to prevent.

In December 2023, the BBC reported that an online news page called DCWeekly.org was actually part of a coordinated influence operation that used AI to spin entirely fictional narratives about Ukrainian corruption. So successful was the deception that several of these stories were widely shared, eventually even by members of the US Congress.

Elsewhere, Global Village Space (allegedly a Pakistani news site) put out an article claiming that the psychiatrist of Israeli Prime Minister Benjamin Netanyahu had died by suicide, leaving behind a note that implicated the PM. This ‘news’ was picked up by official state media in Iran and then circulated virally on social media until it gained enough traction to feature high up in internet search results.

These AI-generated information assaults seek to insinuate propaganda into mainstream discourse until it becomes so firmly entrenched as ‘truth’ in the public consciousness that it is almost impossible to dispute. This, in turn, allows bad actors to shape political narratives to suit their ends.

Elsewhere, AI is being used to infiltrate corporate networks by constructing fake profiles of candidates who are just perfect for jobs that can be performed remotely. In these instances, AI is used to generate an ideal resume, one perfectly suited to the given role, along with AI-generated images of the candidate, purpose-built websites showcasing their achievements and fake LinkedIn profiles. Once the candidate is shortlisted for a virtual interview, face-filter technology makes the person playing the role match the images used to create the fake persona. Once recruited, these fake employees can penetrate the corporate network to conduct espionage, steal intellectual property or install malware.

Taking Advantage of Digital

These sorts of deception operations are just the tip of the iceberg. As criminals better understand how digital technologies actually work, they are able to uncover new ways in which to conduct increasingly sophisticated crimes. When augmented by AI, these criminal activities can be carried out on an industrial scale and can result in tremendous financial losses.

In September 2024, Michael Smith was arrested for orchestrating an elaborate scheme that earned him over $10 million in royalty payments from music streaming platforms for songs he had created using AI. Instead of simply generating fake songs, he invented fictional bands (with names like ‘Caliente Bloom’ and ‘Calvinistic Dust’) and built a streaming profile for each of them. He then set up thousands of accounts on various streaming platforms and deployed an army of bots to continuously stream the songs, pocketing the royalty income from each stream.

The scheme went undetected because Smith had taken care to deploy his bots in a way that did not arouse suspicion. Had he streamed a single song a billion times, it would immediately have raised red flags. Instead, he spread a billion fake streams over tens of thousands of different songs, making his scam much harder to detect. To do this, he used AI to create up to a thousand songs a week, distributing them across a range of streaming services to be fraudulently streamed. In the end, he managed to generate over 650,000 streams per day and collected annual royalties of over $1.2 million.

Beltracchi’s genius lay in creating forgeries that not only looked real but stood up to rigorous investigation of their antecedents. To do that, he had to craft a plausible history for each of his counterfeits. Today’s digital criminals can generate new realities on an industrial scale. Not only has this eroded our collective ability to distinguish fact from fiction, it has also spawned new genres of criminal enterprise that law enforcement agencies are struggling to come to terms with.

And this is only going to get worse.