Election campaigns in India have seen increasing use of technology in recent times. While the 2014 Lok Sabha elections saw the novel use of 3D holograms by now-Prime Minister Narendra Modi, 2019 brought increased spending on digital media platforms and the use of tools such as WhatsApp to reach millions, in place of the traditional TV and print campaigns run before. In 2020, the Delhi assembly elections introduced an entirely new technology: artificial intelligence-driven deepfakes.
The video in question belonged to the Bharatiya Janata Party's Delhi president, Manoj Tiwari. One of Tiwari's older videos, in which he speaks in Haryanvi following the passage of the Citizenship (Amendment) Act, was morphed into him criticising Arvind Kejriwal in English and pushed on Twitter. The video was part of the BJP's campaign ahead of the Delhi assembly polls held on February 8.
While such videos are increasingly common, what is alarming is the use of deepfake technology to produce them. The incident underlines ongoing debates over how social media platforms should treat political content, what their policies against manipulated and propaganda content should be, and the many areas where technology needs serious regulation. The issue is even more serious for deepfakes simply because of their nature: AI deepfakes can be devilishly difficult to catch.
Understanding deepfake usage
It is important to understand how deepfakes work. Deepfakes, often referred to as synthetic media, use artificial intelligence algorithms to study specific face configurations, muscle movements and natural reactions, and then replicate any posture, face or activity on another human body. The technology can also be trained on specific parts of the body, which makes it instrumental in simulating details such as lip movements, eye twitches and even vocal cord movements.
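For a sense of how this works under the hood, consider the shared-encoder, twin-decoder autoencoder design that early face-swap tools popularised. The PyTorch sketch below is purely illustrative: the image size, layer sizes and the random tensors standing in for training data are all assumptions, and real tools add face detection, alignment, adversarial training and blending on top.

```python
# Minimal sketch of the shared-encoder / twin-decoder autoencoder behind
# many early face-swap deepfake tools. Shapes and data are hypothetical.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256),  # shared latent code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        x = self.fc(z).view(-1, 128, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # learns to reconstruct person A's face
decoder_b = Decoder()  # learns to reconstruct person B's face

# Training: each decoder reconstructs its own person's faces through the
# shared encoder, so the latent code captures pose and expression.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for aligned face crops of A
recon_a = decoder_a(encoder(faces_a))
loss = nn.functional.mse_loss(recon_a, faces_a)

# The swap: encode a face of A, decode with B's decoder. The output keeps
# A's pose and expression but renders them with B's identity.
fake_b = decoder_b(encoder(faces_a))
```

Because both decoders learn from the same latent code, that code ends up describing pose and expression rather than identity, which is exactly what makes the swap possible.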
While this somewhat simplifies what a deepfake is, it also aptly conveys the scope of the technology. In August 2018, researchers at the University of California, Berkeley published a paper called 'Everybody Dance Now', which demonstrated techniques for fabricating full-body movements. To show this, they transferred the dance moves of expert dancers onto the bodies of amateurs. The results, while discernible as fake on closer investigation, were convincing. This is a seemingly casual application that shows how deepfake technology may fetch commercial rewards in the long run, but its real-world implications can be far more serious.
The pornography threat
This threat is particularly evident in pornography and adult films, and exposes essentially any individual to the risk of being shown in an act they never performed, in a place they never visited, with people they never knew. One such example is that of Noelle Martin, who published her harrowing account in Elle magazine. After being sent a link by a peer, Martin discovered hundreds of photographs and videos of herself all over the internet, depicting her in various sexual acts. Only on far closer inspection did she realise that the photos and videos included just her face, while the bodies belonged to strangers.
Martin's case highlights how identity abuse may be the next major point of contention that deepfakes bring forth. With the ability to recreate human muscular movements, deep learning algorithms can be fed information about a particular person, even from photographs sourced from the internet. Thanks to social media platforms, the faces and identities of most individuals are available in the public domain. Once these are sourced, the algorithms can replicate how a particular person moves their jaw when speaking and morph it onto the body of a different individual.
Such use of deepfake technology is not a matter for the future; it is happening now. An Al Jazeera report on the matter states that nearly 96 percent of all deepfakes are pornographic, with the faces of celebrities, and female stars in particular, being the most common targets. In future, wider availability of smarter deep learning tools with graphical interfaces will put the technology in the hands of mainstream users as well, which creates a rather significant issue: one of identity theft and protection.
Politics and its implications
Political propaganda and the morphing of content to suit specific needs are old allies. Over time, plenty of clever photo editing has been used to help the case of specific political parties. While some edits have fanned flames over the alleged misconduct of individuals, others have raised communal tensions. Notable manipulated videos include the much-discussed clip of Nancy Pelosi appearing to slur drunkenly, a crudely slowed-down video rather than a true deepfake, which was debunked in the days after it surfaced in 2019. There was also a viral deepfake of Facebook CEO Mark Zuckerberg, created by noted British artist and deepfake researcher Bill Posters, showing the executive proclaiming that he controls the private data of billions of individuals.
In India, the spread of misinformation on WhatsApp has already been linked to mob lynchings. Video morphing through deepfake algorithms, however, made its official entry with Tiwari's previously mentioned video, which Vice reported as the first case of deepfake usage in an election campaign. Since the video was officially commissioned and released, it can be argued that any content owner is free to do what they wish with their own content. Using deepfake tools presumably saved the BJP the time, money and hassle of shooting the segment anew, and saved Tiwari from memorising a new script.
This, though, can (and most likely will) swing the other way: deepfakes used to defame politicians, and notable faces used to endorse political agendas they may not actually support. While we are yet to see mass deepfake morphing in a single video (i.e. using deepfakes to tweak the faces of an entire crowd), such techniques could play a debilitating role, allowing political discourse to be wrongly swayed very easily. The difficult bit is detecting the fakes, which is what the likes of Deeptrace and other startups are trying to do.
To detect deepfakes, humans are yet again relying on AI algorithms, the very same kind that helped create them. Reliable detection would help bodies such as the Election Commission, which has reportedly stated that it is not sure how to regulate the technology and has no comment to offer as yet, decide how to react. As the EC mulls over the specifics, the power that deepfakes bring can be harnessed in a million possible ways, and it is highly unlikely that their use in the 2020 Delhi elections was an isolated case.
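As a rough illustration of what such detection looks like in practice, the sketch below fine-tunes a pretrained image classifier to label face crops as real or fake, then averages scores across a clip. This is a generic frame-level recipe, not Deeptrace's actual method; the data, labels and threshold here are hypothetical, and production detectors also exploit temporal cues such as unnatural blinking and lip-sync drift.

```python
# Minimal sketch of frame-level deepfake detection: fine-tune a pretrained
# CNN to classify face crops as real or fake. Data and labels are stand-ins.
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet features and replace the head with a 2-way classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = real, 1 = fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One hypothetical training step: a batch of 224x224 face crops with labels.
face_crops = torch.rand(16, 3, 224, 224)  # stand-in for real training data
labels = torch.randint(0, 2, (16,))       # stand-in for annotations

model.train()
optimizer.zero_grad()
logits = model(face_crops)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

# At inference, average per-frame fake probabilities over a whole clip.
model.eval()
with torch.no_grad():
    probs = logits.softmax(dim=1)[:, 1]  # per-frame P(fake)
    clip_score = probs.mean().item()     # a high score suggests a deepfake
```

The catch, as researchers often note, is that detectors and generators are locked in an arms race: any published detection cue can be folded back into the training of the next generation of fakes.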
If anything, deepfakes are here to stay, and they warrant a far closer look at how they can be controlled and moderated.