An increasingly large proportion of online video content will be synthetic in 2021 – wholly or partially generated by AI. Some of it will be benign, even hilarious, such as the work of YouTubers who use AI to deepfake Nicolas Cage into every Hollywood movie ever made – but plenty of it will be created with wholly malicious intent.
Indeed, the first widespread use of synthetic media – non-consensual deepfake pornography, which almost exclusively targets women – has proliferated wildly since it first emerged at the end of 2017. According to Amsterdam-based cybersecurity startup Sensity, founded in 2018 to combat malicious deepfakes in visual media, the number of deepfake pornography videos online is doubling every six months: by next summer there will be 180,000 available to view, and a year after that, 720,000.
In 2021, deepfakes will develop further as weapons of fraud and political propaganda, and we have already seen early examples. In early 2020, the Belgian branch of Extinction Rebellion used AI to generate a fictional speech by Belgian prime minister Sophie Wilmès. The group took an authentic video address by Wilmès and used machine learning to manipulate her words to its own ends. The result was a fabricated video in which Wilmès appears to claim that Covid-19 is directly linked to the “exploitation and destruction by humans of our natural environment”.
Anyone’s identity can be misappropriated in this way. All that’s needed is enough images, video or audio of the intended subject to “train” an AI to produce a convincing deepfake.
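To make that concrete: the original deepfake face-swap tools were built around an autoencoder with one shared encoder and a separate decoder per identity. Below is a minimal, illustrative sketch of that architecture in PyTorch; the network sizes, names and training loop are simplifying assumptions of ours, not any real tool’s code, and a convincing result would additionally need face alignment, adversarial losses and far larger models.

```python
# Minimal sketch of the classic face-swap autoencoder behind early deepfakes:
# a single shared encoder, plus one decoder per identity. All layer sizes and
# names are illustrative assumptions, not any specific tool's implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
optimiser = torch.optim.Adam(
    [*encoder.parameters(), *decoder_a.parameters(), *decoder_b.parameters()],
    lr=1e-4,
)

def train_step(faces_a, faces_b):
    # Each identity is reconstructed through the *shared* encoder, forcing
    # a common representation of pose and expression across both people.
    loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) + \
           nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

def swap(frame_of_a):
    # The swap itself: encode a frame of person A, decode with B's decoder,
    # yielding B's face wearing A's pose and expression.
    with torch.no_grad():
        return decoder_b(encoder(frame_of_a))
```

The essential trick is that the shared encoder must represent pose and expression in a form both decoders understand, so decoding one person’s latent code with the other’s decoder transfers the performance onto a different face.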
A world in which deepfakes flourish will also be one in which doubt can be cast on documented evidence of wrongdoing. If everything can be faked, anything can be denied too.
It is difficult to imagine a more serious challenge to the sense of an objective, shared reality needed to keep society cohesive. In 2021 we must begin to fight back against deepfakes by defining the problem for what it is: the corrosion of the entire information ecosystem. We will continue to use human fact-checkers to validate what is disseminated online, but we will also need technological solutions.
These will broadly fall into two categories: detection and provenance. The US Defense Advanced Research Projects Agency (DARPA) is already developing its Media Forensics (MediFor) programme, which uses AI to detect and expose manipulations. On the provenance side, Adobe’s Content Authenticity Initiative seeks to develop an industry standard for digital-content attribution.
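To illustrate the provenance idea, here is a minimal sketch of the underlying signing-and-verification pattern in Python, using the `cryptography` library. It is our simplified illustration only, not the Content Authenticity Initiative’s actual specification, which embeds richer, tamper-evident provenance metadata in the file itself: a creator signs a hash of the media at publication, and anyone holding the public key can later verify that the bytes are unchanged.

```python
# Minimal sketch of media provenance via digital signatures: sign the file's
# hash at publication, verify after distribution. Illustrates the general
# pattern only, not any real content-attribution standard.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def sign_media(private_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of the media file."""
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)

def verify_media(public_key, media_bytes: bytes, signature: bytes) -> bool:
    """Return True if the media still matches the creator's signature."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

# Usage: sign at publication, check after the file has circulated.
key = Ed25519PrivateKey.generate()
video = b"...raw video bytes..."  # stand-in for a real file
sig = sign_media(key, video)

print(verify_media(key.public_key(), video, sig))              # True
print(verify_media(key.public_key(), video + b"tamper", sig))  # False
```

A detached signature like this proves only that the bytes are unchanged since signing; real provenance schemes also bind in capture time, edit history and the signer’s identity, so that legitimate edits remain traceable rather than simply breaking verification.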
Ultimately, however, our crisis of information will not be for technology to solve alone. Technological solutions will be useless unless we humans can adapt to a new environment in which fake media is commonplace. That will require “inoculation” through digital-literacy and awareness training, but also proactive countermeasures, including cogent policy responses from government, the military and civil-society groups. Only with the full mobilisation of society will we be able to build the overarching resilience needed to withstand the risks of our compromised information ecosystem.