Facebook recently announced that it has banned deepfakes from its social media platforms ahead of the 2020 US presidential election.
In a blog post, Monika Bickert, Facebook’s Vice President of Global Policy Management, explained that the ban will concern all content that “has been edited or synthesised – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say,” as well as content that is “the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
The move came days before a hearing of the US House Energy and Commerce Committee on manipulated media, titled “Americans at Risk: Manipulation and Deception in the Digital Age.”
Twitter has also been working on its own deepfake policies, asking its community for help in drafting them, although it has yet to publish anything.
But what are deepfakes? And why are social media platforms and governments so concerned about them?
Artificial intelligence was the hot topic of 2019 – this vast, game-changing technology has opened new doors for what organisations can achieve. However, with all the good, such as facial recognition and automation, also came some bad.
In the decade of fake news and misinformation, the general understanding was that although social media posts, clickbait websites, and text content in general were not to be fully trusted, video and audio remained safe from deception – that is, until deepfakes entered the scene.
According to Merriam-Webster, the term deepfake is “typically used to refer to a video that has been edited using an algorithm to replace the person in the original video with someone else (especially a public figure) in a way that makes the video look authentic.”
The fake in the word is pretty self-explanatory – these videos are not real. The deep comes from deep learning, a subset of artificial intelligence that utilises multiple layers of artificial neural networks. Specifically, many deepfakes employ two competing algorithms in a setup known as a generative adversarial network (GAN): the first creates the video, while the second tries to determine whether it is fake. The first learns from the second’s feedback until the fake video becomes almost impossible to identify.
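To make that adversarial loop concrete, here is a minimal, illustrative sketch in PyTorch. Everything in it – the network sizes, names, and toy data – is an assumption made for demonstration, not code from any real deepfake tool:

```python
# Minimal sketch of the adversarial (GAN) training loop described above.
# All names and sizes are illustrative, not from any real deepfake system.
import torch
import torch.nn as nn

IMG_DIM = 64 * 64 * 3   # a flattened 64x64 RGB frame (toy size)
NOISE_DIM = 100

# Generator: learns to produce fake frames from random noise.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_DIM), nn.Tanh(),
)

# Discriminator: learns to tell real frames from generated ones.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_frames):
    batch = real_frames.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator to separate real frames from fakes.
    noise = torch.randn(batch, NOISE_DIM)
    fakes = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_frames), real_labels) + \
             loss_fn(discriminator(fakes), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator:
    #    the "first algorithm" learning from the "second".
    noise = torch.randn(batch, NOISE_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# Toy usage: random tensors stand in for real video frames, scaled to [-1, 1].
dummy_frames = torch.rand(16, IMG_DIM) * 2 - 1
print(train_step(dummy_frames))
```

Real deepfake systems use far larger convolutional networks trained on thousands of frames of a target’s face, but the core idea is the same: the generator keeps improving until the detector can no longer tell its output from the real thing.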
Although the technology behind these videos is fascinating, the improper use of deepfakes has raised serious questions and concerns, and their newfound mainstream status is not to be underestimated.
The beginning of the new decade saw TikTok’s parent company ByteDance accused of developing a feature, referred to as “Face Swap”, using deepfake technology. ByteDance has denied the accusations, but the possibility of such a feature becoming available to everyone raises concerns about how the general public would use it.
The most famous example is Chinese deepfake app Zao, which superimposes a photo of the user’s face onto a person in a video or GIF. While Zao mainly faced privacy issues – the first version of its user agreement stated that people who uploaded their photos surrendered the intellectual property rights to their face – the real concern stems from how people would actually use such a controversial technology if it were to become available to a wider audience. At the time, Chinese online payment system Alipay responded to fears over fraudulent use of Zao by saying that current face-swapping technology “cannot deceive [their] payment apps” – but this doesn’t mean the technology is not evolving and couldn’t pose a threat in the future.
Another social network to make headlines in the first week of 2020 in relation to deepfakes is Snapchat – the company has also decided to invest in its own deepfake technology. It bought deepfake maker AI Factory for US$166 million, and the acquisition resulted in a new Snapchat feature called “Cameos” that works much like deepfake videos do – users can insert their selfies into a selection of videos, essentially creating content that looks real but never happened.
Deepfakes have been around for a while now – the most prevalent use of the technology is in pornography, with a growing number of women, especially celebrities, becoming the protagonists of pornographic content without their consent. The trend started on Reddit, where pornographic deepfakes featuring the faces of actress Gal Gadot and singers Taylor Swift and Ariana Grande, amongst others, grew in popularity. Last year, deepfake pornography accounted for 96 percent of the 14,678 deepfake videos online, according to a report by Amsterdam-based company Deeptrace.
The remaining four percent, although small, could be just as dangerous, and even change the global political and social landscape.
In response to Facebook’s decision not to take down the “shallowfake” (a video manipulated with basic editing tools or intentionally placed out of context) of US House Speaker Nancy Pelosi appearing to slur her words, a team including UK artist Bill Posters posted a deepfake video of Mark Zuckerberg giving an unsettling speech boasting of his “total control of billions of people’s stolen data, all their secrets, their lives, their futures.” The artists’ stated aim was to interrogate the power of new forms of computational propaganda.
Other examples of very credible deepfake videos include Barack Obama delivering a speech on the dangers of false information (the irony!) and, in a far more worrying use of the technology, cybercriminals mimicking a CEO’s voice to demand a cash transfer.
Research by WhoIsHostingThis also found that 88.8 percent of respondents thought deepfakes would cause more harm than good, with many stating the technology should be illegal without any exceptions (60 percent of women and 38 percent of men) – this shows a clear need to address deepfakes on a number of fronts before they become a powerful tool of misinformation.
For starters, although the commodification of this technology can be frightening, it also raises people’s awareness, putting them in a position to question the credibility of the videos and audio they watch or listen to. It is up to the viewer to check whether a video is real, just as it is with fake news.
Moreover, the same technology that created the problem could be the answer to solving it. Last month, Facebook, in cooperation with Amazon, Microsoft and the Partnership on AI, launched the “Deepfake Detection Challenge” to create automated, AI-powered tools that can spot deepfakes. At the same time, the AI Foundation announced that it is building a deepfake detection tool for the general public.
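At their simplest, such detectors are binary classifiers over video frames. The sketch below, again in PyTorch, shows the general shape of that approach; the model choice, labels, and data are illustrative assumptions, not the architecture of any actual challenge entry:

```python
# Minimal sketch of a frame-level deepfake detector: a binary classifier
# over video frames. Purely illustrative; real entries are far more complex.
import torch
import torch.nn as nn
from torchvision import models

# Image backbone; weights=None keeps the sketch self-contained,
# but a real detector would start from pretrained weights.
detector = models.resnet18(weights=None)
# Replace the classification head with a single real-vs-fake output.
detector.fc = nn.Linear(detector.fc.in_features, 1)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_batch(frames, labels):
    """frames: (N, 3, 224, 224) video frames; labels: (N, 1), 1.0 = fake."""
    logits = detector(frames)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: random tensors stand in for preprocessed, labelled frames.
frames = torch.rand(4, 3, 224, 224)
labels = torch.tensor([[1.0], [0.0], [1.0], [0.0]])
print(train_batch(frames, labels))
```

Competitive detectors layer much more on top – face cropping, checks for temporal consistency across frames, ensembles of models – but frame-level classification is the common starting point.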
Regulators have also started moving in the right direction to prevent misuse of this technology. US Congress held its first hearing on deepfakes in June 2019, owing to growing concerns over the impact deepfakes could have on the 2020 US presidential election; meanwhile, as the cases of Facebook and Twitter show, social media platforms are under increasing pressure to take action against misinformation, which now includes deepfake video and audio.