Is seeing still believing? Deepfakes say otherwise.

Oscar Okwero
12 min read · Nov 2, 2021
(Image courtesy of CBS.com)

Introduction

Novel digital technologies make it increasingly difficult to distinguish between real and fake media. One of the most recent developments contributing to the problem is the emergence of deepfakes: hyper-realistic videos that apply artificial intelligence (AI) to depict someone saying and doing things that never happened. Coupled with the reach and speed of social media, convincing deepfakes can quickly reach millions of people and harm our society. Given the ease of obtaining and spreading misinformation through social media platforms, it is increasingly hard to know what to trust, which undermines informed decision making, among other things. In the future, deepfakes will likely be used more and more for revenge porn, bullying, fake video evidence in courts, political sabotage, terrorist propaganda, blackmail, market manipulation, and fake news.

How they work

Advances in camera technology, the wide availability of cell phones, and the growing popularity of social networks (Facebook, Twitter, WhatsApp, Instagram and Snapchat) and video-sharing portals (YouTube and Vimeo) have made the creation, editing and propagation of digital videos more convenient than ever. This has also made digital tampering of videos an effective way to propagate falsified information. This ‘infopocalypse’ pushes people to think they cannot trust any information unless it comes from their social networks, including family members, close friends or relatives, and supports the opinions they already hold. In fact, many people are open to anything that confirms their existing views, even if they suspect it may be fake.

Deepfakes rely on generative adversarial networks (GANs). The process involves feeding footage of two people into a deep learning algorithm to train it to swap faces. One ML model trains on a data set and then creates video forgeries, while the other attempts to detect them. The forger keeps creating fakes until the other model can no longer detect the forgery. The larger the set of training data, the easier it is for the forger to create a believable deepfake. The result can mimic a person’s facial expressions, mannerisms, voice, and inflections. The first attempt at deepfake creation was FakeApp, developed by a Reddit user using an autoencoder-decoder pairing structure. In that method, the autoencoder extracts latent features of face images and the decoder reconstructs them. This strategy enables a common encoder to find and learn the similarity between two sets of face images, which is relatively unchallenging because faces normally share features such as the positions of the eyes, nose and mouth.
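
To make the idea concrete, here is a minimal sketch in PyTorch of that shared-encoder, dual-decoder structure. The image resolution, layer sizes and training details are illustrative assumptions, not FakeApp’s actual implementation:

```python
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    """Shared encoder, one decoder per identity (the FakeApp-style idea)."""

    def __init__(self, latent_dim=256, image_pixels=64 * 64 * 3):
        super().__init__()
        # Shared encoder: learns features common to both faces.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(image_pixels, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        # One decoder per identity: reconstructs that person's face.
        self.decoder_a = self._decoder(latent_dim, image_pixels)
        self.decoder_b = self._decoder(latent_dim, image_pixels)

    @staticmethod
    def _decoder(latent_dim, image_pixels):
        return nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, image_pixels), nn.Sigmoid(),
        )

    def forward(self, x, identity):
        z = self.encoder(x)  # latent features shared across identities
        return self.decoder_a(z) if identity == "a" else self.decoder_b(z)

model = FaceSwapAutoencoder()
faces_a = torch.rand(8, 3, 64, 64)  # dummy batch of person A's face crops

# Training: each decoder learns to reconstruct its own person's face.
recon_a = model(faces_a, identity="a")
loss = nn.functional.mse_loss(recon_a, faces_a.flatten(1))

# The swap at inference time: encode A's face, decode with B's decoder.
swapped = model(faces_a, identity="b")
```

The key design point is that the encoder is shared while the decoders are not: the latent code captures pose and expression common to both people, and whichever decoder renders it determines whose face appears.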

Threats posed by deepfakes

As public figures such as celebrities and politicians may have a large number of videos and images available online, they are the initial targets of deepfakes. Deepfakes can be abused to cause political or religious tensions between countries, to fool the public and affect the results of election campaigns, or to create chaos in financial markets through fake news. They can even be used to generate fake satellite images of the Earth containing objects that do not really exist in order to confuse military analysts, e.g., a fake bridge across a river where none exists. This could mislead troops who have been guided to cross that bridge in battle.

In the enterprise, deepfakes are also increasingly being deployed by fraudsters to conduct market and stock manipulation and various other financial crimes. For example, a voice deepfake was used to scam a CEO out of $243,000. Tech analyst firm Forrester predicts that deepfakes could end up costing businesses as much as $250 million next year. Symantec has reported three successful audio attacks on private companies that involved a call from the “CEO” to a senior financial officer requesting an urgent money transfer. Just imagine how an incident like this would affect your company. Various political players, including political agitators, hacktivists, terrorists, and foreign states, can use deepfakes in disinformation campaigns to manipulate public opinion and undermine confidence in a given country’s institutions. Several incidents have demonstrated how effective these can be:

1. In a 2018 deepfake video, Donald Trump offered advice to the people of Belgium about climate change. The video was created by the Belgian political party sp.a to attract people to sign an online petition calling on the Belgian government to take more urgent climate action. The video provoked outrage about the American president meddling in Belgium’s climate policy.

2. While these are examples of limited political influencing, other deepfakes may have a more lasting impact. In Central Africa in 2018, a deepfake of Gabon’s long-unseen president Ali Bongo, who was believed to be in poor health or dead, was cited as the trigger for an unsuccessful coup by the Gabonese military.

3. In June 2019, a high-quality deepfake by two British artists featuring Facebook CEO Mark Zuckerberg racked up millions of views. The video falsely portrays Zuckerberg crediting Spectre, a fictional evil organization from the James Bond series, with teaching him how to take total control of billions of people’s confidential data and thus own their future.

Positive effects

In his Forbes article titled “Deepfakes are a net positive for humanity,” Simon Chandler states that the machine learning technology underlying deepfakes will have beneficial impacts in other areas. In medicine, for instance, UCL AI professor Geraint Rees predicts that “the development of deep generative models raises new possibilities in healthcare.” One such possibility is the use of deep learning to synthesise realistic data that will help researchers develop new ways of treating diseases without using actual patient data.

The film industry can benefit from deepfake technology in multiple ways. For example, it can help create digital voices for actors who lost theirs to disease, or update film footage instead of reshooting it. Deepfake technology also allows for automatic and realistic voice dubbing of movies into any language, allowing diverse audiences to better enjoy films and educational media. A 2019 global malaria awareness campaign featuring David Beckham broke down language barriers through an educational ad that used visual and voice-altering technology to make him appear multilingual.

Similarly, the technology can have positive uses in the social and medical fields. Deepfakes can help people deal with loss by digitally bringing a deceased friend “back to life”, potentially helping a grieving loved one say goodbye. A Scottish company, CereProc, trained its deepfake algorithms on audio recordings of former president John F. Kennedy and was able to create ‘lost’ audio of the speech JFK was due to give in Dallas on November 22, 1963, the day he was murdered. Last year, the Illinois Holocaust Museum and Education Center showcased holographic images of 15 Holocaust survivors on rotation, and visitors had the chance to put their questions to the survivors’ holograms. The interviews with the survivors were recorded by a sphere of cameras, and each took five days to shoot. The technology doesn’t stop there: it is also possible to use GANs to see how we are going to look as we age. Artificial intelligence can now even show us how we will look in 50 years, preparing us for what is yet to come.

With the use of AI and GANs, developers can now generate an image of a human figure of any size, body shape, and skin colour and predict how a particular clothing item would fit a person of that physique. You can even add accessories to the image to see if your favourite shoes or handbag would look good with the outfit. An obvious potential use is quickly trying on clothes online; the technology not only allows people to create digital clones of themselves and have these personal avatars travel with them across e-stores, but also to try on a bridal gown or suit in digital form and then virtually experience a wedding venue.

Detection & prevention efforts

Detecting deepfakes is a hard problem. They don’t have any obvious or consistent signatures, so media forensic experts must sometimes rely on subtle cues that are hard for deepfakes to mimic. Tell-tale signs of a deepfake include abnormalities in the subject’s breathing, pulse, or blinking. A normal person, for instance, typically blinks more often when talking than when not. Subjects in authentic videos follow these patterns, whereas subjects in deepfakes often don’t. Amateurish deepfakes can, of course, be detected by the naked eye; other signs that machines can spot include a lack of eye blinking or shadows that look wrong. But the GANs that generate deepfakes are getting better all the time, and soon we will have to rely on digital forensics to detect deepfakes, if we can detect them at all.
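
As an illustration of the blinking cue, the sketch below uses the eye aspect ratio (EAR) heuristic common in the forensics literature. It assumes per-frame eye landmarks have already been extracted by some facial-landmark detector (e.g. dlib or MediaPipe); the threshold and frame-count values are illustrative assumptions:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of (x, y) landmarks around one eye, in the
    common 6-point ordering (corner, top pair, corner, bottom pair)."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.21, min_frames=2):
    """Count blinks as runs of consecutive frames whose EAR dips below
    the threshold. Both parameter values are illustrative assumptions."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks
```

A clip of a talking subject whose blink count stays far below the typical human rate would then be flagged for closer forensic inspection.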

The use of deep learning techniques such as CNNs and GANs has made swapped face images more challenging for forensics models, as they can preserve the pose, facial expression and lighting of the photographs. Early attempts at detection were based on handcrafted features obtained from artefacts and inconsistencies in the fake video synthesis process. Recent methods, on the other hand, apply deep learning to automatically extract salient and discriminative features for detecting deepfakes. Traditional media forensic methods based on cues at the signal level (e.g., sensor noise, CFA interpolation and double JPEG compression), physical level (e.g., lighting conditions, shadows and reflections) or semantic level (e.g., consistency of metadata) can be applied here, but AI-generated fake face videos pose challenges to these techniques and to the accountability of social networks.
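
A minimal sketch of the deep-learning approach, framed as a binary real-vs-fake classifier in PyTorch, is shown below. The toy architecture and dummy data are assumptions for illustration; production detectors typically use far deeper backbones:

```python
import torch
import torch.nn as nn

# Toy binary classifier: 64x64 face crop in, one "fake" logit out.
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 1),  # logit: positive suggests "fake", negative "real"
)

faces = torch.rand(4, 3, 64, 64)                     # dummy face crops
labels = torch.tensor([[1.0], [0.0], [1.0], [0.0]])  # 1 = fake, 0 = real

# One step of the usual supervised training loop.
loss = nn.functional.binary_cross_entropy_with_logits(detector(faces), labels)
loss.backward()
```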

Given the obvious danger that deepfake technology will be used by nefarious players to cause disinformation, confusion, unrest and public opinion manipulation, there has been growing concern about how best to tackle the technology’s inherent potential for harm. The fact that deepfakes are easy to create while the technology to detect them is not yet mature enough to be reliable makes it urgent for stakeholders to prioritize other controls to both detect and prevent their creation and spread. At the heart of this effort is protecting the truth and exposing falsehood as actually being false. Several approaches have been suggested, including legislation, user training, operational controls such as fact-checking, and technology interventions.

Training of users, who are the main targets of social media campaigns built on fake media, as well as enterprise users, should be a priority for both enterprises and governments. Users must be trained to know that all media can be faked and that they can no longer simply trust any audio, video or picture purporting to pass a particular message. Independent cyber security expert Rod Soto views deepfakes as the next level of social engineering attacks. “Deepfakes, either in video or audio form, go far beyond the simple email link, well-crafted SMS/text, or a phone call that many criminals use to abuse people’s trust and mislead them into harmful actions,” Soto said. “They can indeed extend the way social engineering techniques are employed.” It is crucial for organizations to enhance enterprise security by ensuring that employees learn the lingo and understand cutting-edge social engineering methods. Awareness, however, is not the sole prevention strategy available to the enterprise. “While awareness always works, when facing these types of threats, it is necessary to develop anti-deepfake protocols that can provide users and employees with tools to detect or mitigate these types of attacks,” he said.

Legal controls have also been proposed to address the risk of deepfakes, since they cross the line on personal privacy, copyright, intellectual property and identity theft, as well as public safety and health. Are deepfakes legal? It’s a thorny question, and unresolved. Across the world, countries have passed freedom of expression acts. In the US there’s the First Amendment to consider, then intellectual property law, privacy law, plus the new revenge porn statutes many states have enacted of late. The First Amendment protects the right of a politician to lie to people. It protects the right to publish wrong information, by accident or on purpose. The marketplace of ideas is meant to sort truth from falsehood, not a government censor, or a de facto censor enforcing arbitrary terms of service on a social media platform.

Earlier this year, the EU published a strategy for tackling disinformation, which includes relevant guidelines for defending against deepfakes. Across all forms of disinformation, the guidelines emphasize the need for public engagement that would make it easier for people to tell where a given piece of information has come from, how it was produced, and whether it is trustworthy. The EU strategy also calls for the creation of an independent European network of fact-checkers to help analyse the sources and processes of content creation. In the United States, lawmakers from both parties and both chambers of Congress have voiced concerns about deepfakes. Most recently, Representatives Adam Schiff and Stephanie Murphy, as well as former representative Carlos Curbelo, wrote a letter asking the director of national intelligence to find out how foreign governments, intelligence agencies, and individuals could use deepfakes to harm U.S. interests and how they might be stopped. One legislative option could be to walk back social media firms’ legal immunity for the content their users post, making not only users but also the platforms more responsible for posted material; President Donald Trump already moved against that immunity via an executive order following a feud with Twitter over a factually misleading tweet.

Major technology firms as well as government agencies have been actively trying to develop reliable detection techniques. The United States Defense Advanced Research Projects Agency (DARPA) initiated a research scheme in media forensics (named Media Forensics, or MediFor) to accelerate the development of methods for detecting fake digital visual media. More recently, Facebook Inc., teaming up with Microsoft Corp. and the Partnership on AI coalition, launched the Deepfake Detection Challenge to catalyse more research and development in detecting and preventing deepfakes from being used to mislead viewers. A recent report by Deeptrace, an Amsterdam-based start-up that aims to counter deepfakes, identified 14,678 deepfake videos online, the overwhelming majority of which were porn. In absolute terms this is still a relatively low number on which to train AI algorithms.

To help address the lack of training data, Facebook, Google, Amazon Web Services and Microsoft recently came together to announce the Deepfake Detection Challenge. The Challenge, which is due to launch next month, will release a specially created dataset of deepfakes using paid actors for researchers around the world to use as training data for their models. Developing effective deepfake detection systems is obviously in the public good, but it’s not entirely an act of altruism by the tech giants, who are likely to be on the front lines of enforcing legislation like the Anti-Deepfake Bill and therefore have a strong incentive to find practical detection mechanisms. Presently, many social media companies do not remove disputed content; rather, they down-rank it to make it more difficult to find by making it less prominent in users’ news feeds. Recently, Twitter flagged a tweet by President Trump as misleading and glorifying violence in the wake of nationwide unrest over the racially charged killing of an unarmed Black man in Minneapolis. Facebook cuts off any content identified as false or misleading by third-party fact-checkers from running ads and making money; the company works with over 50 fact-checking organizations, academics, experts, and policymakers to find new solutions. Among news media companies, the Wall Street Journal and Reuters have formed corporate teams to help train their reporters to identify fake content and to adopt detection techniques and tools such as cross-referencing locations on Google Maps and reverse image searching.
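
As a small illustration of such cross-referencing tools, the sketch below uses perceptual hashing (via the real Pillow and ImageHash Python libraries) to compare a suspect video frame against an archived original; the file names and distance cut-off are assumptions:

```python
from PIL import Image   # pip install Pillow
import imagehash        # pip install ImageHash

# Perceptual hashes survive re-encoding, resizing and mild edits, unlike
# cryptographic hashes, so they suit "have we seen this frame?" checks.
suspect = imagehash.phash(Image.open("suspect_frame.png"))
archived = imagehash.phash(Image.open("archived_frame.png"))

# Subtracting two hashes gives their Hamming distance; a small distance
# (the <= 10 cut-off here is an illustrative assumption) suggests the
# suspect frame was derived from the archived footage.
distance = suspect - archived
if distance <= 10:
    print(f"likely derived from archived footage (distance {distance})")
else:
    print(f"no match (distance {distance})")
```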

Conclusion

Anyone can download deepfake software and create convincing fake videos in their spare time. So far, deepfakes have mostly been limited to amateur hobbyists putting celebrities’ faces on porn stars’ bodies and making politicians say funny things. However, it would be just as easy to create a deepfake of an emergency alert warning that an attack was imminent, destroy someone’s marriage with a fake sex video, or disrupt a close election by dropping a fake video or audio recording of one of the candidates days before voting starts. As the technology improves and becomes more commonplace, what’s stopping anyone from claiming that something they actually said was the result of a deepfake? For example, what if an enterprise were victimized by a major data breach, and the CEO, after initially being honest about the attack, decided to claim the admission was a deepfake? Which story would customers believe? This concept has been discussed in legal circles and is referred to as the “liar’s dividend”: if anyone can claim that what they said is the result of a deepfake, how do we distinguish the truth anymore? Steve Grobman, chief technology officer at cyber security firm McAfee, and Celeste Fralick, the company’s chief data scientist, warned in a keynote speech at the RSA security conference in San Francisco that the technology has reached the point where you can barely tell with the naked eye whether a video is fake or real.

Marco Rubio, the Republican senator from Florida and 2016 presidential candidate, called deepfakes the modern equivalent of nuclear weapons, telling an audience in Washington a couple of weeks ago that they had become the biggest existential threat to American democracy. “As dangerous as nuclear bombs? I don’t think so,” Tim Hwang, director of the Ethics and Governance of AI Initiative at the Berkman Klein Center and MIT Media Lab, tells CSO. “I think that certainly the demonstrations that we’ve seen are disturbing. I think they’re concerning and they raise a lot of questions, but I’m sceptical they change the game in a way that a lot of people are suggesting.” Seeing is believing, the old saying has it, but the truth is that believing is seeing: human beings seek out information that supports what they want to believe and ignore the rest.
