The problem with quotes on the internet is that it’s so hard to figure out who actually said them. – Abraham Lincoln
What do Nancy Pelosi, Luke Skywalker, and Volodymyr Zelensky all have in common? Deepfakes.
If you’ve seen The Mandalorian or The Book of Boba Fett (spoiler alert if you haven’t), you know that Luke Skywalker shows up during the series. However, it isn’t Mark Hamill playing Luke. Disney hired Max Lloyd-Jones to stand in for Luke, then deepfaked a young Mark Hamill’s face onto him. There’s a video that goes into depth about how this was done, and the Disney+ series Disney Gallery / Star Wars: The Mandalorian S2:E2: Making of Season 2 Finale also gives a good overview of the process. The effect, even if not 100% convincing, looked incredible, and when Luke returns later in The Book of Boba Fett, the effect looks even better (thanks to Disney hiring a Youtuber who had remade the original scene better than Disney did).
While the effect itself is still imperfect, it can be very convincing. It might be noticeable now, but imagine what this technology will be like a few years down the line. Soon, these videos may become indistinguishable from real ones, which means it may become impossible to know whether any video you’re shown is real or faked.
Last week, we saw one of the first major malicious uses of a deepfake in a public setting since the start of the war in Ukraine. A deepfake was created and released online which supposedly showed Volodymyr Zelensky, the current Ukrainian president, telling Ukrainians to lay down their weapons and surrender. It was released in the middle of a Russian invasion in which Zelensky’s incredible PR team has made him the face of the Ukrainian resistance. Fortunately, the video was quickly debunked, but it raises an important question: what effects will this technology have on our society in the future? In a world where you can make a convincing video of anyone saying anything you want them to say, how can we trust any video that surfaces online?
We’ve already seen videos slightly altered (without deepfakes) to make Nancy Pelosi look drunk, and videos taken out of context to make kids at a pro-life rally seem racist, but these older methods were limited: you could only manipulate what people had already said, not add your own content. With deepfakes, however, you have full control of the script, acting, setting, and whatever else you want to change. You can put whatever words you like into the actor’s mouth. How will this technology affect the future of digital media?
In this article, we’ll go over how deepfakes work and the dangers they hold for our future.
What is a deepfake?
Usually, a deepfake refers to a video in which one person’s face has been swapped onto a recording of someone else. The technology is still imperfect: for now, you can usually tell that a video is a deepfake. But it has developed rapidly over the past few years, and spotting fakes is getting harder and harder as time goes on.
To create a deepfake, people train an AI model on photos or videos of a person (the source). The model uses this data to learn to draw better and better pictures of that person’s face. You need to feed the model a lot of images and/or videos of your source, then give it time to train. The more time the model has to train, the better the final image will look. Training also takes a lot of computing power, which is why I’m not making a deepfake myself.
After enough training, you take a target video (the one you want to insert your source into) and run it through the model. Since the model has been taught how to draw pictures of your source, it will draw your source’s face over the face of the person in the target video.
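The classic approach behind this (popularized by tools like DeepFaceLab) trains a shared encoder plus one decoder per identity: the encoder captures pose and expression, and each decoder supplies one person’s appearance. The sketch below only illustrates that data flow. The dimensions are assumptions, and the “networks” are untrained random linear maps standing in for real neural networks, so this produces noise, not a working deepfake.

```python
# Minimal sketch of the shared-encoder / two-decoder deepfake architecture.
# A real pipeline trains these layers on thousands of aligned face crops.
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64   # flattened 64x64 grayscale face crop (assumed size)
LATENT_DIM = 128     # size of the shared latent code (assumed size)

# Shared encoder: maps any face to a latent code capturing pose/expression.
W_enc = rng.normal(size=(LATENT_DIM, FACE_DIM)) * 0.01

# One decoder per identity: reconstructs a face of THAT person from a code.
W_dec_source = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.01
W_dec_target = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.01

def encode(face):
    return W_enc @ face

def decode(code, W_dec):
    return W_dec @ code

# Training (not shown) would minimize reconstruction error so that:
#   decode(encode(source_face), W_dec_source) ~ source_face
#   decode(encode(target_face), W_dec_target) ~ target_face

# The swap: encode a frame of the TARGET video, but decode it with the
# SOURCE decoder. The encoder contributes the pose, the decoder the
# identity, so the output is the source's face in the target's pose.
target_frame = rng.normal(size=FACE_DIM)
swapped_face = decode(encode(target_frame), W_dec_source)
print(swapped_face.shape)  # (4096,)
```

The key design point is that both identities go through the same encoder during training, which forces the latent code to describe pose and expression rather than identity, making the decoder swap possible.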
Dangers of deepfakes
As this technology becomes more and more mainstream, however, I believe it will have an overall negative impact on our society. Imagine the confusion such videos could cause during a presidential campaign. What will happen as more and more deepfakes are released making candidates seem inept or incapable of being president? How will we be able to tell the difference between real and faked videos?
Deepfakes will also give celebrities and candidates plausible deniability. If an incriminating video of someone is captured, what’s to stop them from simply claiming that it’s a deepfake and that nothing ever happened? It will be impossible to tell whether a video released on social media by a “whistleblower” is legitimate or whether it’s merely part of a misinformation campaign launched by their opposition.
Even worse, what will happen when a deepfake comes out of the president announcing that he has declared war or begun launching nukes at another nuclear-armed country? While hopefully such footage wouldn’t be aired on national television, the fact that it would be shared around on social media would certainly cause confusion, panic, and chaos. Combining this technology with a hack of a government’s official social media accounts could be enough to provoke the other country into retaliating, resulting in mutually assured destruction. While there would probably be systems in place to prevent such an occurrence (for example, the “launched” nukes wouldn’t show up on any radars), if a country is paranoid enough, you never know what could happen.
On a separate note, what will happen when this technology is misused to manipulate stocks? All someone has to do is release a deepfake of a CEO declaring something controversial or leaking some future product. Then, the actor can buy or sell stocks in order to make a profit. In a market where even the smallest news can have dramatic impacts on stock prices, I could see this becoming a reality very soon.
This doesn’t even mention one of the most mainstream uses of deepfakes nowadays, which I will refrain from going into in the interests of keeping this article family-friendly.
In your own personal life, if you’re making a phone or video call, how will you even know that you’re talking to the correct person on the other end? How will you know your call hasn’t been intercepted and that you’re not interacting with a deepfaked version of your friend or loved one? (I’ve got a related article about this coming soon – subscribe below if you’re interested!) Some platforms (like Signal) have come up with solutions to this, but calls on an insecure platform could still be intercepted and faked. What will happen when you receive a realistic video claiming that your friend or your child has been kidnapped, one which shows a live video of the person in a chair, crying out for help?
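The general idea behind those platform solutions is out-of-band identity verification: both parties derive the same short code from their public keys and compare it over a channel the attacker can’t control (for example, in person). The sketch below is only an illustration of that idea; Signal’s actual safety-number algorithm differs, and the key values here are made up.

```python
# Hedged sketch of out-of-band identity verification: both sides of a
# call derive the same short "safety code" from the two public keys and
# compare it over a trusted channel. Illustrative only; real protocols
# (e.g. Signal's safety numbers) use a different construction.
import hashlib

def safety_code(key_a: bytes, key_b: bytes) -> str:
    # Sort the keys so both parties compute an identical code no matter
    # whose key they list first.
    digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).hexdigest()
    # Show a short, human-comparable prefix, grouped for readability.
    return " ".join(digest[i:i + 4] for i in range(0, 24, 4))

# Hypothetical keys for illustration.
alice_view = safety_code(b"alice-public-key", b"bob-public-key")
bob_view = safety_code(b"bob-public-key", b"alice-public-key")
assert alice_view == bob_view  # both parties see the same code
print(alice_view)
```

If an attacker intercepts the call and substitutes their own key, the codes the two real parties compute will no longer match, which is what reveals the interception.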
When people said “Don’t believe everything you read on the internet”, they weren’t kidding! Now, you won’t be able to believe anything you read, hear, or even see on the internet!
How can we as a society respond to this threat?
To be honest, I’m not completely sure. While it may be possible to point out deepfakes now, it will become harder and harder as time goes on. Eventually, we will reach the point where no video is safe from scrutiny and every video is subject to questioning. When apps like FaceApp own the right to use more than 150 million people’s faces without their permission, and companies like Disney 3D scan nearly all of their actors, organizations may even have the legal right to deepfake anyone they like without needing their permission.
We as a society need to figure out ways to determine which videos are real and which are fake. We also have to do this soon, before this whole situation spirals out of control. If we can’t figure out this threat, the information age will end only to be replaced with the misinformation age.
We do, however, need to bear in mind that this issue isn’t caused by technology itself. Addressing deepfakes without trying to fix the underlying problem will only bring about more harm in the future. Technology, like a hammer, is merely a tool. It can be used for good or it can be used for evil; it isn’t evil in and of itself. As DeepFaceLab’s Github page points out, “You don’t need [a] deepfake detector. You need to stop lying.” The problem is not that deepfakes exist, it’s that the technology to create them is so often misused. I believe that deepfakes in and of themselves are neutral and could be used for good (though I can’t come up with any examples of how) – it’s just a question of who is using them and why.
We need people to be virtuous such that misusing this technology doesn’t ever cross their minds. Just like you have no desire to murder your best friend, we should develop citizens who would have no desire to misuse deepfake technology. As most people can say from experience, however, something like this is never going to happen without some sort of Divine intervention.
Deepfakes are becoming an increasingly important technology, and their effects on society remain to be seen. Ultimately, while deepfakes aren’t inherently evil, we need to be aware of this technology and of the impacts it will have on our modern society.
My ideas and opinions aren’t law, however. I’d love to hear others’ opinions on whether or not they agree with me. We can only learn from each other through dialogue and the sharing of ideas. Feel free to leave a comment whether you think I’m right, wrong, or somewhere in between! I’m honestly uncertain – how should we respond to this threat? What can (and should) we do about it?
- Originally inspired by euronews’ article Deepfake Zelenskyy surrender video is the ‘first intentionally used’ in Ukraine war by Matthew Holroyd & Fola Olorunselu. I originally saw this posted by u/biroene on Reddit
- Linus Tech Tips’ video I can safely retire now. and Corridor Crew’s video We Made The Best Deepfake on The Internet have good demonstrations where the respective channels create their own deepfakes
- Disney Gallery / Star Wars: The Mandalorian S2:E2: Making of Season 2 Finale talks about the process of deepfaking Luke Skywalker
- DeepFaceLab’s tutorials for some terminology
- I don’t remember where I first heard the opening joke, but I didn’t come up with it
- Javi Ramirez’s article FaceApp stole all your data – here’s how they did it and how you could have protected against them, archived on the Internet Archive