Can you believe your eyes? How to spot deepfakes
By Elaine McCallig and Connor Parker, ITV News
Deepfakes are creeping into our society more and more, and with AI advancing every day spotting them is becoming increasingly difficult.
Recently Katy Perry's own mother commented on her daughter's dress at the Met Gala, but the popstar did not attend the show, her mother had seen a deepfake of her.
Perry shared the conversation with her mother on Instagram, replying: "AI got you too, BEWARE!"
Images of Perry and Rihanna, neither of whom attended the Met Gala, flooded the internet at the same time pictures of other guests' outfits were being shared.
Many were fooled, and the truth only emerged after keen-eyed observers and fans who knew the guest list pointed out that neither superstar was there.
Perry's warning to her mother rings truer now than ever: recent months have seen several prominent fake pictures convince many internet users.
People were quick to believe that a fake photo of Donald Trump snarling at police officers as they tackled him to the ground during an arrest was real.
Others were eager to believe an image of Pope Francis looking nonchalant, wrapped up in a giant white Balenciaga puffer jacket.
Deepfakes are created using "deep learning" artificial intelligence, which digitally alters an image, video, or other piece of media.
While the ability to manipulate photos and create fake images isn't new, AI imaging tools such as Midjourney, the artistic image generator DALL-E, and others have made widespread use easier.
They can quickly generate unique images with little more than a simple text prompt from users.
How to spot a deepfake
Although some deepfakes are convincing at first glance, there are a number of ways they can be debunked.
Does anything appear off?: AI may struggle to render eyes, mouths, hands and teeth. A synthetic video is easier to detect than an image, as you might notice the person's eyes are not blinking in the usual way, or their voice does not match the movements of their mouth.
Look for clues in small details: If the person has glasses or facial hair, does it appear authentic? If a pair of glasses has too much or not enough glare, or if the angle of the glare doesn't change when a person moves, this might be a red flag, MIT warns. Similarly, facial hair may be added by some creators - does it look real?
The bigger picture: If the lighting seems off, the person's posture appears unnatural, or the edges of the image are blurred, the media might be fake, antivirus firm Norton says.
Consider the source: Has the media been published by a reliable source? If you see a video of Rishi Sunak speaking, for example, you might expect it to be posted by Number 10's official Twitter account, or by a reputable news organisation.
As the technology becomes more sophisticated, however, it can be increasingly difficult to rely on visual clues.
“Some of the things that have been tells are starting to go away because these tools are getting so much better at creating compelling content,” Andrew Lewis, a doctoral researcher at the University of Oxford Centre for Experimental Social Science, told ITV News.
How easy is it to create a deepfake? ITV News' Natalia Jorquera previously put an app released in 2019 to the test
While reimagining the Pope's sartorial choices might appear harmless, access to such technology has raised significant concerns about misinformation and revenge pornography.
Some 96% of the almost 15,000 deepfake videos on the internet in 2019 were non-consensual pornographic videos, according to a report published that year by Deeptrace.
All of the subjects of the deepfake videos found in the top five deepfake pornography websites were women, the report found.
Among those who have been targeted is actor Kristen Bell, who was horrified to discover her face had been edited onto sex workers' bodies.
Popular Twitch streamers and people with no following or fame have also been victimised.
'You're seeing yourself doing things personally I would never do'
ITV News' Sam Leader spoke to streamer Sweet Anita last month about the impact deepfake pornography has had on her
Deepfakes can also be used to create media that is either intentionally or unintentionally misleading - further muddying fact and fiction in an era of rife misinformation and disinformation.
Barack Obama, for example, did not refer to Trump as a "total and complete dip****" in a viral 2018 video, and former Prime Minister Boris Johnson did not tell us to vote for Jeremy Corbyn ahead of the 2019 general election.
Nor, perhaps unsurprisingly, did Corbyn endorse Johnson.
Is this video convincing? It is a deepfake created by research organisation Future Advocacy to highlight 'the dangers of deepfakes for democracy' ahead of the general election
Eliot Higgins, the founder of investigative journalism group Bellingcat, was the creator behind the fake Trump arrest photos.
“The Trump arrest image was really just casually showing both how good and bad Midjourney was at rendering real scenes,” Higgins wrote in an email.
“The images started to form a sort of narrative as I plugged in prompts to Midjourney, so I strung them along into a narrative, and decided to finish off the story.” He pointed out the images are far from perfect: in some, Trump is seen, oddly, wearing a police utility belt. In others, faces and hands are clearly distorted.
But it’s not enough that some deepfake creators clearly state in their posts that the images are AI-generated and solely for entertainment, an expert warns. “You’re just seeing an image, and once you see something, you cannot unsee it,” said Shirin Anlen, a media technologist at Witness, a New York-based human rights organisation that focuses on visual evidence.
Even when viewers are explicitly warned that content may be synthetic, it doesn't always help them distinguish fact from fabrication.
Only one in five British adults were able to correctly identify a deepfake, even when warned they would be shown synthetic media, according to a study by researchers from the University of Oxford, Brown University, and The Royal Society.
Lewis, a lead author on the report, told ITV News: "What we see in our research is that rather than really improving people’s ability to spot a deepfake, content warnings seem to be making people more sceptical about video content in general, irrespective of whether it’s real or fake.”
Warnings for potentially misleading content on social media can only be a good thing, Lewis said.
But whether they can help signpost misinformation isn't fully clear: the research suggests that these warnings alone don't necessarily improve people's ability to detect synthetic media with the naked eye.
If presented with a warning, he said, people might either trust the social media platform to tell them it is fake, or they will try to take a closer look for themselves.
“Even with content warnings we’re observing that people’s manual detection abilities are not so strong, so we think the takeaway is that fostering trust in moderators’ judgement is going to be essential because people are not able to just look a bit more closely and make the determination for themselves.”
Manipulated media can also be repurposed for more nefarious ends, such as the creation of propaganda.
Higgins points to a piece of fakery posted by the Russian ministry of foreign affairs.
The clip purportedly shows Ukrainian troops harassing a woman. When investigators probed the clip's geolocation, they found the video was actually filmed in a Russian-controlled area of Ukraine.
"What's really key here, and what's so difficult to fake, is geolocation," Higgins tweeted.
There are benefits to this type of technology, however.
Deepfakes have been used consensually for creative campaigns, allowing companies more flexibility to personalise the message depending on the audience.
A likeness of football legend David Beckham, for example, was used by the charity Malaria No More to call on world leaders to take action to defeat malaria in nine different languages.
Figures from the past can also be digitally reanimated.
Museums can use the technology to give visitors a more immersive experience, such as at The Dali Museum in Florida, where you can be greeted by the late surrealist, Salvador Dali, himself.
'I do not believe in my death': Dali, who died in 1989, has been digitally brought back to life at The Dali Museum
Deceased loved ones, with the permission of the families, can also be digitally resurrected.
"You look beautiful, just like when you were a little girl," a deepfake of Kim Kardashian's father said in a 2020 video.
Robert Kardashian, who rose to worldwide fame as OJ Simpson's defence attorney, died in 2003.
The video was a birthday gift from the socialite's then-husband Kanye West.
As synthetic images become ever more sophisticated - and increasingly difficult to discern from the real deal - the best way to combat visual misinformation is better public awareness and education, experts say.
Like any piece of technology, it all depends on whether it's used for good or for bad.
But making people aware that this technology not only exists, but will be prominent in years to come, is a net positive, Lewis said, as it highlights that the online space has shifted.
“The positive cases will come, it’s a question of whether we can control the negative ones.”