Why is there increasing concern about deepfake videos?
Natalia Jorquera puts the latest viral face-swapping app to the test
Theresa May singing Total Eclipse of the Heart. Impressionists who turn into the famous faces they're mimicking. A talking Mona Lisa. Donald Trump mashed up with Mr Bean.
It's not hard to see why deepfake videos quickly go viral - and now a new app in China has upped the ante again.
Zao allows users to turn themselves into movie stars or pop singers from a single photo in seconds.
But the spread of doctored footage is increasingly being viewed as a problem.
So how do they work, where did they start and why are they seen as such a threat?
What is a deepfake?
Imagine taking Barack Obama's face and the words of an actor, and merging them to make a realistic and believable video where the former president of the United States tears into Donald Trump.
That's a deepfake.
And the real-life example of that entirely fake video was enough to fool some people into believing Obama really had slated the current Oval Office incumbent.
How are the videos made?
Deepfakes take advantage of major advances in artificial intelligence (AI), which have become so accessible that there are now countless online tutorials showing how to make your own.
Huge collections of images and audio are used to train computer models, which can then doctor audio, video and pictures.
The facial mapping works in a similar way to the technology used by some social media apps to redden your face or add bunny ears - but in far more detail.
And two years after they were first seen, they're only getting more sophisticated.
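For the curious, a rough sketch of the idea behind the classic face-swap method is below: one shared encoder learns pose and expression, while a separate decoder for each person learns their identity. The Python (PyTorch) code, the 64x64 face crops and the layer sizes are illustrative assumptions, not any particular app's pipeline.

```python
# A minimal sketch of the shared-encoder / two-decoder autoencoder
# behind classic face-swap deepfakes. All sizes are illustrative and
# assume aligned 64x64 RGB face crops.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a face into a latent code capturing pose/expression."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a face from the latent code; one decoder per identity."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# Training reconstructs person A through decoder_a and person B through
# decoder_b, sharing one encoder, so the latent space learns what the
# faces have in common (pose, expression, lighting).
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

def swap_a_to_b(face_of_a):
    # The swap: encode A's face, decode with B's decoder, producing
    # B's identity wearing A's pose and expression.
    with torch.no_grad():
        return decoder_b(encoder(face_of_a))

fake = swap_a_to_b(torch.rand(1, 3, 64, 64))  # dummy tensor stands in for a real crop
print(fake.shape)  # torch.Size([1, 3, 64, 64])
```

Real tools add face detection, alignment and blending around this core, but the encoder-decoder swap is the trick that moves one face onto another.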
How did deepfakes start?
The first deepfakes are credited to a Reddit user called "deepfakes", who began posting them in 2017.
The technology gained widespread attention after the faces of prominent celebrities were deepfaked onto the bodies of porn stars - then shared online.
Major sites have since banned such content from their platforms, but the number of deepfakes has only grown.
Why are deepfakes seen as such a threat?
Because when done right they're so convincing.
Earlier this year, a TV station in the United States fired an employee after a deepfake of Donald Trump was aired.
The video purported to show the president giving a speech in the Oval Office.
Producers failed to notice that Mr Trump's head was larger and a deeper shade of orange than usual, and that his tongue stuck out between sentences.
In an era of claims of fake news, the AI-based technology makes it harder for anyone consuming media to have faith that what they're seeing and hearing is real.
Are those who create deepfakes being punished?
In the UK, those who distribute deepfake material can be prosecuted for harassment.
Those whose faces have been doctored onto pornographic footage have likened it to revenge porn.
However, calls to make the distribution of deepfakes a specific crime are growing louder.
Who's calling them a threat to democracy?
The issue is gaining urgency in the US ahead of the 2020 election.
At a recent House Intelligence Committee hearing on national and election security risks, the committee's Democratic chairman Adam Schiff warned that deepfakes posed a threat.
"Social media companies and platforms have taken a variety of actions since 2016 to address disinformation campaigns, but I am concerned they remain unprepared and vulnerable to sophisticated and determined adversaries," he said.
What action are social media platforms taking?
Facebook is spending £5.8 million on new research to help it detect manipulated content.
The threat on the platform was underlined earlier this year when edited versions of footage of the Christchurch gun attack - which was itself broadcast live on Facebook - bypassed its checking systems.
"This work will be critical for our broader efforts against manipulated media, including deepfakes," said Guy Rosen, Facebook vice president of integrity.
"We hope it will also help us to more effectively fight organised bad actors who try to outwit our systems as we saw happen after the Christchurch attack."
The University of Maryland, Cornell University and the University of California, Berkeley, are working to develop new techniques for detecting manipulated media on social networks.
That includes ways to distinguish between people who unwittingly share manipulated content and those who deliberately create it.
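As a toy illustration of what such detection research involves, the sketch below scores a single face crop as real or manipulated using a small binary classifier. The architecture and input size are hypothetical assumptions for illustration, not the universities' actual methods, which would also need large labelled training datasets and ways to model sharing behaviour.

```python
# A toy sketch of frame-level deepfake detection: a small, untrained
# binary classifier scoring a face crop as real or manipulated.
# Illustrative only; real detectors are trained on labelled examples.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),   # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32x32 -> 16x16
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                    # global average pool
    nn.Flatten(),
    nn.Linear(32, 1),                           # one logit: fake vs real
)

face_crop = torch.rand(1, 3, 64, 64)            # dummy stand-in for a detected face
score = torch.sigmoid(detector(face_crop))      # probability the crop is manipulated
print(f"manipulation score: {score.item():.2f}")
```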