'Deepfake doctors' are fuelling health scams on social media, BMJ warns

ITV News speaks to Hilary Jones, one of the doctors who was imitated using a deepfake


Deepfake content of some of Britain's most recognisable television doctors is being shared on social media to sell scam products, the British Medical Journal (BMJ) has found.

Trusted names including Hilary Jones, Rangan Chatterjee and the late Michael Mosley are being used to promote products that purport to treat a range of ailments, according to journalist Chris Stokel-Walker.

Some of the videos offer purported cures for health problems such as high blood pressure and diabetes, and advertise supplements such as CBD gummies.

"Deepfaking" is the use of artificial intelligence (AI) to map a digital likeness of a real-life human being onto a body that isn't theirs, giving creators the power to make fake videos of individuals.

John Cormack, a retired doctor based in Essex, worked with the BMJ to try to gauge the scale of deepfaked doctor content across social media. He found many of the videos on Facebook and Instagram.

Hilary Jones, Michael Mosley and Rangan Chatterjee are among TV doctors who have been deepfaked on social media sites such as Facebook. Credit: PA

“The bottom line is, it's much cheaper to spend your cash on making videos than it is on doing research and coming up with new products and getting them to market in the conventional way,” he said.

Dr Hilary Jones, a GP and television personality, said the problem of people deepfaking his likeness and misrepresenting his views appears to be worsening.

He employs a social media specialist to find the videos and have them taken down.

“There’s been a significant increase in this kind of activity,” he said.

"Even if you do [take them down], they just pop up the next day under a different name.”


How to spot a deepfake

Although some deepfakes are convincing at first glance, there are a number of telltale signs that can give them away.

  • Does anything appear off?: AI may struggle to render eyes, mouths, hands and teeth. A synthetic video is easier to detect than a still image: you might notice that the person's eyes do not blink in the usual way, or that their voice does not match the movements of their mouth (a rough automated version of the blink check is sketched after this list).

  • Look for clues in small details: If the person has glasses or facial hair, does it appear authentic? If a pair of glasses has too much or too little glare, or if the angle of the glare doesn't change as the person moves, this may be a red flag, MIT warns. Similarly, some creators add synthetic facial hair - does it look real?

  • The bigger picture: If the lighting seems off, the person's posture appears unnatural, or the edges of the image are blurred, the media might be fake, antivirus firm Norton says.

  • Consider the source: Has the media been published by a reliable source? If you see a video of Rishi Sunak speaking, for example, you might expect it to be posted by Number 10's official Twitter account, or by a reputable news organisation.
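
For the technically curious, the blinking cue above can even be checked automatically. Below is a minimal Python sketch that counts blinks in a video using the "eye aspect ratio" heuristic with the open-source dlib and OpenCV libraries. The threshold value, the count_blinks helper and the local landmark-model file are illustrative assumptions rather than anything from the BMJ's investigation, and real deepfake detectors are far more sophisticated.

# A rough, assumption-laden sketch: count blinks in a video via the
# "eye aspect ratio" (EAR) heuristic of Soukupova & Cech (2016).
# Assumes one face per frame and that dlib's 68-point landmark model
# (shape_predictor_68_face_landmarks.dat) has been downloaded locally.
from scipy.spatial import distance as dist
import cv2
import dlib

EAR_BLINK_THRESHOLD = 0.21  # illustrative: below this, treat the eye as closed

def eye_aspect_ratio(eye):
    # Ratio of the vertical eye-landmark distances to the horizontal one;
    # it drops sharply when the eyelid closes.
    a = dist.euclidean(eye[1], eye[5])
    b = dist.euclidean(eye[2], eye[4])
    c = dist.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def count_blinks(video_path):
    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            # Landmark indices 36-41 and 42-47 cover the two eyes.
            ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
            if ear < EAR_BLINK_THRESHOLD:
                eye_closed = True
            elif eye_closed:
                blinks += 1  # eyelid reopened: count one blink
                eye_closed = False
    cap.release()
    return blinks  # people typically blink around 15-20 times a minute

An unnaturally low blink count over a minute or more of footage would be one small piece of evidence, never proof on its own.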


The rise of videos in which people's identities are co-opted to create deceptive content has raised concerns elsewhere too, from non-consensual sexual imagery and revenge porn to attempts to influence elections.

Henry Ajder, an expert on deepfake technology, said the phenomenon is an inevitable part of the AI revolution.

He said: “The rapid democratisation of accessible AI tools for voice cloning and avatar generation has transformed the fraud and impersonation landscape.”

A spokesperson for Meta, the social media giant behind Facebook and Instagram, said: “We will be investigating the examples highlighted by the British Medical Journal.

"We don’t permit content that intentionally deceives or seeks to defraud others, and we’re constantly working to improve detection and enforcement.

"We encourage anyone who sees content that might violate our policies to report it so we can investigate and take action.”

