Why Medical Deepfakes are the New Public Health Crisis
By Arunima Rajan
Deepfake technology is increasingly sophisticated and accessible, making it a growing menace.
The WhatsApp message arrived at 10pm: “There's a breakthrough diabetes cure,” wrote her 78-year-old mother, sharing what appeared to be a video of renowned Indian cardiac surgeon Dr Devi Shetty. “I've been searching everywhere for more information.”
Mridula* sighed as she clicked on the Facebook link and recognised the telltale signs – the slightly unnatural movements, the inconsistent speech patterns. Her mother had been taken in by an AI-generated deepfake, one of countless such videos now flooding social media platforms and targeting vulnerable elderly users.
The video, showing a purported Dr Shetty announcing a big leap in diabetes treatment, was entirely fabricated using artificial intelligence. It is part of a spreading rash of fake cures and research findings that exploit public trust in healthcare professionals while leveraging ever-advancing AI technology.
Legal Battles against Misinformation
Given the profusion of such content, distinguishing the real from the false becomes an uphill battle. In a significant development last week, the Delhi High Court intervened in a similar high-profile medical deepfake case, ordering the removal of AI-generated videos featuring prominent cardiothoracic surgeon Dr Naresh Trehan from social media platforms.
The fraudulent content, which had by then garnered over 1.1 million views and 6,400 likes, showed Dr Trehan advocating natural remedies for urological conditions – a clear anomaly given that urology is not his speciality. The videos originated from a suspicious Facebook page called 'Maria Ideas', traced back to Kyiv, Ukraine.
Dr Trehan, Chairman and Managing Director of Medanta, Gurugram, together with the hospital's parent company, Global Health Limited, sought legal recourse through an injunction suit. Their petition detailed how advanced AI techniques, including image manipulation and voice synthesis, were used to create convincing but bogus medical advice. The suit specifically highlighted the targeted nature of the content, which focused on sensitive male health problems such as prostatitis and erectile dysfunction – conditions that fall squarely within urologists' domain, not cardiac surgeons'.
The legal action broke new ground by seeking damages not just for harm to reputation but also for infringing on personality rights and trademark violations. This marks a crucial precedent in India's emerging battle against spurious news, where deepfake technology is increasingly being weaponised to exploit gullible patients.
The cases of Dr Trehan and Dr Shetty represent a disturbing pattern: trusted medical professionals being digitally cloned to peddle unverified treatments, particularly targeting aged users through popular platforms such as Facebook and WhatsApp. These instances underscore the urgent need for both legal frameworks and digital literacy initiatives to combat this scourge of dangerous content.
This menace is not restricted to India. Across the world, deepfake technology has infiltrated healthcare, targeting even the most trusted figures. According to a report in The BMJ, Dr Hilary Jones — one of the UK’s most well-known doctors, known for his decades-long television presence — has been the subject of a slick digital hoax.
A deepfake video circulating on social media depicted Dr Jones endorsing a cure for high blood pressure. The video convinced many by exploiting the trust Dr Jones evoked among audiences.
According to the Merriam-Webster dictionary, a deepfake is an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that they neither did nor said.
Dr Sanjay Kalra, Consultant, Bharti Hospital, Karnal and chairperson, Education Working Group, International Society of Endocrinology, points out that such doctored news poses significant risks for two primary reasons. First, it leads individuals to make uninformed choices and opt for inappropriate treatments. Second, it hinders patients from choosing the right medical specialists and getting evidence-based treatment on time. “Just as justice delayed is justice denied, the same principle applies to healthcare: delayed access to appropriate treatment equates to denial of optimal care,” he explains.
He notes that there are several examples of misinformation and deepfakes relating to diabetes and weight management. For instance, videos often circulate featuring individuals or unverified sources discouraging the use of insulin or established obesity treatments. Such claims are baseless and lack scientific validation, creating confusion and mistrust among patients.
Kalra notes that online abuse and trolling of even genuine information are unfortunate realities, especially for healthcare professionals. “Having experienced this personally, I believe it is essential to develop resilience and remain committed to sharing accurate information. If criticism arises while promoting evidence-based practices, staying focused on advocating for what is correct is crucial. Repeatedly emphasising critical health messages, such as the need to manage diabetes and obesity to prevent cardiovascular complications, helps reinforce the importance of such issues. Collective efforts among healthcare professionals to maintain a unified narrative are key to fostering trust and credibility,” he explains.
To ensure one is getting authentic information, it is advisable to rely on websites with domain extensions such as .org, .gov and .in, he says. These domains are generally associated with non-commercial entities that prioritise evidence-based and scientifically sound information.
But what makes deepfakes and medical misinformation particularly dangerous in today’s digital age?
Prof. Balaraman Ravindran, who heads the Department of Data Science and Artificial Intelligence (DSAI), IIT Madras, says that advances in technology have made malicious manipulations far better at deceiving humans. What's more, these tools are readily accessible to experts and non-experts alike. Compounding the problem, the sheer volume of content on the Internet and hard-to-scrutinise social media platforms makes corroborating and verifying every piece of information and its source a formidable task.
Deepfakes and other forged content often carry markers of reliability, such as a familiar logo or symbol, that are enough to instil good faith. These markers frequently overshadow any visible deepfake artifacts – the flaws or distortions introduced when the image is generated.
How can individuals identify a deepfake video or fake medical advice online?
“Deepfake videos and audio often have a few discernible artifacts such as out-of-sync audio and lip movement, distortion near the corners of the mouth, eyes, nose and hairline, and audio with improper pauses between words, sudden changes in tone, intensity and voice, or no proper voice modulation,” says Ravindran. Readers can and should corroborate medical claims against reliable online sources, followed by advice from qualified healthcare professionals.
“There should be more deepfake detection tools for common people to use and fact-check,” he adds. “Similarly, more creation and adoption of browser-based extensions, APIs and integration of such tools with social media platforms will be of help.”
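One of the artifacts Ravindran describes, audio that does not track lip movement, can be illustrated with a toy heuristic. The sketch below is purely illustrative and not a real detector: production tools use trained neural models on raw video, whereas here the per-frame mouth-openness and audio-loudness series are invented numbers, and the 0.5 correlation threshold is an arbitrary assumption.

```python
# Toy illustration of the lip-sync check described above. In a real
# pipeline, mouth openness would come from facial-landmark tracking
# and audio energy from the soundtrack; here both are plain lists.

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def looks_out_of_sync(mouth_openness, audio_energy, threshold=0.5):
    """Flag a clip when lip movement barely tracks the audio."""
    return pearson(mouth_openness, audio_energy) < threshold

# Genuine speech: the mouth opens when the audio gets louder.
genuine_mouth = [0.1, 0.8, 0.9, 0.2, 0.7, 0.1]
genuine_audio = [0.2, 0.9, 0.8, 0.1, 0.8, 0.2]

# Suspicious clip: audio loudness is unrelated to lip movement.
fake_mouth = [0.1, 0.8, 0.9, 0.2, 0.7, 0.1]
fake_audio = [0.9, 0.1, 0.2, 0.8, 0.1, 0.9]

print(looks_out_of_sync(genuine_mouth, genuine_audio))  # False
print(looks_out_of_sync(fake_mouth, fake_audio))        # True
```

A single heuristic like this is easy to fool, which is why the detection tools Ravindran calls for combine many learned signals rather than one hand-written rule.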
Role of Policy-Makers
Deepfakes and misinformation in healthcare present a complex challenge, as efforts to combat them can conflict with freedom of speech. Raising awareness and improving digital literacy are key remedies, empowering individuals to recognise altered content without relying on censorship. However, unequal access to technology and limited digital skills remain significant hurdles.
“Policymakers should support research on misinformation and encourage collaboration between communities, healthcare providers and technology companies. It is also important to address the root causes of misinformation and its spread, such as confusion between credible and unreliable sources. Social media platforms can help by highlighting trustworthy information while allowing space for scientific discussions and new findings. Encouraging collaboration with public health authorities and keeping fact-checking efforts updated are key steps to mitigating misinformation while safeguarding free speech,” Ravindran says.
Interestingly, deepfake technology can benefit healthcare too. Medical education and training, drug development and medical testing are some areas that stand to gain. That, of course, comes with its own challenges, such as ethics and data safety.
*Name changed to protect the individual’s privacy.