Deepfake X-Rays Are Fooling Radiologists. Yes, Really.
Raydiac Editorial
Editorial Team · 30 March 2026
A Radiology study found that AI-generated deepfake X-rays deceived radiologists 59% of the time when they were not warned. Even when alerted, their accuracy reached only 75%. The implications for diagnostic trust are massive.
Here is a sentence that should make every radiologist uncomfortable: AI-generated fake X-rays are now realistic enough to fool the people best trained to read them.
A study published in Radiology this month tested 17 radiologists from 12 centers across six countries. Half of the 264 X-ray images they reviewed were real. The other half were deepfakes, generated by ChatGPT and RoentGen, an open-source model developed by Stanford Medicine researchers.
The Numbers Are Alarming
When radiologists did not know synthetic images were in the mix, only 41% spontaneously identified the AI-generated X-rays as fake. After being explicitly told that deepfakes were present, their mean accuracy climbed to 75%, but individual performance ranged wildly from 58% to 92%.
Four multimodal large language models (GPT-4o, GPT-5, Gemini 2.5 Pro, and Llama 4 Maverick) were also tested. Their accuracy ranged from 57% to 85%. Notably, GPT-4o, the very model used to create some of the fakes, could not reliably detect them.
Why This Matters Beyond the Lab
Lead author Dr. Mickael Tordjman of the Icahn School of Medicine at Mount Sinai did not mince words about the stakes. Fabricated fractures indistinguishable from real ones could be used in fraudulent litigation. Hackers who gain access to a hospital's PACS could inject synthetic images to manipulate diagnoses, or simply to undermine trust in the entire digital medical record.
This is not a theoretical risk. As generative AI tools become more accessible and output quality improves, the barrier to creating convincing medical deepfakes drops to nearly zero.
Experience Did Not Help
One of the more surprising findings: years of radiology experience showed no correlation with accuracy in detecting deepfakes. A 40-year veteran was no better at spotting fakes than a recent graduate. The one exception was musculoskeletal radiologists, who demonstrated significantly higher accuracy than other subspecialists, likely because bone and joint anatomy offers more telltale structural cues for detecting artifacts.
What Radiologists Should Do Now
The study highlights an urgent need for authentication tools, digital watermarking of genuine medical images, and training programs specifically designed to help radiologists identify synthetic content. Departments should also audit their PACS cybersecurity protocols, because the most dangerous deepfake is the one nobody thinks to look for.
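What would image authentication look like in practice? One minimal version, sketched here as an illustration rather than anything proposed in the study, is to sign an image's pixel data with a keyed hash at acquisition and verify it at read time. The signing key, function names, and storage location (say, a DICOM private tag) are all assumptions for the sake of the example:

```python
import hashlib
import hmac

# Hypothetical department-held signing key; in practice this would live in
# an HSM or key-management service, never in source code.
SECRET_KEY = b"department-held signing key"

def sign_image(pixel_bytes: bytes) -> str:
    """Return an HMAC-SHA256 tag to store alongside the image
    (for instance, in a DICOM private tag)."""
    return hmac.new(SECRET_KEY, pixel_bytes, hashlib.sha256).hexdigest()

def verify_image(pixel_bytes: bytes, stored_tag: str) -> bool:
    """Recompute the tag and compare in constant time.
    False means the pixel data changed after signing."""
    return hmac.compare_digest(sign_image(pixel_bytes), stored_tag)

original = b"\x00\x10\x20"        # stand-in for raw pixel data
tag = sign_image(original)
print(verify_image(original, tag))            # untouched image verifies
print(verify_image(original + b"\x01", tag))  # altered pixels fail
```

A scheme like this does not stop a deepfake created outside the hospital, but it would flag any image injected or modified after acquisition, which is exactly the PACS-tampering scenario the authors warn about.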
The era of trusting what you see on a DICOM viewer, no questions asked, may already be over.