New AI tools can create fake X-rays so realistic that even expert doctors can’t tell them apart from real ones. A major medical study found that radiologists with decades of experience could only spot the fakes 41% of the time.
This opens the door to a scary new type of fraud. Someone could generate fake X-rays showing broken bones or injuries that never happened, then use them to scam insurance companies or file fake lawsuits. It’s essentially undetectable medical fraud at the click of a button.
Even Warnings Don’t Help Much
The study tested 17 medical specialists from six countries on 264 X-rays – half real, half generated with AI tools like ChatGPT and Stanford’s RoentGen model. When doctors had no idea fakes were mixed in, they spotted them correctly less than half the time.
Even when researchers warned doctors that fake X-rays were definitely in the batch, accuracy only jumped to 75%. One doctor caught just 58% of the fakes, while the best managed 92%. Surprisingly, having 40 years of experience didn’t help much.
AI tools themselves aren’t much better at catching fakes. When researchers asked systems like GPT-4 and Google’s Gemini to identify fake X-rays, their accuracy ranged from 57% to 85%.
What’s Next
This is just the beginning. As AI gets better at creating fake medical images, the fraud possibilities will expand beyond X-rays to MRIs, CT scans, and other medical tests. Hospitals are now worried about hackers injecting fake images into their systems to manipulate patient diagnoses.