
Researchers Build Tool to Detect AI-Generated Radiology Reports


Raydiac Editorial Team · 13 April 2026

University at Buffalo researchers have built the first AI system designed to distinguish between radiology reports written by humans and those generated by AI, using a dataset of 14,000 report pairs.

As large language models become increasingly capable of generating medical text, a team at the University at Buffalo has built what they believe is the first AI system specifically designed to detect AI-generated radiology reports. The research addresses a growing concern: if AI can write convincing medical reports, how do we ensure authenticity and accountability in clinical documentation?

The study design

Researchers constructed a dataset of 14,000 pairs of chest X-ray reports, with each pair containing one radiologist-authored report and one AI-generated version describing the same imaging findings. The AI-generated reports were produced using multiple large language models to ensure the detection system could generalize across different generation methods.

The resulting classifier achieved high accuracy in distinguishing human-written reports from AI-generated ones, though the researchers noted that as language models improve, the detection challenge will only become harder.
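The paper does not publish its model code, but the basic idea of a detector trained on paired human and AI reports can be illustrated with a toy sketch. The snippet below uses scikit-learn's TF-IDF features and logistic regression on a few made-up example sentences; the reports, labels, and model choice are all assumptions for illustration, not the UB team's actual method or data.

```python
# Illustrative sketch only: a toy human-vs-AI report classifier using
# TF-IDF features and logistic regression on invented example text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 0 = human-written, 1 = AI-generated.
reports = [
    "Heart size normal. Lungs clear. No acute disease.",
    "The cardiac silhouette is within normal limits and the lungs "
    "are clear bilaterally, with no acute cardiopulmonary disease.",
    "No focal consolidation. Small left effusion.",
    "There is no focal consolidation; however, a small left-sided "
    "pleural effusion is noted.",
]
labels = [0, 1, 0, 1]

# Fit a simple pipeline: character of the prose (word and bigram
# frequencies) is the only signal this toy model sees.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reports, labels)

# Classify a new (hypothetical) report.
pred = clf.predict(["Lungs clear. No effusion."])[0]
print("AI-generated" if pred == 1 else "human-written")
```

A production detector would of course train on thousands of real report pairs, as the UB dataset does, and likely use a fine-tuned language model rather than bag-of-words features; the sketch only shows the supervised-classification framing.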

Why this matters now

The concern is not hypothetical. With tools like GPT-4 and Claude readily accessible, there are documented cases of medical trainees using AI to draft radiology reports, sometimes without disclosure. In academic settings, this raises questions about training quality. In clinical settings, it raises questions about liability.

A radiology report carries legal weight. It is part of the permanent medical record, influences treatment decisions, and can be subpoenaed in malpractice cases. If a report was substantially generated by AI without physician oversight, the chain of clinical responsibility becomes unclear.

Implications for Indian practice

In India, where teleradiology companies process thousands of reports daily, the pressure to increase throughput creates a natural incentive to lean on AI drafting tools. This is not inherently problematic. AI-assisted reporting, where the model drafts and the radiologist reviews and signs, can improve efficiency without compromising quality.

The problem arises when AI-generated reports are signed without meaningful review, or when trainees submit AI-written reports as their own work. Detection tools like the one developed at UB could serve as quality assurance mechanisms, randomly auditing reports to ensure genuine physician oversight.

The bigger question

This research highlights a fundamental tension in radiology AI: the same technology that makes radiologists more efficient also makes it easier to bypass the cognitive work that training and expertise demand. The solution is not to ban AI from report writing, but to build transparent workflows where AI assistance is documented, disclosed, and auditable.

Expect regulatory bodies, including the NMC, to eventually weigh in on disclosure requirements for AI-assisted medical documentation.

Tags: AI, radiology reports, LLM, quality assurance, medical documentation, teleradiology, ethics
