AACR warns of unchecked AI in academic publishing: a call for transparency and accountability

Ali Fatemi

Director and Chief Medical Physicist at CHS | Founder & CEO at SpinTecx LLC

🔎 Unchecked AI in Academic Publishing: The AACR’s Warning Sign

The American Association for Cancer Research (AACR) recently published sobering findings: in 2024, 23% of abstracts and 5% of peer reviews submitted to its journals contained text likely generated by large language models (LLMs), yet fewer than a quarter of those authors disclosed the use of AI, even though disclosure is mandatory.

What tool did they use? A detector built by Pangram Labs, trained on millions of human-written documents plus synthetic “AI mirror” texts. It reports ~99.85% accuracy with a very low false-positive rate.

Why this matters:
• Transparency isn’t just academic correctness; it is essential for trust, reproducibility, and accountability.
• Misused or undisclosed AI can introduce errors, especially when methods are paraphrased or rephrased without care.
• There are equity dimensions: researchers who are non-native English speakers use LLMs more often, so proper support and disclosure matter.

What we should do moving forward:
1. Enforce disclosure rules rigorously (journals, conferences, reviewers).
2. Integrate AI-detection tools into the submission/review pipeline (a minimal sketch of what this could look like follows at the end of this post).
3. Educate researchers on what AI can’t do: nuance, domain-specific correctness, and context.
4. Standardize policies across publishers so the expectations are clear.

🔍 My take: If we don’t adapt our oversight and norms, we risk eroding core academic values. AI is a tool, not a substitute, and we need to use it with full transparency.

What do you think?
• Should journals issue retroactive notices or corrections when undisclosed AI use is found?
• How should authors use LLMs safely without risking misrepresentation?

Ali Fatemi, Ph.D., MCCPM, DABMP
Director of Medical Physics, Merit Health (CHS) Southeast
Professor of Physics, Jackson State University, USA
Adjunct Professor, Dept of Physics, Université Laval, Canada
Founder & CEO, SpinTecx
http://www.spintecx.com

#AcademicIntegrity #AIethics #Publishing #LLMs #ResearchPolicy #MedicalPhysics #AIinHealthcare #MedTech #HealthTech #DigitalHealth #SmartHealthcare #FutureOfRadiology #ClinicalAI #ClinicalInnovation #MedicalImaging #Radiology #VendorNeutral #GlobalHealthTech #Standardization #PrecisionMedicine #HealthcareLeadership #PatientSafety #StartWithWhy #HealthcareStartups #VentureCapital #AngelInvesting #VCFunding #StartupFunding #TechStartups #HealthcareAI #AcademicEntrepreneur #ClinicalEntrepreneurship #ClinicalTranslation #AIInRadiology
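To make recommendation 2 above concrete, here is a minimal sketch of how a detection gate might sit in a submission pipeline. Everything in it is an illustration only: the function detect_ai_probability, the threshold, and the routing labels are hypothetical placeholders, not Pangram Labs’ actual API or any journal’s real workflow.

```python
# Hypothetical sketch: a pre-screening gate in a journal's submission pipeline.
# detect_ai_probability is a placeholder for any AI-text detector; its name
# and signature are assumptions, not a real API.

from dataclasses import dataclass

# Assumed operating point; in practice, tune against the detector's
# measured false-positive rate.
AI_SCORE_THRESHOLD = 0.9

@dataclass
class Submission:
    manuscript_id: str
    abstract: str
    ai_use_disclosed: bool  # the journal's mandatory disclosure checkbox

def detect_ai_probability(text: str) -> float:
    """Placeholder: return P(text is LLM-generated) from a detector of your choice."""
    raise NotImplementedError("wire up a real detector here")

def screen_submission(sub: Submission) -> str:
    """Route a submission based on detector score and disclosure status."""
    score = detect_ai_probability(sub.abstract)
    if score >= AI_SCORE_THRESHOLD and not sub.ai_use_disclosed:
        # Likely AI text with no disclosure: a human editor follows up.
        return "flag_for_editorial_review"
    if sub.ai_use_disclosed:
        # Disclosed AI use proceeds through the normal policy-compliance check.
        return "route_to_disclosure_check"
    return "proceed_to_peer_review"
```

The design choice worth noting: the detector only flags, and a human editor makes the final call. Even at a reported ~99.85% accuracy, a nonzero false-positive rate means automated rejection would wrongly penalize some authors, so detection should trigger review, not verdicts.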

Ali Najafi

💠 In the name of the super-efficient God | Medical physicist | Radiotherapist | AI & Python developer | Systematic review researcher

1w

👏👏

This is an important conversation. I think transparency is key to maintaining trust, especially in research. Enforcing disclosure and using detection tools sounds like a solid start. As for retroactive corrections, that might depend on the severity of the impact. It’s a tricky balance, but we need clarity in how we integrate AI. How do you see authors navigating this landscape without crossing ethical lines?
