Date: February 22, 2025

Guest Skeptic: Nicholas Peoples, who is a medical student at Baylor College of Medicine in Houston, Texas. Nick’s career has been an exciting blend of global health and emergency medicine. In 2015, Nick was part of the first-ever class to study at Duke University’s new campus in China, where he earned a master’s degree in global health. He went on to spend a couple of years working for medical NGOs in Nepal and Malawi before deciding he wanted to become an emergency medicine doctor. Since then, he’s been at the top of his class in medical school – earning induction into the Alpha Omega Alpha and Gold Humanism Honor Societies. He won the prestigious Schwarzman Scholarship. This past year he published as first author in The Lancet, The BMJ, JAMA, and Academic Medicine. In typical EM fashion, he spends his spare time SCUBA diving and battling a crippling caffeine addiction.

This is another SGEM Xtra. Today, we are going to take a deep dive into an essential but often overlooked topic: inaccurate citations in biomedical research. Scientific citations are the foundation of modern research, meant to weave a web of knowledge that is accurate, credible, and informative. However, a startling percentage of these citations are flawed. Inaccurate citations can misrepresent studies, propagate errors, and even shape misguided policies and guidelines. Nick and his colleagues recently highlighted this issue in their paper published in The BMJ (Burden of proof: combating inaccurate citation in biomedical literature) and a related letter in The Lancet (Defensive scholarship: learning from academia’s plagiarism crisis).

Ingelfinger FJ. Seduction by Citation. NEJM 1976:

“The pages of any book, tract or article dealing with medicine are apt to be profusely sprinkled with numerical superscripts (or their equivalents) guiding the reader to a reference list. Not only does the liberal presence of such reference numbers impart an aura of scholarship, but their judicious placement after this or that assertion subtly suggests documented validity. But watch out—those little numbers may be no more than the trappings of credibility. The primary sources cited may be misquoted, inapplicable, unreliable and occasionally even imaginary.”

Nick was asked five questions about his study. Listen to the SGEM podcast to hear his answers on iTunes or Spotify. 


FIVE QUESTIONS


  1. How prevalent are inaccurate citations, and what types exist?
    • Pavlovic V et al. How accurate are citations of frequently cited papers in biomedical literature? Clin Sci (Lond). 2021 Mar 
    • Porrino JA Jr et al. Misquotation of a commonly referenced hand surgery study. J Hand Surg Am. 2008 
    • Greenberg SA. How citation distortions create unfounded authority: analysis of a citation network. BMJ. 2009 Jul
    • SGEM Xtra: Everything You Know is Wrong
    • Viera C. Harvard President Claudine Gay Resignation: What it means for the larger academic community? Am J Experts. 2024 Mar
    • Leung PTM et al. A 1980 Letter on the Risk of Opioid Addiction. NEJM. 2017 Jun 
  2. What are the underlying causes of inaccurate citations?

    • Authors citing papers they have not fully read, leading them to cite nonexistent findings or to misinterpret real ones.
    • Copying citations from other studies rather than reading the primary source, which propagates citations that are themselves inaccurate. This can become a long rabbit hole of sources citing other sources, with no evidence for the original claim to be found anywhere.
    • Bias or coercion in referencing, such as reviewers pressuring authors during peer review to add citations.
    • Insufficient gatekeeping against miscitation.
    • The academic community does not take miscitation seriously enough. 
  3. How does the rise of AI tools like ChatGPT influence citation accuracy?

    • AI tools can fabricate sources or generate plausible-sounding but inaccurate citations. What worries me is that numerous AI programs are being marketed to academics and PhD students that are designed to read and summarize papers so that researchers don’t have to do any reading at all! I think that is anti-science.

    • Efficiency is great in some areas, but in others, it is essential to take our time. You can’t learn a field by creating SparkNotes. At some point, you do have to sit down and read the extant literature, wrestle with figures and scatterplots and bar graphs, and look under the hood of a paper and make sure all the parts are in working order and pass muster.

    • I think we must be vocal about this. All it will take is one generation of scholars who grow up making AI SparkNotes out of everything, and then that will become the norm, and suddenly, there will be no value to an academic paper anymore. We will just be writing for a nonexistent audience. Only machines will be reading academic papers, and for the rest of us, our understanding will be based on what the machines show us (or don’t show us), which is another source of bias.

    • However, I also suggested that AI could be harnessed to detect citation errors by validating source accuracy and highlighting potential discrepancies during manuscript submission.

    • It’s not reasonable to ask peer reviewers to read 40 or 80 or 150 cited manuscripts for citation errors, but if AI could flag the instances with a high likelihood of being miscited and direct peer reviewers to look at those areas, that could be powerful.

  4. What strategies can reduce the rate of inaccurate citations?

    • Implementing a “Works Cited Statement,” where authors certify they have read and accurately represented all references.
    • Encouraging journals to require peer reviewers to flag suspicious citations and declare conflicts of interest related to cited works.
    • Using AI tools to verify references and citation accuracy before publication (a minimal sketch of this idea follows the list below).
  5. Why does this matter for clinicians, researchers, and the public?

    • It can undermine evidence-based practice.
    • It can delay or distort scientific consensus.
    • It can send scientists down blind alleys based on misinformation.
    • It can erode public trust in medical science.
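
To make the AI-assisted checking discussed in questions 3 and 4 concrete, here is a minimal sketch of one way a submission-time reference check could work. It only verifies that each cited DOI resolves to a real record in the public Crossref API and that the cited title roughly matches the registered one, so it can catch fabricated or mangled references, not misrepresented findings. The reference format, the example entry, and the similarity threshold are illustrative assumptions, not a description of the tools proposed in Nick’s paper.

```python
# Minimal sketch (assumed workflow): flag references whose DOI does not
# resolve in Crossref, or whose cited title diverges from the registered
# title. Catches fabricated/mangled references, not misrepresented findings.
import difflib
import requests

# Hypothetical reference list: (title as cited in the manuscript, DOI).
references = [
    ("How citation distortions create unfounded authority: "
     "analysis of a citation network", "10.1136/bmj.b2680"),
]

def check_reference(cited_title: str, doi: str, threshold: float = 0.6) -> str:
    """Look up a DOI via the public Crossref REST API and compare titles."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return "FLAG: DOI not found in Crossref (possibly fabricated)"
    titles = resp.json()["message"].get("title", [])
    registered = titles[0] if titles else ""
    similarity = difflib.SequenceMatcher(
        None, cited_title.lower(), registered.lower()
    ).ratio()
    if similarity < threshold:
        return f"FLAG: cited title does not match Crossref record: {registered!r}"
    return "OK"

for title, doi in references:
    print(doi, "->", check_reference(title, doi))
```

A tool along these lines could run automatically at manuscript submission and direct peer reviewers only to the flagged entries, rather than asking them to re-read every cited paper.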

Dr. Nick Peoples

“Research is challenging. Sometimes, the process stings. Although we cannot control whether our results will prove statistically significant or convenient, we can decide whether we contribute positively to growing the web of scientific knowledge—or whether we obfuscate and distort it. In an era where public trust in science is wounded by the scars of COVID-19 and strained by the uncertainty of artificial intelligence, what will our legacy be when others look back on the corpus of our work? Only when our efforts provide a balanced, accurate, and useful roadmap to other knowledge that is above reproach can we all stand, as Newton definitively put it, ‘on the shoulders of giants’.”

Here is my call to action to everyone involved in the medical literature (scientists, peer reviewers, decision editors, journals, and clinicians). Be skeptical of citations and advocate for improved citation standards. 

The SGEM will return in the next episode with a structured critical appraisal of a recent publication. Using social media, we continue to try to reduce the knowledge translation window from over ten years to less than one year. Ultimately, we want patients to get the best care based on the best evidence.


Remember to be skeptical of anything you learn, even if you heard it on the Skeptics’ Guide to Emergency Medicine.