Date: January 6, 2026
Guest Skeptic: Darren McKee is an author and speaker. He has served as a senior policy advisor and policy analyst for over 17 years. Darren hosts the international award-winning podcast, The Reality Check. He is also the author of an excellent, thought-provoking book called Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World (2023). The book lays out what AI is, why advanced systems could pose real risks, and what individuals and institutions can do to increase AI safety.
We have discussed AI on the SGEM a few times:
- SGEM Xtra: Rock, Robot Rock – AI for Clinical Research
- SGEM#459: Domo Arigato Misuta Roboto – Using AI to Assess the Quality of the Medical Literature
- SGEM#460: Why Do I Feel Like Somebody’s Watching Me – CHARTWatch to Predict Clinical Deterioration
- SGEM#472: Together In Electric Dreams – Or Is It Reality?
AI already touches the emergency medicine world through triage, documentation (AI scribes), imaging, and patient communications. You argue in the book that we are living in exponential times, that AI capabilities may accelerate, and that simple rules won’t reliably constrain advanced systems, all of which has implications for safety, bias, reliability, and public trust in healthcare.
The book is divided into three sections. I expanded on that structure so I could ask Darren questions about five different areas. Listen to the SGEM Xtra podcast to hear his responses:
Five Questions for Darren
- Origin Story & Stakes: The book’s introduction contrasts the confident historical skepticism about nuclear power with the speed at which reality overtook it. Give us a brief history of nuclear power. The book then pivots to today’s AI and uses the analogy of humanity’s “smoke detector” moment. Explain what that is and why you decided now was the time to write this book.
- Part I: What is Happening? In the first part of the book, you build a narrative from AI to artificial general intelligence (AGI) to artificial superintelligence (ASI). Can you provide some definitions of those terms and explain why they matter? Can you walk us through how current systems (large language models and image models) work at a high level? Why did emergent capabilities surprise even their builders, and why don’t we fully understand what’s happening under the hood of these machines?
- Part II: What are the Problems? You outline six core challenges: exponential progress, uncertain timelines (and expert disagreement), the alignment problem, why simple rules (à la “Three Laws”) fail, how control erodes as technology integrates into our lives, and how all this aggregates into societal risk. We are not going to go through all six, but could you explain the alignment problem? I would also like you to expand on the Three Laws.
- Part III: What Can We Do? The last two chapters get practical, discussing what institutions can do for safe AI innovation and what individuals can do to increase AI safety. Give us your top two or three institutional moves (transparency, evaluation, guardrails). How about your top two or three personal moves that listeners can make?
- AI in the Emergency Department: Bring it home for us to the emergency department if you can. When an AI-enabled tool is proposed for triage, documentation, or imaging support, what are the three questions every emergency clinician or leader should ask before adoption?
The SGEM will be back next episode with a structured critical appraisal of a recent publication. Our goal is to reduce the knowledge translation (KT) window from over 10 years to less than 1 year using the power of social media. So, patients get the best care, based on the best evidence.