Date: August 31st, 2021
Reference: McLean et al. Interphysician weight bias: A cross-sectional observational survey study to guide implicit bias training in the medical workplace. AEM Sept 2021
Guest Skeptic: Dr. Corey Heitz is an emergency physician in Roanoke, Virginia. He is also the CME editor for Academic Emergency Medicine.
Case: You are working in the emergency department (ED) with the new residents, one of whom is overweight. You overhear his colleagues wondering where he went, chuckling, and one of them comments that “he probably went for second breakfast.” Realizing that these residents are making fun of their colleague’s weight, you decide to address the issue.
Background: We have talked about biases many times on the SGEM. Usually when we use the term bias it is in the context of something that systematically moves us away from the “truth”. Science does not make truth claims and the term is used as a shorthand for the best point estimate of an observed effect size.
An example in the medical literature would be selection bias. This is when subjects for a research study are not randomly selected. This can skew the results and impact the conclusions. Another example would be publication bias. Studies with “positive” results are more likely to be published while those with “negative” results are more likely to end up in the bottom of the file drawer.
There are many other types of bias in the practice of medicine. Some of my favourite ones are anchoring bias, base-rate neglect, and hindsight bias. For a description of these and many more, check out Dr. Pat Croskerry’s list of 50 cognitive biases in medicine. You can also click on the codex for an extensive list of different biases.
This SGEM episode focuses on a kind of bias as defined by the common English language as “a particular tendency, trend, inclination, feeling, or opinion, especially one that is preconceived or unreasoned”. It is a sense of prejudice or stereotyping and the formation of a foregone conclusion independent of current evidence.
There are many biases in the house of medicine. We have discussed some of them on the SGEM. They include things like age, gender, socioeconomic status, race, and other factors. The gender pay gap is one of the topics that has been spoken about most on the SGEM. A paper by Wiler et al AEM 2019 showed females in academic emergency medicine were paid ~$12,000/year less than their male colleagues (SGEM#248).
The September 2021 issue of AEM is a special issue focusing on biases in emergency medicine. It includes articles on racial, ethnic and gender disparities. One specific topic jumped out as something that has not received much attention, weight bias. There is literature on physicians’ weight biases towards patients and patients’ weight bias towards physicians. However, there is limited information on physician-to-physician weight bias.
Clinical Question: What is the prevalence of interphysician implicit, explicit, and professional weight bias?
Reference: McLean et al. Interphysician weight bias: A cross-sectional observational survey study to guide implicit bias training in the medical workplace. AEM Sept 2021
- Population: Practicing physicians and physicians-in-training in North America
- Excluded: Those who did not consent; did not identify as physicians or physicians-in-training; or were not currently residing in North America.
- Intervention: Survey instruments measuring implicit weight bias (IWB), explicit weight bias (EWB), and professional weight bias (PWB)
- Comparison: None
- Outcome: Descriptive analyses along with correlative models
This is an SGEMHOP episode which means we have the lead author on the show. Dr. Mary McLean is an Assistant Program Director at St. John’s Riverside Hospital Emergency Medicine Residency in Yonkers, New York. She is the New York ACEP liaison for the Research and Education Committee and is a past ALL NYC EM Resident Education Fellow.
Dr. McLean was the guest skeptic on the SGEM#310 reviewing an article showing EM physicians are not great at performing the HINTS exam.
Implicit Bias:
Implicit bias is an unconscious and often subtle type of bias that is hard to pinpoint in ourselves and notoriously hard to measure.
Implicit weight bias (IWB) was measured using the Implicit Association Test (IAT) based on work from Project Implicit, which is a Harvard-based research organization. The weight bias IAT has been previously validated for the general population. It was adapted by adding the theme of physicians in the medical workplace. Project Implicit’s silhouette images of people with obesity were modified by adding stethoscopes and clipboards and adjusting clothing to look like scrubs, white coats, or professional clothing. The good and bad layperson descriptor words were also replaced with words used to describe good and bad doctors, based on Stern’s medical professionalism framework.
Explicit Bias:
Explicit bias is a more outward bias, expressed in words or actions, that is easier for us to pinpoint in other people and in ourselves.
The Anti-fat Attitudes Questionnaire (Crandall et al 1994), which was originally validated for the general population, was the tool used to assess explicit weight bias (EWB). It was adapted to focus on interphysician views and practices. The adapted items were kept as similar as possible to the validated original – for example, only changing the word “person” to the word “doctor” and leaving the remainder of the item unchanged, unless another tweak was absolutely necessary.
NOTE: The word “fat” as a descriptor is used in the questionnaire and to investigate explicit and professional weight bias. This word can be inflammatory, but it’s used with purpose. It’s meant to evoke an emotional response from subjects, which is necessary for this kind of research.
Physicians were asked 13 questions on a 7-point Likert scale (1 – strongly agree, 2 – agree, 3 – somewhat agree, 4 – neither agree nor disagree, 5 – somewhat disagree, 6 – disagree, and 7 – strongly disagree).
Professional Bias:
Professional bias was defined as the reduced willingness to collaborate with, seek advice from, and foster mutually beneficial professional relationships with physician colleagues with obesity.
To assess professional weight bias (PWB), a new scale of explicit questions that applied specifically to the medical workplace and the nuances of physician careers was created. Subjects were asked to use the same Likert scale to rate their agreement with several items. Each item was meant to capture participants’ views on physicians with obesity regarding collaboration, hiring, promotion, leadership opportunities, and other classic measures of professional success determined by group consensus within the study team.
Authors’ Conclusions: “Our findings highlight the prevalence of interphysician implicit WB; the strong correlations between implicit, explicit, and professional WB; and the potential disparities faced by physicians with obesity. These results may be used to guide implicit bias training for a more inclusive medical workplace.”
Quality Checklist for Observational Study:
- Did the study address a clearly focused issue? Unsure
- Did the authors use an appropriate method to answer their question? Unsure
- Was the cohort recruited in an acceptable way? Yes
- Was the exposure accurately measured to minimize bias? Yes
- Was the outcome accurately measured to minimize bias? Yes
- Have the authors identified all important confounding factors? Unsure
- Was the follow up of subjects complete enough? Yes
- How precise are the results? Fairly precise
- Do you believe the results? Yes
- Can the results be applied to the local population? Unsure
- Do the results of this study fit with other available evidence? Yes
Results: The survey was distributed electronically; 1,198 individuals opened it and 620 completed it. The mean age was 44 years, 58% identified as female, mean BMI was 26, 73% were Caucasian, 78% were emergency physicians, and 72% were attending physicians.
Key Result: A high percentage of participants demonstrated IWB against other physicians, while other results suggested some EWB and PWB do exist.
- Implicit Weight Bias (IWB):
- 87% of participants had a D-score above 0, indicating implicit weight bias against other physicians (34% demonstrated severe anti-fat weight bias and 31% moderate)
- Male sex and increasing age were both positively correlated with anti-fat weight bias
- Explicit Weight Bias (EWB) and Professional Weight Bias (PWB):
- Ranges and means on the rating scales showed variability, suggesting that bias does exist
- Positive correlation was seen with IWB (r=0.24 for EWB, r=0.16 for PWB)
- r=0.73 correlating EWB to PWB
- Male sex positively correlated with both EWB and PWB
1. Correlative Measurements: A lot of correlative measurements were used. Can you explain some of the differences between a D-score, r value, B value, and β value?
The D-score is a standardized difference calculated from IAT response time data. It ranges from (-1) to (+1), with 0 representing neutrality. In simple terms, a positive D score means you sorted faster when pictures of physicians with obesity were paired with negative words, and slower when physicians with obesity were paired with positive words. This is interpreted as representing implicit bias, with a (+1) indicating maximal anti-fat bias. The opposite is true for negative D scores, with (-1) indicating maximal anti-thin bias.
The r value represents strength of correlations. It also ranges from (-1) to (+1), with 0 representing no association, (-1) representing maximal negative association, and (+1) representing maximal positive association. Correlations simply represent the manner and extent to which two things are related. They don’t touch on causality.
The B and β values do a better job at that. The B value is the unstandardized regression coefficient, and basically it’s the slope of the line between the predictor and dependent variables. The B value gets at causality because in our more formal statistical models, we’re specifying one (or more) variables as predictors, and other variables as outcomes. The B value can be interpreted as “for every one-unit change in the predictor, we can expect a B change in the outcome.”
Last was the β value, which is a standardized regression coefficient. It also ranges from (-1) to (+1). It’s useful when you’ve specified multiple predictors in a model because it yields the relative effect of one predictor on an outcome, compared to the others. Plainly put, if multiple predictors are showing significant relations, but one of those has a bigger standardized coefficient compared to the others, you can infer that that predictor has a larger relative influence on the outcome than the other predictors.
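To make those definitions concrete, here is a minimal sketch in Python using made-up data (not data from the study) showing how r, the unstandardized B, and the standardized β relate for a single predictor. The variable names (iwb, ewb) and values are purely illustrative.

```python
# Illustrative only: simulated implicit (IWB) and explicit (EWB) scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
iwb = rng.normal(0.4, 0.3, size=200)                  # hypothetical implicit D-scores
ewb = 2.0 + 1.5 * iwb + rng.normal(0, 1.0, size=200)  # hypothetical explicit-bias scores

# Pearson r: direction and strength of the linear association (-1 to +1)
r, p = stats.pearsonr(iwb, ewb)

# Unstandardized B: slope of the regression line in the variables' raw units
B, intercept = np.polyfit(iwb, ewb, 1)

# Standardized beta: the same slope after putting both variables on a z-score
# scale; with a single predictor it equals r
beta = B * iwb.std(ddof=1) / ewb.std(ddof=1)

print(f"r = {r:.2f}, B = {B:.2f}, beta = {beta:.2f}")
```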
2. Implicit Association Test (IAT): For listeners and readers unfamiliar with the IAT, can you describe this for us?
The IAT is a fast-paced word and picture sorting game that uses the time needed to sort to make inferences about implicit bias. It’s based on the assumption that users will sort stimuli faster when the sorting rules are compatible with their associations. They’re taken through several trials in which they sort in slightly different ways. One trial may have users sort bad words to the left and good words to the right. Another may have them sort average-weight images to the left and overweight images to the right. Further trials flip the sides of the good and bad categories. But then it gets more complicated: next, the user might be asked to sort good words and overweight images to the left, and bad words and average-weight images to the right. Each permutation of category locations and pairings is done, so overweight images, for example, will be paired with good words and later with bad words. Response times are then used to calculate the D-score. If a user is an overall faster or slower sorter, it usually doesn’t matter, because their baseline speed is accounted for in the algorithm. And the beauty of the IAT is that honesty is not required.
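For readers who want a feel for how response times turn into a D-score, here is a simplified sketch with hypothetical latencies. It captures only the core idea (mean latency difference divided by a pooled standard deviation); the actual IAT scoring algorithm adds error penalties, latency trimming, and block-by-block calculations.

```python
import numpy as np

# Hypothetical reaction times (ms) from two sorting blocks for one participant
compatible = np.array([620, 580, 700, 640, 610, 655, 690, 600])    # e.g., obese-physician images + "bad doctor" words
incompatible = np.array([720, 810, 760, 790, 700, 830, 745, 770])  # e.g., obese-physician images + "good doctor" words

# Core idea: mean latency difference divided by the pooled standard deviation,
# so a participant's overall sorting speed largely cancels out
pooled_sd = np.concatenate([compatible, incompatible]).std(ddof=1)
d_score = (incompatible.mean() - compatible.mean()) / pooled_sd

# A positive D-score here means faster sorting when obese-physician images were
# paired with negative words, i.e., anti-fat implicit bias in this framing
print(f"D-score = {d_score:.2f}")
```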
If listeners are curious and want to try out the IAT, they can look up Harvard’s Project Implicit and can actually take the IAT themselves! This is an ongoing project that is continuously collecting data from the public on multiple types of bias.
3. Low r Values: Some of your r values for correlation are fairly low. Higher numbers (values closer to 1) indicate a stronger correlation. Can you explain how low r values still indicate a positive correlation between some of your findings?
That’s a good question and I’m so glad you asked! The r values between our three scales were 0.24, 0.16, and 0.73. When we look at how our demographic factors correlate with scale results, the values are even lower. In the world of clinical and medical education research, our r values would be considered egregiously low! But here’s what I learned: measuring bias is a completely different ballgame. These seemingly small correlations are actually in line with past literature on the topic [1-3]. The cause is multifactorial, but some common reasons are participants’ psychological factors and the methodological aspects of bias measurement [4].
Our methodologist is a social sciences researcher, and one of her favorite things to say is “humans don’t grow in petri dishes” – to elaborate, research on human subjects is especially tricky because humans are incredibly complex and nuanced. So many of the different types of biases are at play as participants answer these questions, and this is all “noise” that gets in the way of measuring the truth. It’s a harsh reality when researching people’s feelings, beliefs, and emotions – these are complex constructs with lots of “noise”!
4. Respondent Bias: Any survey literature is limited by respondent bias, that is, when respondents know what they’re being asked about, this may influence the honesty and accuracy of their answers. How did you address this limitation in particular when surveying for explicit and professional weight bias?
Another really excellent question, and it ties directly into what we were getting at with the above question regarding “noise.” This effect is amplified when you’re studying a group that has both the motivation and opportunity to control the expression of their biases. Physicians are definitely one of these groups! Respondent bias likely decreased the strength of association between IWB and EWB/PWB. We discussed this as a limitation, and we also did as much as we could to mask the topic of the survey in recruiting messages. We said it was about bias but didn’t specify the type. In terms of the timing of the survey, we considered doing the question scales first and the IAT last to avoid a priming effect from the images. But we ultimately decided against that because we anticipated many people would open the survey link on their smartphone and therefore not be able to take the IAT portion. They needed a physical keyboard to do the IAT, and we wanted them to know up front if it wasn’t going to work.
Another strength of this study when considering these types of bias is that we measured both implicit AND explicit biases and saw how these two types of bias related to each other. Implicit bias as measured by the IAT is less susceptible to the extra noise you are getting at in this question. This mixed-methods approach really allowed us to get more directly at the constructs we were interested in.
5. Unvalidated Tool: The PWB scale was developed by your team and has not been validated. How much confidence do you have in the tool and are there plans to validate it in the future?
We are working on plans for external validation, but we don’t have anything concrete yet! One of the secondary goals of this project was to provide initial evidence for the reliability and validity of this measure, and we were able to accomplish this. It showed more than adequate internal consistency (α = 0.92); exploratory factor analysis suggested the scale captured a single factor and that no items were unproductive or should have been removed; and it showed preliminary evidence of predictive validity, evidenced by its significant relations with other key study variables. We consider this work to be the initial validation for this new measure, we have high confidence in it, and we’re excited to be able to contribute a new tool to the field!
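For readers curious about the internal-consistency figure quoted above, here is a minimal sketch of how Cronbach’s α is computed from item-level responses. The data below are simulated purely for illustration; the α of 0.92 is the study’s own result, not something reproduced here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = Likert-scale items."""
    k = items.shape[1]
    item_variance_sum = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variance_sum / total_score_variance)

# Simulated responses: 300 respondents answering 8 items that all tap one
# underlying attitude, rated on a 1-7 scale
rng = np.random.default_rng(1)
latent = rng.normal(4, 1, size=(300, 1))
responses = np.clip(np.round(latent + rng.normal(0, 0.7, size=(300, 8))), 1, 7)

print(f"alpha = {cronbach_alpha(responses):.2f}")
```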
Is there anything else about your study you would like the SGEMers to know about that we have not asked?
Two quick things. First, there is an argument from both the general population and healthcare professionals that stigmatizing obesity may actually prompt weight loss and healthier choices. However, weight stigma has consistently been associated with worse mental and physical health outcomes.
Second, our methodologist was particularly excited about the associations revealed between IWB and EWB. These types of associations have been tested in multiple studies, with mixed results. Some have found that implicit bias doesn’t associate with explicit beliefs or actions, while others have found that they are related. This has led to some interesting debate in various fields of psychology about whether implicit bias can really be considered a predictor of explicit biases and behaviors. We found reasonable evidence here that these relations do in fact exist, and with the use of the PWB scale we were able to take it one step further and get at participants’ intentions to act on their biases (e.g., avoiding collaborating with or hiring physicians with obesity). So, we feel this study is an important contribution to the bias literature, as well as to the EM literature.
Comment on Authors’ Conclusion Compared to SGEM Conclusion: We generally agree with the authors’ conclusions.
SGEM Bottom Line: Forms of interphysician weight bias exist, including implicit, explicit, and professional bias. It is important to recognize these biases in order to overcome them and prevent negative impacts on patient care and on physician-to-physician relationships.
Case Resolution: You approach the group of residents and explain that you overheard their comments. You explain that making fun of or otherwise shaming their colleague threatens their relationships with him and could negatively impact their ability to work as a team to care effectively for patients.
Clinical Application: n/a
What Do I Tell the Residents: I would refer back to the SGEM Xtra on how Star Trek made me a better physician. I would tell the resident in my best Captain Kirk voice that there is no room for bigotry in the ED. You can leave your weight bias at home. In this ED it does not matter the size, shape, color, or gender of the physician. We all work together as a team, so patients get the best care.
Keener Kontest: Listen to the SGEM podcast for this week’s question. If you know the answer, send an email to thesgem@gmail.com with “keener” in the subject line. The first correct answer will receive a cool skeptical prize.
SGEMHOP: Now it is your turn SGEMers. What do you think of this episode on weight bias? Tweet your comments using #SGEMHOP. What questions do you have for Mary and her team? Ask them on the SGEM blog. The best social media feedback will be published in AEM.
Don’t forget those of you who are subscribers to Academic Emergency Medicine can head over to the AEM home page to get CME credit for this podcast and article.
Even if you are not a subscriber to AEM you can still claim CME credits for this episode. The content will always be free but there is a small fee for the CME. Thanks for supporting this free open access knowledge translation project.
Remember to be skeptical of anything you learn, even if you heard it on the Skeptics’ Guide to Emergency Medicine.
References:
- Cameron C, Brown-Iannuzzi J, Payne B. Sequential priming measures of implicit social cognition. Pers Soc Psychol Rev. 2012;16(4):330-350.
- Fazio R. Attitudes as object-evaluation associations of varying strength. Soc Cogn. 2007;25(5):603-637.
- Hofmann W, Gawronski B, Gschwendner T, Le H, Schmitt M. A meta-analysis on the correlation between the implicit association test and explicit self-report measures. Pers Soc Psychol Bull. 2005;31(10):1369-1385.
- Gawronski B. Six lessons for a cogent science of implicit bias and its criticism. Perspect Psychol Sci. 2019;14(4):574-595.