Why X’s think they’re the best

Commenter Alex pointed out this excellent post, Why Doctors Think They’re the Best, by Scott Alexander, who writes:

Ninety percent of drivers think they’re above-average drivers, ninety percent of professors think they’re above-average professors etc. The relevant studies are paywalled, so I don’t know if I [Alexander] should trust them. . . . But I am pretty sure ninety percent of doctors believe they’re above-average doctors. Here are some traps I’ve noticed myself falling into that might help explain why:

1. Your patients’ last doctor was worse than you. . . .
2. Your patients love you. . . .
3. Patients often come to you, but never leave you. . . .
4. You’ve probably successfully treated most of your patients. . . .
5. You know what you know, but you don’t know what you don’t know. . . .
6. Your victories belong to you, your failures belong to Nature. . . .
7. You do a good job satisfying your own values. . . .

Good stuff. And this makes me wonder: To what extent do these biases hold for other groups? Do doctors think they’re the best more than other groups think they’re the best? I expect there’s lots of relevant research on this topic in the judgment and decision making literature.

66 thoughts on “Why X’s think they’re the best”

  1. This sounds like human nature, and I reckon it applies across all groups, statisticians and scientists as well. But one thing is for sure: most everyone else is a worse driver than I am.

    • As a long-term commuting and utility cyclist, I find the term “good driver” risible, though I will admit that some drivers (~ 1%?) are not half-blind psychotics or sociopaths.
      Motor Mania

      I suspect that some of the Dunning-Kruger Effect research may apply here, though that is a scary thought within a profession.

      • Haha yes, and as a longtime road bicycle racer, I have always felt sure that most automobile drivers would not have survived the move from Cat 5 (aka “Crash 5’s”) to Cat 4 had they chosen to be bike racers;-)

  2. I see this issue as threaded; you can believe you’re better by adding points to your score without adjusting anyone else, or you can downward adjust the others (and thus maybe or maybe not need to lift your own score). Since there’s threading, you get more possible outcomes, so it adds dimensions. That means you can downgrade the average relative to you in some areas, while increasing your score in others. That function gets complex fast, and people are clearly not good at straightening out multiple logical threading possibilities, so they ballpark. That step involves projecting the past forward – very Zorn – and leads to some interesting observations.

    Example from my life. My car started to get hit while parked. My mirror was knocked off and left on the windshield. Someone banged the passenger doors. Someone banged the driver’s side rear. I didn’t hit anyone; they hit me while I was parked legally. But I thought about how I drive and realized that I had developed a habit of leaving extra room where possible: I never parked in tight spaces and, where possible, I would angle my car away from anyone else. It seemed logical, until I realized that a minor difference in care could cause these accidents, and that I could be encouraging less care by leaving more room. In the heads of the other drivers, that might even signal the opposite of my intent; they might see it the way people treat a person who pulls up their mask 20 feet away, as somehow unclean because the act of pulling up the mask an entirely safe distance away indicates a kind of asshole recklessness. The roots of the behavior look entirely different depending on perspective. I realized that parking in tighter places meant people took the time to keep the car straight. This might attract safer parkers, who then also might be the very people seeing extra room as a sign that I can’t be trusted. There are causal chains extending along both sides of the interaction. Thus, I could easily conclude that I am a better than average parker, except that being better than average is actually a negative because that attracts worse or less careful or maybe even angry parkers who hit my car. And thus I could conclude that I’m a terrible parker because I did not understand the basic social rules.

    Costco ranks cashiers. They first look at transactions per hour. That is why they switched to having more people at the registers to move the product back into a cart; more transactions paid for the people. But they also look at individual transactions because it’s not fair to penalize a station for having some difficult customers. Managers have historically located cashiers who can handle the unusual where the unusual might occur due to store layout. More dimensions to evaluate, more ways to rank.

    How many dimensions go into self-evaluation? A class on red figure vases is not going to be as popular as the sex lives of popes. Some subjects are abstruse. The scholarship in some fields is more removed from what is taught than in other fields. And so on.

    • Jonathan:

      Organizations often socialize people into their culture by feeding them plentiful helpings of their elite status and specialness simply by virtue of having been chosen. Your personal analogy for the individual-level processing of specialness can be turned slightly to reveal how organizations treat people who they think do not fit. If we replace you with a minority applicant who may not have the exact education or come from an area or school where most of the employees do, the organization then begins the process of smashing into their car to send the message. In your example, you figured this out over days, weeks, months, or years. In a typical workplace example, the unliked or unwanted employee is on their way out or has been essentially sidelined within a short period of time.

    • Wish I could figure out what’s going on with my computer that results in unexpected posting…

      At any rate…

      I think it would necessarily depend on how you define groups. Even assuming that you mean employment groups specifically, I would highly suspect that culture/national identity have an interaction effect. For example, there is supposedly a nationality signal in the DK effect.

  3. Andrew:

    Why do you and Scott Alexander believe that it is irrational (or inconsistent) for most people to believe that they are better than average? I see no inconsistency. For example, on average, a driver in the U.S. will be in three to four collisions in their lifetime. I have not been involved in a collision, and my bet is that most people have been involved in fewer than the average number of collisions. The average number of collisions is probably skewed by a few very bad drivers. This is probably true of many fields of competence. There are a bunch of boneheads and then there is the rest of us. Probably, the vast majority of surgeons don’t leave medical equipment inside patients, but some do, and therefore, there is a mean rate of error that is much greater than the mode. (A quick simulation at the end of this comment illustrates the arithmetic.)

    It is a common trope, like the Lake Wobegon “where all the kids are above average.” It’s funny, but if you think about it, probably all of the kids (or most) are above average, with the group average being dominated by one cluster of kids that have been totally neglected.
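
    A quick simulation of this arithmetic, with made-up numbers (the 90/10 split and the collision rates below are invented for illustration): when a small group of very bad drivers supplies a big share of the collisions, the mean sits well above the median, and roughly three-quarters of drivers really are “better than average” in the mean sense.

    ```python
    # Minimal sketch (all numbers invented): a 90/10 mixture where "ordinary"
    # drivers average ~2 lifetime collisions and a small group of "boneheads"
    # averages ~15.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    ordinary = rng.poisson(2.0, size=int(0.9 * n))
    boneheads = rng.poisson(15.0, size=int(0.1 * n))
    collisions = np.concatenate([ordinary, boneheads])

    print(f"mean collisions:   {collisions.mean():.2f}")          # ~3.3
    print(f"median collisions: {np.median(collisions):.0f}")      # 2
    print(f"share of drivers below the mean: "
          f"{(collisions < collisions.mean()).mean():.0%}")       # ~77%
    ```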

    • OK, but if they understood the difference, I suspect most doctors (and drivers, and parents) would think they were well above the median as well.

      • “if they understood the difference, I suspect most doctors (and drivers, and parents) would think they were well above the median as well.”

        Based on what? Ambiguous survey questions get asked and then we interpret the respondents’ answers in one particular way to make them look like fools and make us feel like the truly intelligent ones. I suspect that if you ask a clear question like “the median surgeon has lost X number of patients during an operation,” then the surgeon responding would know whether he had done better or worse than that number, and would answer accordingly. Maybe some would lie. But all of these “better than average” questions are just based on ambiguity, and do not tell me anything about whether people are deceiving themselves about their own competence. First, you have to define competence with some kind of metric and then ask people how they would do on that measure of competence.

    • Steve:

      It’s an interesting question. I’m looking again at Alexander’s statement that he is pretty sure that “ninety percent of doctors believe they’re above-average doctors.”

      Why is he so sure of this? I’m skeptical. I bet lots of doctors think they’re not so great! I wonder if there’s selection bias. Maybe the doctors who Alexander encounters are better than the average doctor.

      • Or are internal belief and external show very different, leading to the perception from outsiders (or fellow docs) that 90% of docs believe they’re above average when it’s actually that 90% of docs talk/act like they are above average? After all, who wants a surgeon who is just average?

        • I want an average surgeon. A “top” surgeon is either an ass who can’t explain what’s going on to me or too inexperienced to have encountered a variety of complexities and complications. Much of any surgery is the pre-op decision making and the postop follow-up. You want someone who can work with you for all of it.

          It’s just like choosing a PhD mentor — if you choose the “best” at getting published their ethics might be suspect (autofilling soup bowls) or they might not be able to actually teach and mentor. You want a well-rounded mentor who can provide the support you need when you need it.

        • I have often had the thought that one may not want the most talented athlete as a coach, since they may not have the experience and empathy to understand the needs of a less gifted athlete.

          I want someone who “understands” the game not necessarily one who excels in it.

        • I guess that depends on your definition of “top” and “average”. When it comes to one’s life and limb, I reckon (but am not sure) that most people want the best, whatever their definition of ‘best’ happens to be.

          But that actually wasn’t my point. This Alexander fellow says “But I am pretty sure ninety percent of doctors believe they’re above-average doctors.” So he is making a statement about his belief of what others believe. How might he gain information on what others believe about themselves? Possibly one way is by observing the way they talk or act. If people talk or act in a confident or sure manner, perhaps one would get the impression that they believe they are pretty good at what they’re doing. It’s probably a good idea to be confident if you do something risky, so maybe that is part of it (and hence my surgeon comment). Who knows.

          The point being that there seem to be any number of pitfalls that would misinform one’s beliefs about what others believe about themselves. How on earth would you really know?

      • Andrew:

        Agreed with selection bias. My father was a chief of social work in the VA. He always talked about how the VA ended up with a lot of the dregs of the medical community. I think the medical complex keeps screening out bad doctors until the worst eventually end up treating in prisons or some other place where they essentially can’t create big liability problems for their facilities. Alexander probably doesn’t know any of those folks.

        • Brings up a couple of anecdotes:

          1. I once read somewhere that the majority of psychiatrists were at the bottom of their med school class. The reason: almost no one wanted to go into psychiatry, so students at the bottom of the class were rejected by their preferred specialties for residency, and ended up in psychiatry because that was the only area that didn’t reject them.

          2. My brother-in-law’s first choice for specialty was family practice, but he didn’t make the cut for it, so he went with his second choice, pediatrics. After a few years, he decided to extend his practice to include young adults — namely, parents of his child patients, because so many of them asked if he could be their physician as well as their children’s.

        • As my dentist (who is also a doctor, prof, and entrepreneur) likes to say: What do you call the person who graduated last in their class at Med School? “Doctor”!

      • I am a doctor and I know personally and professionally other doctors both in private practice, at universities, and the VA. I’m pretty sure that 90% of doctors DON’T believe that they’re above average. I don’t think that I am.

        Doctors are quickly cured of the illusion that they’re great in the first few months of medical school when they see many other top-notch students doing better than them. It’s clearly apparent to the other students who has the ego and thinks (erroneously) that they are the best — they might be good at one thing but have large failings in other areas. These people are avoided by other students.

        More importantly, there is not one single skill or ranking for doctors (or any other profession). Some of my patients like me (I have the survey results to show it) but I don’t get the best responses. I try to be well rounded, but I’m not as good a surgeon as some others, I’m not as good at listening as some others, and I’m not as good at teaching as some others. Any small amount of self-reflection should make that clear.

        If you find a doctor who tells you that they are the best, or better than average in everything, I’d find someone else (just as you would with an egotistical professor or mechanic or police officer). I tell my patients that I’m experienced, compassionate, and will work with them. I keep my uncertainty and insecurities to myself, but don’t let my ego and hope blind me from referring a patient to someone else who might be better for them.

        • So you’re the best at being the most well rounded. See how silly this stuff gets. It’s like trying to squeeze mercury between your fingers.

        • “but don’t let my ego and hope blind me from referring a patient to someone else who might be better for them.”

          I think this is one of the most important characteristics of a good physician!

      • I wonder if part of it could be the fact that someone is more likely to become a doctor if they are above average on traits that are constantly salient throughout their education. Maybe being above average in elementary school, middle school, high school, and college leads to the straightforward assumption that “well, I’ve been above average on things that mattered to me in the past, so I’m probably above average as a doctor as well”.

        Kind of a deflationary hypothesis, but it could account for some of the variation.

    • If you define quality/competence by rare-ish or skewed events, then this would be right. And maybe that’s how people evaluate themselves when they answer surveys. But a more encompassing definition of quality, where average-to-poor drivers who have luckily never been in an accident have the same “rating” as average-to-poor drivers who have unluckily been in a couple accidents, would be more normally distributed.

    • Your observation reminds me of the “friendship paradox”: the vast majority of people in a social network have fewer friends than average. It’s because there are a few “influencers” who have a massive number of connections and act as hubs for the other network nodes. Perhaps the small number of bad doctors have lots of patients due to high turnover. Their ex-patients disperse fairly evenly to other doctors, and if those doctors are bad they disperse again, until they reach a median-to-good doctor and stay. These patients then tell their new, perhaps only mediocre, doctors how great they are compared with the patient’s own past experience, and because doctors assume their patients come evenly from all sorts of doctors, they rationally believe they are better. (A back-of-the-envelope version of this mechanism is sketched below.)

      This would fix the problem of median vs mean: doctors have an inflated view of the proportion of bad doctors, even after accounting for the skewed distribution of doctor quality, because the few bad doctors are over-represented in the distribution of unique patients and under-represented in the distribution of long-term patients.

      • *”doctors…rationally believe they are better” given their false assumptions about the distribution of their patients’ previous doctors being relatively unskewed.
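
        A back-of-the-envelope version of this turnover story (every number below is invented): assume each doctor keeps a full panel because patients can’t observe quality before the first visit, and bad doctors simply churn through theirs faster.

        ```python
        # Toy arithmetic for the turnover mechanism above (all numbers invented).
        n_bad, n_good = 10, 90                     # 10% of doctors are "bad"
        panel_size = 2000                          # patients per doctor
        turnover_bad, turnover_good = 0.50, 0.05   # annual share of the panel that leaves

        leavers_from_bad = n_bad * panel_size * turnover_bad
        leavers_from_good = n_good * panel_size * turnover_good

        share_bad_doctors = n_bad / (n_bad + n_good)
        share_switchers_from_bad = leavers_from_bad / (leavers_from_bad + leavers_from_good)

        print(f"share of doctors who are bad:            {share_bad_doctors:.0%}")        # 10%
        print(f"share of switchers fleeing a bad doctor: {share_switchers_from_bad:.0%}") # ~53%
        ```

        So with these made-up numbers, the typical doctor’s incoming patients carry “my last doctor was terrible” stories at roughly five times the actual rate of bad doctors, which is the friendship-paradox-style skew described above.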

    • Steve: I think your claim is mistaken, at least for any field in which mistakes incur liability. One doc can’t injure a bunch of people and drive up the avg because his liability cost will drive him out of business.

      Of course OTOH there’s no liability cost for bad economic forecasts, so economists are safe.

      • Jim said,
        “One doc can’t injure a bunch of people and drive up the avg because his liability cost will drive him out of business”

        I’m not convinced the injurious physician is always driven out of business. It may depend on how severe the injuries are, how many injured patients really realize they have been injured, how “good” the physician’s liability insurance is, etc.

        • “I’m not convinced the injurious physician is always driven out of business. ”

          I am. If the problem is discovered and it was serious, it won’t happen again because the person won’t be employable or insurable.

          Again, of course, the exception is academia.

        • Jim said, ” If the problem is discovered and it was serious, it won’t happen again because the person won’t be employable or insurable.”

          There’s a big “if” in your statement. For example, a patient may be aware of a problem, but not know where or how to report it — or may find the reporting process too onerous or intimidating to proceed with.

  4. I recall from social psych that people in general tend to be overly optimistic, to the extent that survey answers by people suffering from depression tend to be more accurate because their pessimism cancels out the innate optimism.

    The doc seems to be describing the classic fundamental attribution error. Then again, research shows that the less a person knows, the more they overestimate their expertise, so you get it coming and going: bad doctors overestimate how good they are, average doctors attribute their failures to circumstance, and good doctors accurately assess that they are better than other doctors. I guess the other 10% are depressed. :/

    • What you are describing is a floor effect being interpreted as something real. Of course someone who is lower on a scale being used to assess knowledge is more likely to overestimate, because there is more of the scale left to use.

      • James –

        > Of course someone who is lower on a scale being used to assess knowledge is more likely to overestimate

        You offer that as some kind of mathematical inevitability.

        Yes there is evidence supporting the DK effect, but some groups who have lower ability are more likely to underestimate their own abilities relative to those of their peers.

        There are cultural and psychological overlays. For example, a sense of one’s own agency can play a role. If a student who generally scores poorly on math tests studies particularly hard for a particular test and does well, they might be likely to attribute their success to luck rather than their hard study, because they don’t really believe they have the ability to do math well.

        • I don’t think it is a mathematical certainty, but if the scale is truncated at the top, then showing a bigger effect at the low end of the scale may simply be an artifact of the scale. It makes proving a DK effect difficult.
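
          A quick sketch of how that artifact can arise (parameters invented): give everyone an unbiased but noisy self-estimate of their percentile and clip it to the 0–100 scale, and the bottom quartile comes out “overconfident” while the top quartile comes out “underconfident,” with no self-deception built in. It reproduces the direction of the classic plot, though not the size of the published gaps.

          ```python
          # Bounded-scale artifact: unbiased noise + clipping to [0, 100] makes low
          # scorers look overconfident and high scorers look underconfident.
          import numpy as np

          rng = np.random.default_rng(0)
          n = 100_000

          actual = rng.uniform(0, 100, n)                            # true percentile
          self_est = np.clip(actual + rng.normal(0, 20, n), 0, 100)  # noisy, clipped estimate

          quartile = np.digitize(actual, [25, 50, 75])               # 0 = bottom, 3 = top
          for q in range(4):
              m = quartile == q
              print(f"quartile {q + 1}: actual {actual[m].mean():5.1f}, "
                    f"self-estimate {self_est[m].mean():5.1f}")
          # Bottom quartile overestimates by a few points and the top quartile
          # underestimates by the same amount, purely because the scale is
          # truncated at 0 and 100.
          ```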

        • Perhaps related –

          Dan Kahan likes to argue that people with more information (or “smarter” people) are more likely to be “motivated” in their reasoning. I don’t happen to think that is a general attribute so much as something context-dependent (on issues like climate change, for example).

      • What I’m saying is that there’s some fallacy or heuristic that will explain overestimation of one’s skill level, whether one is in reality low, average or high.

        As for what’s “real,” not every scale has a floor effect, you’d have to look at the particular scale before attributing a phenomenon to that. If I gave people a scale from 1 to 100 (essentially a percentile rank), with the instruction that the average doctor is ranked 50th, then there would be equal space above and below. And if the scale has a practical interpretation (like “I can lift X pounds”) then I suspect a ceiling effect would be more of a concern (I can certainly lift one pound, but I get more skeptical as the scale increases).

      • From the doc’s note: “Your victories belong to you, your failures belong to Nature.” The FAE suggests that doctors would believe they succeed because of their skills and fail because of situational factors.

        “Your patients’ last doctor was worse than you.” The FAE suggests a doctor would attribute a previous doctor’s failure to provide successful treatment to a failing of that doctor’s skill rather than to circumstances.

        By analogy to the example given in the wikipedia article you linked, Alice thinks Bob’s heart patient died in surgery because he’s a bad surgeon, but she thinks her heart patient died in surgery because the heart was just too far gone.

    • Michael said,
      “survey answers by people suffering from depression tend to be more accurate because their pessimism cancels out the innate optimism.”

      I have heard this called “depressive realism”.

  5. I’m always skeptical of studies demonstrating that people are even dumber than anyone thought. In many cases, people’s intuitions turn out to be surprisingly accurate, and many studies showing the depths of human stupidity have sneaky little “tricks” hidden in the procedures that generate the desired pattern of data without actually testing the hypothesis they claim to be testing. For example, the idea that “most people think they’re above average on X” is only absurd if X has a roughly symmetrical distribution. If X is skewed, then most people really can be above (or below) the mean.

    In general, the problem with using vague terms like “skillfulness” is that they can be reasonably interpreted several ways. So it isn’t actually feasible to compare responses to a real distribution. For example, I can be “more skillful” at driving than someone else because I’m able to perform parallel parking perfectly or swerve to miss a deer on the highway without losing control, yet I get tickets all the time for choosing to speed on the freeway. Someone else might interpret “skillfulness” to mean driving without getting tickets. So… the studies that reveal the most about this kind of phenomenon will be those that can tie “skillfulness” to a concrete, measurable variable.

    Fortunately, it doesn’t appear that the authors of the original paper to which Scott refers were being sneaky. That is, they asked university students to compare themselves to the other university students in the room. Surely university students are above-average drivers, so it’s good that the authors didn’t ask them to compare themselves to the general population. However, they still suffer from the use of a vague criterion. The best way to test this is to first estimate the actual distribution of “skillfulness” on a concrete measure. For example, you shouldn’t ask doctors to report whether they are of “above-average skill”. You should ask them if they make X (measurable) mistake more or less frequently than the average doc. Then, compare the distribution of responses to estimates of the actual underlying distribution.

    This approach would be a much better test of the hypothesis by comparing the distribution of responses to an estimate of the true underlying distribution, rather than by comparison to some imagined distribution that may or may not be symmetrical.
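
    A sketch of what that comparison could look like (the gamma error-rate distribution and the 85% survey figure below are both invented): pin the question to a concrete criterion, work out what share of respondents could truthfully claim to be better than the mean given a skewed distribution, and only treat the excess over that baseline as overconfidence.

    ```python
    # Compare a hypothetical "I'm better than average" survey result against the
    # share who could truthfully say so under a skewed error-rate distribution.
    import numpy as np

    rng = np.random.default_rng(0)
    n_docs = 50_000

    # Invented right-skewed error rates: most doctors make few X-type mistakes,
    # a small tail makes many (mean = 0.8 * 5 = 4 per year).
    errors_per_year = rng.gamma(shape=0.8, scale=5.0, size=n_docs)
    share_truly_below_mean = (errors_per_year < errors_per_year.mean()).mean()

    share_claiming_below_mean = 0.85   # hypothetical survey answer

    print(f"actually below the mean error rate: {share_truly_below_mean:.0%}")   # ~65%
    print(f"claiming to be below the mean:      {share_claiming_below_mean:.0%}")
    print(f"excess attributable to overconfidence: "
          f"{share_claiming_below_mean - share_truly_below_mean:.0%}")
    ```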

    • +1

      All of this type of research relies on the belief that some “true score” exists in reality for an abstraction that is crudely measured, and any deviation from that score is deemed biased reasoning.

      • Yes, but not all true scores are equally non-existent. Sure, there is no verifiably true distribution for anything in the real world, but that doesn’t mean the distributions we approximate from data aren’t useful or “true enough”. My point was mostly that the distribution of a vague criterion is nearly useless whereas a distribution of a concrete criterion that everyone measures the same way can be very useful.

        In principle, there’s no reason why we can’t get answers to the kind of question Andrew was asking.

        • “My point was mostly that the distribution of a vague criterion is nearly useless whereas a distribution of a concrete criterion that everyone measures the same way can be very useful.”

          “Can be very useful” –yes, but just because it *can* be useful does not assure that it will be used carefully/appropriately/ or usefully. (“The price of good conclusions is eternal vigilance,” or something like that.)

          Good point. Concreteness does not guarantee usefulness, nor does it guarantee that people who find it useful will use it appropriately. On that note, I should also admit that vague criteria can still be useful as well. For example, simply asking people whether they have “high self-esteem” is surprisingly useful for inferring other unmeasured traits and predicting some future behaviors. Yes, my lack of understanding of the processes generating responses to that item is nearly complete, but that doesn’t change the fact that it can be useful for making certain inferences.

          In the end, perhaps the best thing to do is to favor concreteness where possible? Maybe that’s why studies claiming things like “people think they’re smarter than they really are” annoy me: They could have easily tested the hypothesis with a concrete criterion, but chose a vague one anyway.

    • “people’s intuitions turn out to be surprisingly accurate”

      Yes “in many cases” – whatever that means – that might be true. But is it true quantitatively for the bulk of people in any given profession the bulk of the time? Surely not.

      If everyone’s so smart, why isn’t a single company following Amazon’s business model, while many companies that it competes with directly are going bankrupt and have had declining sales directly attributable to Amazon for over a decade? The bankruptcies of Sears and JCP, and the probably imminent bankruptcy of Macy’s, have been in view for a long, long time, yet their management didn’t respond in any significant way to the threat posed by Amazon. All of these companies were previously weakened by their inability to compete with Wal*Mart as well, so they have a tradition of failure going back several decades.

      I could say the same about McD’s, which has been alone at the top of the burger industry for five decades with only occasional serious competition. BK and Jack serve essentially the same fare. Why are they failing miserably in comparison to McDs?

      We *could* say these companies have brilliant management teams whose “intuitions turn out to be surprisingly accurate…in many cases”. Just not accurate enough often enough to survive.

      • I made no claims about the behavior of groups or institutions. We’re talking about the inferences of individuals and the story presented by social psychology (and behavioral economics) for over four decades has been something to the effect of “individuals are astonishingly stupid cuz we haz haphazardly designed monkey brains”.

        What I personally find interesting is the less popular literature showing when people are able to form accurate judgments with surprisingly little information. Whether any of that is useful for the goals of a company is beside my point, but you do raise an interesting question: Why don’t companies seem to be taking advantage of the possibility that the judgments of their employees could be used to guide decision-making at the high level? One reason might just be that the kinds of judgments that would be useful for the company as a whole aren’t the kinds of judgments people are skilled at generating. Another reason might be the potential difficulty of convincing shareholders that it was a good idea to base some decision on crowd-sourced data. Who knows, there’s lots of reasons why groups do dumb things even when they’re comprised of smart/capable people.

      • That might not be a matter of intelligence or even strategy, but of there really only being “room for one at the top”.

        Directly competing by trying to copy the market winner’s strategy often doesn’t work, since it comes off as a poor imitation of the “real thing” – e.g. Nook vs. Kindle, Bing vs. Google.

        In some fields turnover is easy — e.g. social networks. But I don’t think Amazon will be easily displaced. It would probably take a really large change, analogous to the shift toward online sales that made them successful in the first place.

        • > That might not be a matter of intelligence or even strategy, but of there really only being “room for one at the top”.

          > Directly competing by trying to copy the market winner’s strategy often doesn’t work, since it comes off as a poor imitation of the “real thing” – e.g. Nook vs. Kindle, Bing vs. Google.

          Your points remind me of the idea of “frequency-dependent selection” where the fitness of a particular phenotype declines as it becomes more prevalent in the population. In this case, the “phenotype” is the strategy of the company. When google was “growing up”, there wasn’t already an established google to compete against. A younger clone of google would probably not experience the same success if it were to start its development today.

        • Yes, exactly.

          It would probably take a shift in the economy/technology as significant as the rise of the Internet in the 90s to allow new companies to directly displace Amazon and Google.

          Which doesn’t mean those companies couldn’t fall from their positions by other means — just probably not by direct market competition, unless they were already weakened by internal company problems. (E.g. a different leadership could make the company change its way of doing business, so it became less attractive to many customers.)

          Government anti-monopoly actions or world events could also change things.

  6. Everyone is free to devise a metric for themselves.

    Doctor A has excellent readmission numbers for her knee surgeries, but works in an HMO that triages the difficult cases elsewhere. Say, to Doctor B, who has worse readmission numbers but helps people who others won’t.

    Yeah, you’ve got a clean driving record, but could you make a living driving a taxi in Rome?

    • > Everyone is free to devise a metric for themselves.

      I agree that this particular problem makes the results from many of these studies difficult to interpret. However, there’s no reason why researchers can’t require participants to provide estimates of how they compare on very specific criteria (e.g., readmission rates). To me, the interesting question is whether this phenomenon persists after addressing the concern you raise here.

    • Yes, this is a great point. I wonder if it’s been looked at? As a follow-up to the question of where people rank themselves, could there be a second set of questions about what precisely they were ranking?

    • Don’t be so hasty — there is another conclusion that could be drawn: that political scientists in general are below average (she said curmudgeonly). ;~)

  7. I think part of this is also due to what I call the Lake Wobegon Effect. Many of these skills that a lot of people think they’re above average at are multi-dimensional, not something simple like “height”. The dimensions an individual is good at are the same dimensions that the individual thinks carry the greatest weight in the overall skill. A doctor with good bedside manner but poor ability to identify rare diseases thinks that good bedside manner is more important than being able to identify rare diseases, and so would be a ‘better’ doctor than one who is otherwise equal but has those skills reversed. I looked at a real-life example of hitters in the American League using three measures of hitting skill and found that 73% of them could be considered above average if they were allowed to choose the weights of those measures.
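
    A toy version of that calculation with simulated data (three standardized, positively correlated metrics; not the actual American League numbers): once the metrics are centered at zero, the population-average composite is zero for any nonnegative weights, so a player counts as “above average” whenever at least one metric beats its mean.

    ```python
    # Choose-your-own-weights version of "above average" with simulated metrics.
    import numpy as np

    rng = np.random.default_rng(0)
    n_players, n_metrics, rho = 5_000, 3, 0.5

    # Standardized metrics with pairwise correlation 0.5; real hitting stats are
    # positively correlated, which keeps the share below the independent-metrics
    # ceiling of 1 - 0.5**3 = 87.5%.
    cov = np.full((n_metrics, n_metrics), rho)
    np.fill_diagonal(cov, 1.0)
    metrics = rng.multivariate_normal(np.zeros(n_metrics), cov, size=n_players)

    above_on_some_metric = (metrics > metrics.mean(axis=0)).any(axis=1)
    print(f"share who can pick weights that put them above average: "
          f"{above_on_some_metric.mean():.0%}")   # ~75%, in the ballpark of the 73% for AL hitters
    ```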

  8. I just think that it is kinda comical for any of us to think we are so much better than others. We are bred to be competitive, which also breeds dysfunctional behaviors and arrogance.

    I do the best I can to foster and support great thinking. That is enough of a challenge.

    • > We are bred to be competitive, which also breeds dysfunctional behaviors and arrogance.

      While I think it’s true that we have strong tendencies to act competitively, we also have strong tendencies to act cooperatively. Both are strategies for success that we all possess. To me, the interesting question is how different environments and experiences can lead people to prefer one orientation over the other.

  9. With professors and teachers it is the reverse:
    1. Your students’ last teacher was better than you. . . .
    2. Your students hate you. . . .
    3. Students always leave you, but rarely come to office hours . . . .
    4. You’ve failed a lot of your students. . . .
    5. You know what you know, but you don’t know what you don’t know. . . .
    6. Your victories belong to the administration, your failures belong to you. . . .
    7. You do a terrible job satisfying your own values. . . .

    Maybe this is why so many teachers/professors are unhappy.

  10. One less-known space is docs’ poor adaptation to technology. This extends from doctors’ interactions with badly designed EHRs (think workflows with tons of giant, incomprehensible dropdowns and options), to the systems they use to interact with insurance companies, to how they share information about their patients with other relevant doctors. One neuroscientist I know showed me how these systems worked and pointed out an implicit bias: many doctors think these methods and tech tools are good enough and don’t need improving. For example, before zocdoc came along, they all had you fill out the same forms over and over.

    • I think you mean “adoption” rather than “adaptation,” though both work, with somewhat different meanings. As far as doctors’ resistance to EHRs goes, there is another contributing factor – distrust. Insurers (and possibly other entities) can use this data against you (comparing “productivity” across doctors, benchmarking standards of care, etc.). While such practices can be used to improve health care, they can also be used more nefariously. I believe many physicians are distrustful of how this information will be used, and this contributes to a distrust of some “technological improvements.” Badly designed EHRs and all the usual human resistances to new things certainly play parts as well.
