“Bullshitters. Who Are They and What Do We Know about Their Lives?”

Hannes Margraf writes:

I write to make you aware of a paper with the delightful title “Bullshitters. Who Are They and What Do We Know about Their Lives?” [by John Jerrim, Phil Parker, and Nikki Shure]. The authors examine “teenagers’ propensity to claim expertise in three mathematics constructs that do not really exist” and “find substantial differences in young people’s tendency to bullshit across countries, genders and socio-economic groups.”

The study looks well made to me as a layman. However, being an avid reader of your blog, I have learned to be careful about papers making claims that confirm my own beliefs. I was wondering if you could have a look at the paper and perhaps discuss it on your blog. It would be a shame if the authors were themselves bullshitters.

My reply: I like the title and the article itself. My only comment is that a lot more could be done with these data.

The idea is clever: The researchers found a survey that included some questions where Yes responses could be interpreted as “bullshitting,” and then they looked at who are the people who gave those Yes answers:

To begin, we divide participants into four approximately equal groups (quartiles) based upon their scores on the bullshit scale. Those in the bottom quartile are then labelled ‘non-bullshitters’ (i.e. those young people who overwhelmingly said that they had not heard of the fake mathematics constructs) with the top quartile defined as the bullshitters (i.e. young people who claimed expertise in the fake constructs). Then, for these two groups, we compare how they responded to each of the self-efficacy, problem-solving, popularity and perseverance questions (and overall scale scores) described above.

This is good. And I also like comparing the high to the low group and discarding the middle (but I think it would be more efficient to discard something like 40% rather than 50% in that middle group).
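As a rough sketch of that design in Python (with hypothetical scores, since the paper's actual scale comes from a factor model), the quartile split and the narrower middle-discard variant look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
bs_score = rng.normal(size=1000)  # hypothetical continuous bullshit-scale scores

# Quartile split: bottom 25% = "non-bullshitters", top 25% = "bullshitters".
lo, hi = np.quantile(bs_score, [0.25, 0.75])
non_bs = bs_score[bs_score <= lo]
bs = bs_score[bs_score >= hi]

# The variant suggested above: discard only the middle ~40%,
# i.e. cut at the 30th and 70th percentiles instead.
lo2, hi2 = np.quantile(bs_score, [0.30, 0.70])
non_bs2 = bs_score[bs_score <= lo2]
bs2 = bs_score[bs_score >= hi2]
```

The narrower discard keeps 60% of the sample in the two comparison groups instead of 50%, which is the efficiency being traded off against the contrast between groups.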

But then they present their results as a bunch of tables of averages, regression coefficients, and statistical significance. It’s hard for me to make sense of all this, and what I really want to see are some graphs.

Just to be clear: my desire for graphs is not just a matter of presentation or packaging of results. My concern is that there’s a lot going on and I don’t know how to think about all of it. Visualization is crucial here.

47 Comments

  1. Z says:

    Obvious question: why would you believe the bullshitters’ answers to the other questions?

    • It isn’t necessarily about believing them. If someone says “I like to solve hard problems,” we don’t necessarily take that as a correct answer so much as an indication of a self-image of a powerful intellect. So one question we might be asking here is whether bullshit on math questions is an indicator of overconfidence in mental abilities.

      • Z says:

        how would you know it’s overconfidence?

        • Evidently if you’re confident that you know all about a thing that doesn’t exist… it’s overconfidence… As for other questions, I think it’s legitimate to look at a model in which bullshit answers indicate overconfidence, and see what that would predict, and then an alternative model where confidence is justified and see what that predicts… In the end you’ll probably find that you need more fake questions to better gauge bullshitting, and then one or the other model will dominate.

          • Z says:

            “Evidently if you’re confident that you know all about a thing that doesn’t exist… it’s overconfidence”
            No, it’s bullshit. They’re different.

            • It’s only bullshit if you know it’s bullshit. If you really do think you’ve heard of something but you haven’t… it’s not bullshit it’s overconfidence. So, in order to tease these two effects apart you’d need to run a big questionnaire, with many possibilities to bullshit, and then eventually get people to admit they were bullshitting… For example, if 1/3 of the questions were bullshit, and you just went down the line filling in random bubbles, it’d look like you were bullshitting a fair amount, but in fact you were just generating random numbers…

  2. John Hall says:

    You mention that you like that they broke the bullshitters into groups based on a score, but would prefer just having a high and low group. I was under the impression that it was not recommended to discretize data that is originally continuous without some kind of good reason (maybe the relationship is not linear?). Am I mistaken?

    • Dichotomizing is generally a bad practice; for example, it imposes a kind of linearity, implying that only the “endpoints” of the scale matter. Trichotomizing is the minimum number of discrete categories that could reveal a nonlinear relationship, so I think this is why Andrew likes the 3 categories.

      My preference would be to treat the BS score as a continuous predictor and show graphs of raw data vs. the score, with jitter in the score if needed (sometimes these scores fall only on, say, integer values, so the points would all collapse on top of each other)
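      A minimal sketch of the jitter idea, with made-up integer scores (in this hypothetical, the outcome of interest would go on the other axis):

```python
import numpy as np

rng = np.random.default_rng(1)
score = rng.integers(0, 4, size=500)  # integer BS scores 0..3 overplot badly
jittered = score + rng.uniform(-0.2, 0.2, size=score.size)  # spread points out

# e.g. plt.scatter(jittered, outcome, alpha=0.3) for some outcome of interest
```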

      • In this case there are 3 questions that are fake, so the scores could be 0,1,2,3

        The paper says:

        Critically, of these 16 constructs, three of them (items 4, 10 and 12) are fake; students are asked about their familiarity with some mathematics concepts that do not exist. We use participants’ responses to these three items to form our ‘bullshit’ scale. This is done via estimation of a Confirmatory Factor Analysis (CFA) model, with the three fake items treated as observed indicators of the latent bullshit construct. These MGCFA models are fitted using Mplus (Muthén and Muthén, 1998-2017), with the final student weights applied and standard errors clustered at the school level. A WLSMV estimator with THETA parameterisation was used to account for the ordered categorical nature of the questions (Muthén et al. 2015).

        Which sounds like mumbo-jumbo to me, because I’m not familiar with the software tool Mplus, the MGCFA analysis, the WLSMV estimator, or any of that (this is my least favorite kind of statistics, the “statistics as a big scientific calculator, with the operator’s main role being choosing which button to press”)

        What would I do? At first look, I think I’d use the answer to the 3 questions to inform a Bayesian parameter for a bullshit score, and then use the bullshit score together with things like age, sex, country to model the answers to the other questions of interest like overconfidence measures or popularity measures or whatever. Of course the Bayesian model would have feedback between the estimate of the underlying score and the model for what the underlying score implies about other responses, because that’s what Bayes does…
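        A toy version of that idea, with the three fake items coded as binary claimed-familiarity responses (a stand-in, not the paper’s model): with a Beta prior on a student’s underlying bullshit propensity, the posterior is conjugate and easy to compute:

```python
from scipy import stats

# Each student sees 3 fake items; y = number of fake concepts they claimed to know.
# Toy model: y ~ Binomial(3, theta), theta ~ Beta(1, 1)
# => posterior for theta is Beta(1 + y, 1 + 3 - y).
def posterior(y, n_fake=3, a=1.0, b=1.0):
    return stats.beta(a + y, b + n_fake - y)

post_all = posterior(3)   # claimed familiarity with all three fake concepts
post_none = posterior(0)  # claimed none of them

# The posterior mean is the shrunken "bullshit score" that would feed the
# second stage (modeling the other responses given age, sex, country, etc.).
# The full Bayesian version would let that stage feed back into the score.
```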

  3. I read the first couple paragraphs of Andrew’s post and then clicked the article. I then proceeded to scroll through the PDF looking for the graphs… when I reached the end without finding a single one, I concluded that I knew who the bullshitters were…

  4. John Doe says:

    Are the bullshitters better at solving actual real tasks tho? Because that would mean that while they might be bigger bullshitters, their higher confidence is not entirely fake.

    • This is an interesting point. Compare to my experience below: https://statmodeling.stat.columbia.edu/2019/10/23/bullshitters-who-are-they-and-what-do-we-know-about-their-lives/#comment-1147000

      Bullshitting itself can be a skill that allows people to accomplish things by breaking down social barriers to accomplishment.

      For example, suppose a difficult task comes along at work, there are two newish young employees. Jr1 is very skilled at a certain type of thing, and Jr2 is moderately skilled but a decent bullshitter…

      Boss: I really need someone to fix this thing that is producing problems for us with our customers…

      Jr1: Hmm, I really think that’s going to be a hard task that will require a bunch of resources and at least a month to accomplish (this is an accurate assessment).

      Jr2 bullshitter: Hey, that sounds like an interesting problem. I bet I can do that; we just need to XYZ and PDQ. I think I can get it to you by Monday…

      Jr2 gets the task… on Monday: well, there are some hangups, but I’m making a lot of progress; I think maybe I can complete it some time this week.

      Jr2 on Friday: Wow, the task is a lot harder than it looks, but if I could get Jr1 to work for me on this task I’m sure I could make a lot more progress…

      After a month or so Jr1 has done most of the work, the total cost is even bigger than the initial estimate, and it’s taken 20% longer. If the decision maker had accepted Jr1’s original estimate they probably wouldn’t have undertaken the project at all, but by letting the boss think things weren’t so bad, Jr2 causes the project to happen, and gets credit for accomplishing it, even though maybe Jr1 did most of the work…

  5. Dale Lehman says:

    I’m a bit surprised by Andrew’s generally positive view of this study. Aside from Daniel’s insightful dismissal for lack of graphs, and John’s query about discretizing a continuous variable, I am struck by an issue concerning measurement. I am not convinced that someone claiming familiarity with a concept such as “proper numbers” is a bullshitter. They may well have a different concept in mind and have erroneously applied that to the made-up term. This could reflect overconfidence, bullshitterness (how’s that for a new term?), or greater knowledge of mathematics than other people (despite making the error). Andrew repeatedly talks about the importance of measurement, and I am not at all convinced that the measures here accurately measure what they claim. Then, when it gets covered up with endless NHST, I’m not at all sure what we get from that.

    • The proper number thing in particular was one I was not quite sure about. There are “proper fractions” and after googling I realized I was remembering the concept of Perfect Number.

      I don’t think someone with a greater knowledge of mathematics would be confused by “subjunctive scaling” or “declarative fraction” though, those don’t sound like anything I can think of.

      A Bayesian model of a latent bullshitter score could handle this to some extent. Let the answer to “proper number” provide less information about the bullshitter score for example.

      • Also, the concept of a “proper fraction” is one of the least mathematical things I can think of. It’s a sort of Sunday-school-teacher idea of what fractions are for. Similarly, although I think sqrt(2)/2 might have been an easier representation to calculate with back in the days of books of tables of numbers, 1/sqrt(2) would obscure the meaning less in most algebraic settings, and the numerical calculation is no different these days.

      • Martha (Smith) says:

        Daniel said, “I don’t think someone with a greater knowledge of mathematics would be confused by “subjunctive scaling” or “declarative fraction” though, those don’t sound like anything I can think of.”

        I believe that both “subjunctive” and “declarative” are words involved with grammar, etc. — I wonder if they were chosen deliberately for this reason.

        • I’m guessing yes. But I suppose this could be confusing. If you know a fair amount of both mathematics and language, you might know just enough to get confused here. Like, you’ve heard of “subjunctive” but you can’t remember whether it was a math concept or a grammar concept, so you put down some level of familiarity. You’re not bullshitting, you’re just somewhat knowledgeable but confused…

          It might have been better to choose some truly meaningless but plausible sounding words

          renicatative scaling and unrefortean fraction for example.

        • Another option would be to include something where it’s a real math concept but would be extremely rare for a student at this level to actually know something about. Maybe “Mergelyan’s theorem” or “The Runge Phenomenon” or “Almost Sure Convergence” or something. You might get an epsilon fraction of students who really did know something about those things, but it’s small enough that you can consider epsilon=0 without significant error.

    • I see your points, Dale. While I’m at it, I will claim Cluelessness. That makes me confident that I won’t have to embarrass myself.

      • I can only speak for myself. I have rarely if ever claimed I knew a lot, because I was surrounded by those who I assumed knew a lot more than I did. That is to say that I deferred to others. But in having had to deal concretely with opinions and claims of evidence in various domains, I came to the speculation that each of us, inadvertently or deliberately, makes claims for which we are not sufficiently taken to task. I am willing to be put to the test if it means preventing and reducing harm. If I’m found wrong, I am relieved, actually. Then again, I don’t claim to be an expert. My interest in statistics, in particular, is a hobby. I am more aligned with Philip Tetlock’s exercises in judgment, as it relates most to foreign policy decision-making.

        Plus, I think stories play a wide role in decision-making. This point was brought home to me by many. Stories contain elements of fiction and non-fiction. And perform the function of animating new ideas. Making up stories was part and parcel of the oral tradition through centuries. I digress maybe. But ‘Bullshit’ covers a wide span of knowledge acquisition and dissemination.

    • Andrew says:

      Dale:

      I’d characterize the study as having a cool idea but with some problems in execution. One reason for my generally positive view is that it seems to me that someone could improve the execution while preserving the cool idea.

      • I agree with this actually. I have experienced working with or for top rate bullshitters, and it’s actually interesting to see how effective they are at getting things they want. A lot of the barriers to accomplishing things are basically social. Person A really wants something, but doesn’t believe that person B can provide it. Person B *can* provide it but is not capable of convincing person A… nothing is accomplished even though potentially much of benefit could have happened.

        Usually, the more capable person B is (having spent a lot of time and effort to become good at something), the less capable they are of convincing people, which is a separate skill… unless they’re a bullshitter. It really is a skill that can be both misused and used well.

        I suspect the profession of Bullshitting is known as “trial lawyer” ;-)

        One thing I’ve found with first rate bullshitters is that if you call their bluff they often acknowledge it without ceremony and move on to the next bit of BS. Bad bullshitters dig in and defend their position like a company of Marines, which reduces their credibility and sidetracks them from their goals. Good bullshitters step around an attack and keep moving forward… it’s as simple as “yeah, actually you might be right, but what I really wanted to focus on was xyzpdq”

        • Clarification Note: I’m not saying anything about Marines here, except that when necessary they can dig in and defend a position thoroughly.

        • Great post-Daniel. That is my experience. In fact, some subsets maintain a group of enablers who help them keep up their BS. Even self-righteous truth-telling claimants maintain their flock.

          Now, interestingly, I see some of these same behaviors in self-righteous truth-telling claimants and ideologues. They tend to be experts in moving the goalposts. I raise this because some of them are wrong in some dimension of their thinking, and we don’t want to face their fury or passive-aggressiveness if we point it out. I had this experience about a decade ago. It’s often an untreatable condition. It’s also one reason why improving the epistemic environment is the challenge of our times.

        • Great Post Daniel. I don’t think BSing is the province of any one field or profession. The flip side is self-righteous truth-tellers who, in some cases, accrue acolytes who guard their leader relentlessly against all challenges. After all, even truth-tellers may be inaccurate in some dimension of their work. Some of the same behaviors you identify, I saw two decades ago. Moving the goalposts is something that really irks me.

          • This aspect of BS isn’t entirely bad, or entirely good. It’s good to have someone who can un-ruffle the feathers so that people who know what they’re talking about can avoid unnecessary conflicts. So having a BS person “on your team” can be a benefit. Of course, if your team is mostly BS, there are environments of high information asymmetry where that benefits the BSers and harms others… and there are environments of fairly symmetric information where everyone realizes that it’s BS and starts ignoring that team…

            This dynamic has played out in the capital markets a bunch in the last 20 years… the DotCom bubble was a lot of BS, first being successful then eventually being outed, similarly for the mortgage bubble, similarly for the current tech bubble, with WeWork and Uber and Netflix and etc having shocking quantities of BS built into their valuation which is starting to shake out…

            But, while Amazon and Google may have been helped by a bunch of BS, they both came out of the DotCom era ultimately healthy, because the BS was there to lubricate what ultimately was a real value proposition…

        • jim says:

          “I suspect the profession of Bullshitting is known as “trial lawyer” ;-)”

          I think all of the leading presidential candidates (except Bernie?) – from both parties – have already been caught. Let’s see: we have it almost daily from Trump; uncle Joe told some tall tales about war; we all know now that Warren isn’t a Cherokee. I can’t think of a Bernie example right off hand… Is there one?

          We know Trump never claims a lie – he lies intentionally to illustrate what is true, which negates(?) the lie (it really did rain in Alabama that day); Uncle Joe usually clarifies that he wasn’t actually lying, but was misunderstood; Warren owned up to the statement – but like any good lawyer claimed she had no idea that it was false or that giving a false statement in the matter was relevant.

          So there is one profession where it’s easy to find bullshitters…

  6. Table 6 is particularly problematic. There are some small differences in responses to things like “I make friends easily at school” or “I feel happy at school” and so forth, but the standard error estimates are quite small, so each of these is highlighted as a statistically significant difference…

    The model assumptions that go into this are non-obvious without basically looking at the code. It doesn’t surprise me that people who respond saying that school is ideal and they are happy there and enjoy it and so forth would be people who really do know more stuff about math, and maybe confused the names of concepts, as in Dale’s point above.
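    The large-n mechanism is easy to demonstrate with simulated data (not the paper’s): a practically trivial gap in means clears conventional significance thresholds purely through sample size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200_000  # large-scale-survey-sized groups (hypothetical numbers)
happy_bs = rng.normal(3.02, 1.0, n)   # "I feel happy at school", bullshitters
happy_non = rng.normal(3.00, 1.0, n)  # nearly identical mean, non-bullshitters

t, p = stats.ttest_ind(happy_bs, happy_non)
# A 0.02-point gap on a 4-point-style scale comes out "statistically
# significant" here, even though it is substantively negligible.
```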

  7. I’ll read the entire paper this evening. Looks very interesting. I hate to prejudge the content based on comments here. As you know some of you are weely weely persuasive.

  8. jd2 says:

    This reminds me a lot of validity indicators in psychometrics. I forget if it’s the CPI or the MMPI but there’s a question I’ll always remember: “Raymond Kertesz is one of my favorite poets” or something like that. Basically it’s just a line of questions to measure impression management, or how much the subject may be exaggerating. This study has a much saucier title, though.

    Also, a big weakness to me seems to be in their claim that immigrants are higher on the bullshit scale. If English is my second language and I read “Proper Number” then maybe I think to myself “this must be the English word for ‘even’ or ‘prime’ or ‘perfect’.” Or perhaps–“hey I don’t know what subjunctive means, but I do know all these other math concepts so I probably do know the concept.”

    https://www.upress.umn.edu/test-division/mtdda/mmpi-2-symptom-validity-scale-fbs

  9. Dzhaughn says:

    The study’s conclusions are invalid because they subjunctively scaled the response rates, rather than treating them as declarative fractions.

    Any correlation with affinity for journalism?

    https://www.washingtonpost.com/business/2019/04/26/rich-guys-are-most-likely-have-no-idea-what-theyre-talking-about-study-finds/?arc404=true

  10. Ethan Bolker says:

    Tangentially related – but a good reason to point people to Bill Perry’s essay

    Examsmanship and the Liberal Arts: An Epistemological Inquiry

    It begins:

    “But sir, I don’t think I really deserve it, it was mostly bull, really.” This disclaimer from a student whose examination we have awarded a straight “A” is wondrously depressing. Alfred North Whitehead invented its only possible rejoinder: “Yes sir, what you wrote is utter nonsense. But ah! Sir! It’s the right kind of nonsense!” Bull, in this university, is customarily a source of laughter, or a problem in ethics. I shall step a little out of fashion to use the subject as a take-off point for a study in comparative epistemology. The phenomenon of bull, in all the honor and opprobrium with which it is regarded by students and faculty, says something, I think, about our theories of knowledge. So too, the grades which we assign on examinations communicate to students what these theories may be. We do not have to be out-and-out logical-positivists to suppose that we have something to learn about “what we think knowledge is” by having a good look at “what we do when we go about measuring it.” We know the straight “A” examination when we see it, of course, and we have reason to hope that the student will understand why his work receives our recognition. He doesn’t always. And those who receive lesser honor? Perhaps an understanding of certain anomalies in our customs of grading good bull will explain the student’s confusion.

    You can, and should, read the rest at https://bsc.harvard.edu/files/bureau-of-study-counsel/files/bsc_pub_-_examsmanship_and_the_liberal_arts.pdf

  11. Steve says:

    I think every analysis of bullshit should begin with Harry Frankfurt’s little book On Bullshit. It’s a great read, and really clarifies exactly what BS is as opposed to other forms of nonsense.

  12. Tom says:

    Running the study at schools does have the disadvantage of working with amateur bullshitters, so it would be really interesting to see this sort of study done with some professionals. A friend of mine ran an informal study on his manager – all the team members agreed to use the same invented acronym in their performance reviews. The team manager was using the term in presentations to senior management within a week.

  13. Thomas says:

    Thanks for pointing this out!
    – the difference between anglophone countries (Table 2) is telling, with the North Americans at the top and the Scots and Irish at the bottom. Either Hume, Smith and co. have left a trace, or the Enlightenment was an early indicator of low tolerance for BS.
    – I did not find the obvious analysis – the correlation between the BS score and self-reported competency on the legitimate items

  14. Steve says:

    Proper numbers exist. Consider improper fractions and proper fractions (rational numbers). The former is a/b with a, b positive integers and a >= b; the latter has a < b. Rational numbers are numbers. Then proper fractions are proper numbers.

    If I were in high school, I probably would have thought about it for a few minutes, come to that conclusion, and marked “totally understand the concept.” Does that mean I’m a bullshitter?

    • proper vs improper fractions are properties of *representations* of numbers, not of the numbers themselves. The numbers themselves aren’t proper vs improper.

      An improper fraction is something like 4/2 which is 2, so is 2 an improper number?

      A proper fraction is something like 1/2, so is 0.5 a proper number? How about sqrt(1/4), or lim x->inf 1/(2+1/x)?

      It’s the *representation of the number* as a fraction that is either proper or improper. Not the number itself. So, are you a bullshitter… a bit.. but in a good way, like from that Harvard essay linked above ;-)

  15. Anand says:

    I wonder how much of their results are informed by the subject matter. After all, math is a subject where males might arguably feel more pressure to signal achievement than females. Ditto also for “nerds” vs “jocks”. I would be interested in seeing a similar test done where the questions are in a subject matter that is more universal or even obviously biased in another direction, just to see if the numbers still hold up.

    Examples include:

    – “Are you familiar with the following celebrities?”
    – “Have you heard of the following brand names?”
    – “Are you familiar with the following knitting patterns?”
