Researchers demonstrate new breakthrough in public relations, promoting a study before it appears in Psychological Science or PPNAS

Ivan Oransky pointed me to this press release:

Study finds honesty varies significantly between countries

Research from the University of East Anglia (UEA) has found that people’s honesty varies significantly between countries.

It also suggests that honesty is less important to a country’s current economic growth than during earlier periods in history.

The study examined whether people from different countries were more or less honest and how this related to a country’s economic development. More than 1500 participants from 15 countries took part in an online survey involving two incentivised experiments, designed to measure honest behaviour. . . .

Something about that British spelling, it just makes the whole study seem classy, somehow.

Anyway, Ivan asked me what I thought of this study. I clicked through and took a look at the paper, and here’s what I said:

It’s cute. Of course it could be tricky to compare managed online panels across countries, but, sure, why not? I’ve seen worse ways of using a survey to get a headline. But this bit, “honesty is less important to a country’s current economic growth than during earlier periods in history,” that’s just ridiculous. First, whatever you want to say about their measurement procedure, at most it’s a measurement of honesty now, it says nothing about honesty during earlier periods in history. Second, a correlation says nothing about whether honesty is more or less “important” to a country’s economic growth. It’s a correlation for chrissake. Hemlines and all that.

On the plus side, there’s something charming about the fact that they’re pushing this on the media right away, not even waiting for the publication in Psychological Science or PPNAS.

12 thoughts on “Researchers demonstrate new breakthrough in public relations, promoting a study before it appears in Psychological Science or PPNAS”

  1. Note the following about the experimental design:
    “Firstly, they were asked to flip a coin and state whether it landed on ‘heads’ or ‘tails’. They knew if they reported that it landed on heads, they would be rewarded with $3 or $5. If the proportion reporting heads was more than 50 per cent in a given country, this indicated that people were being dishonest.”

    Overall, they report that 71% of the subjects reported heads (21 percentage points above the “honest” benchmark of 50%; since honest subjects flip heads half the time, this implies that roughly 40% of subjects cheated). One interpretation of this is surely that it measures dishonesty. However, it may also measure some confounding factors – how able the subjects are to figure out how to “game” the system, how important the monetary incentive is, etc. So, another approach might be to estimate the probability that Chinese participants (with the highest measure of cheating) cheat more often than the US participants (with one of the lowest cheating rates – 64% reporting heads). I estimated this as about 88% – high, but far from the statistical significance reported in the paper.

    Using a 50% benchmark will give different results than using the 71% benchmark. I’m not sure which is really appropriate. I just finished reading Todd Rose’s book, The End of Average, and he makes the point that there is no such thing as an individual who is “honest” or “dishonest” as an essential personal characteristic. Rather, we are honest or dishonest in particular contexts. What has always concerned me about experiments such as this one (which is well-done in a number of respects) is that the context is so artificial. If the Chinese participants rampantly “cheat” in this experiment, does it follow that they cheat in business affairs? There is a saying that, in China, contracts are a few mm thick and trust is very thick, whereas in the US, trust is almost nonexistent and contracts are a thick stack of papers (poorly paraphrased without being able to use nonverbal communication). “Cheating” in this experiment is victimless whereas cheating in most important contexts comes at someone’s expense. I’m not sure I believe that one context maps well to the other.

    As I said, there is much to like about this paper, and focusing on this one facet does not do it justice. But the artificial settings of these types of experiments have always bothered me. I find the results intriguing but am always left wondering whether the findings really generalize (related to sampling issues and forking paths, but here it is not only the subjects and data that might not generalize but also the context).

    • Not having read the paper, I certainly hope that the researchers standardized the “reward” (i.e., $3 or $5) based on cost of living of the country and income of the participants. $5 won’t even buy you a beer in many parts of the world, whereas $3 is a substantial amount of money to individuals in other parts, even for those with comparatively good jobs for the region.

    • Regarding honesty among the Chinese, Scott Adams wrote:

      In one of my earlier career incarnations I was a banker. My job for a few years included reviewing and approving commercial loans for doctors and dentists. One day I declined a loan application for a dentist who, according to his recent tax returns, didn’t have enough cash flow to repay the loan. My boss at the time reviewed my work and turned the decline into an approval without even looking at the financials. When I asked why, he explained that the borrower had a Chinese name. I questioned the wisdom of this lending procedure and he directed me to the files of delinquent borrowers, challenging me to find any Chinese names in there. There weren’t any. I’m not judging, just telling you what happened.

      See near bottom of page.
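The back-of-the-envelope calculations in the first comment can be sketched in code. This is a minimal sketch under stated assumptions: `implied_cheat_rate` encodes the fair-coin logic (honest subjects report heads half the time, cheaters always report heads), and `prob_a_exceeds_b` is a simple Bayesian comparison with flat priors. The per-country sample size of 100 and China's 77% heads rate are hypothetical stand-ins, since the comment doesn't quote the paper's exact numbers; only the pooled 71% and the US 64% appear above.

```python
import random

def implied_cheat_rate(heads_share):
    # With a fair coin, honest subjects report heads half the time and
    # cheaters always report heads: h = 0.5*(1 - c) + c, so c = 2h - 1.
    return 2 * heads_share - 1

def prob_a_exceeds_b(heads_a, n_a, heads_b, n_b, draws=100_000, seed=1):
    """Posterior P(p_a > p_b) under independent Beta(1,1) priors, by Monte Carlo."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        pa = rng.betavariate(1 + heads_a, 1 + n_a - heads_a)
        pb = rng.betavariate(1 + heads_b, 1 + n_b - heads_b)
        wins += pa > pb
    return wins / draws

# Pooled rate from the press release: 71% heads implies about 42% cheating.
print(implied_cheat_rate(0.71))

# Hypothetical counts: ~100 subjects per country is a guess; the US rate of
# 64% is from the comment, and China's 77% is an assumed illustrative value.
print(prob_a_exceeds_b(77, 100, 64, 100))
```

Note that the posterior probability this prints depends heavily on the assumed counts; the commenter's 88% estimate presumably came from the paper's actual numbers, which is exactly why "country A probably cheats more" and "the difference is statistically significant" are different claims.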


  2. So many questions.
    How sure are we that coins in China, India, South Korea and Japan are minted so that both sides come up with equal probability?
    Does the concept of heads or tails even mean anything for those cultures/coins? (I just read a web page saying that for Japanese currency there is disagreement about which side is heads and which is tails; no idea if that is actually true.)
    It looks like there are about 100 participants per country, but we don’t know exactly how many in each country.

    But I agree with Dale that if you are going to make country-level comparisons, then 71% is the baseline; you don’t make all the comparisons to 50% and then start saying this difference from 50% is significant and that one isn’t.

    • Lauren:

      I think it’s just fine to publicize research that has not been published in a peer-review journal. What’s important is that there is some written document, ideally with raw data, that can be examined.
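The benchmark point in the comment above (compare each country to the pooled 71%, not to 50%) can be illustrated with an exact one-sided binomial tail. This is a sketch under assumptions: the 64% heads rate is the US figure quoted earlier, but the per-country n of 100 is a guess, since the exact sample sizes aren't given here.

```python
from math import comb

def binom_tail(k, n, p, upper=True):
    """One-sided exact binomial tail: P(X >= k) if upper, else P(X <= k)."""
    rng = range(k, n + 1) if upper else range(0, k + 1)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in rng)

heads, n = 64, 100  # hypothetical country at the US rate; n is assumed

# Against the 50% "honest" benchmark: is 64 of 100 heads suspiciously high?
p_vs_half = binom_tail(heads, n, 0.50, upper=True)

# Against the pooled 71% rate: is this country unusually low?
p_vs_pooled = binom_tail(heads, n, 0.71, upper=False)

print(p_vs_half, p_vs_pooled)
```

With these assumed numbers the same 64/100 looks wildly improbable relative to the 50% benchmark yet unremarkable relative to the pooled 71%, which is the commenter's point: the choice of baseline drives which differences get called significant.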

  3. So $5 is much much more significant of an incentive to a person from, say, Bangladesh than Denmark. Do they normalize the incentive for various nations tested?

  4. I find this extrapolation of a silly-online-task to real-world-relevant-honesty a bit iffy.

    e.g. People that would never steal a book or DVD from a library often download movies from torrents. Personally I have looked up info in bootleg pdf versions of books. When Dell sends me stupid surveys with a $100 Amazon Gift Card raffle I’ve randomly clicked through responses. I lie to telemarketers that I am busy in a meeting.

    All of this is dishonest. But does it correlate to the sort of “dishonesty” we care about in the real world?

  5. Not sure the difference between above the knee and knee length is statistically significant here: consider graphing hemlines and adding some error bars.

  6. As others have noted, “honesty” is not a unitary trait and is very context dependent. But, so far as I know, there’s no good, validated taxonomy of situations, so we can say “in situation A, type X people are honest, type Y less honest” and so forth.

    But I’m not going to trivialize the importance of figuring these sorts of things out. One of the foundations of modern society is taxation. Various forms of taxation depend on voluntary compliance to a greater or lesser degree. Compliance efforts by governments have costs, and should be directed by efficiency. These are important social science issues.

    Reporting on the coin flip is somewhat similar to the page in the Illinois 1040 tax form, on which you are supposed to list all the stuff you bought on the internet without paying sales tax (line 23; note that in a feeble attempt to enforce compliance, they now indicate you can’t just leave that line blank but have to put a zero in it). It’s well known that there’s no effective way to enforce this on normal people, since there’s no requirement for the internet sellers to report this to each state.

    Those as elderly as I am remember when interest and dividends were in the same situation. Financial institutions did not issue 1099s, and reporting was on the honor system.

    This whole area of honesty in tax compliance has big privacy concerns I will gloss over here, but has the huge advantage that it has a meaningful, measurable dependent variable — something sadly missing in much social science experimental research.

    • Thank you for that pertinent and informative comment! I am now living in Illinois (recent move) and I am quite sure that I lied on my tax filing (I pay an accountant to do my taxes but I gave him no information on my internet purchases that did not include Illinois taxes – Oh, my mistake, I had none). That just reinforces the issue of honesty being context dependent. Some rules (like this one on the IL tax form) are just so obnoxious that I might feel entitled or good about violating them – but I consider myself quite honest.

      You are right that the issue is quite important, and tax compliance is an area where this is true. So, the real question is whether there is any correspondence between willingness to cheat in an online experiment and willingness to comply with the tax laws. I’m not sure that there is. One cultural difference that I suspect might pertain to this experiment is that societies vary in terms of how sensible the rules seem, and perhaps more importantly, how much input the public has into the establishment of those rules. I can easily imagine that Chinese (living in China) participants might find meaningless rules to be something to break, whereas the Swiss might feel the opposite. Of course, there are a number of institutional features of the two countries’ political systems that might account for that. Absent anything in the experimental design to collect information about these attitudes, we are left with the finding that the Chinese are less honest than the Swiss. And, we are left wondering whether this applies to any meaningful context in which we might find that important.

      Seems like a considerable amount of effort went into an experimental design that was ill conceived to begin with.

      • “Seems like a considerable amount of effort went into an experimental design that was ill conceived to begin with.”

        My impression is that this happens a lot — that one way to improve the quality of research in a lot of fields is to improve the quality of the experimental designs. In other words, in addition to more post-publication review and more pre-publication review, we need more pre-data-collection review.
