Some will spam you with a six-gun and some with a fountain pen

A few weeks ago the following came in the email:

Dear Professor Gelman,

I am writing you because I am a prospective doctoral student with considerable interest in your research. My name is Xian Zhao, but you can call me by my English name Alex, a student from China. My plan is to apply to doctoral programs this coming fall, and I am eager to learn as much as I can about research opportunities in the meantime.

I will be on campus next Monday, September 15th, and although I know it is short notice, I was wondering if you might have 10 minutes when you would be willing to meet with me to briefly talk about your work and any possible opportunities for me to get involved in your research. Any time that would be convenient for you would be fine with me, as meeting with you is my first priority during this campus visit.

Thanks you in advance for your consideration.
Sincerely
Alex

To which I’d responded as follows:

Hi, I’m meeting someone at 10am, you can come by at 9:50 and we can talk, then you can listen in on the meeting if you want.
A

And then I got this:

Dear Professor Gelman,

Thanks for your reply. I really appreciate your arranging to meet with me, but because of a family emergency I have to reschedule my visit. I apologize for any inconvenience this has caused you.

Alex

OK, at this point the rest of you can probably see where this story is going. But I didn’t think again about this until I received the following email yesterday:

Dear Professor Gelman,

A few weeks ago, you received an email from a Chinese student with the title “Prospective doctoral student on campus next Monday,” in which a meeting with you was requested. That email was part of our field experiment about how people react to Chinese students who present themselves by using either a Chinese name or an Anglicized name. This experiment was thoroughly reviewed and approved by the IRB (Institutional Review Board) of Kansas University. The IRB determined that a waiver of informed consent for this experiment was appropriate.

Here we will explain the purpose and expected results of this study. Many foreign students adopt Anglicized names when they come to the U.S., but little research has examined whether name selection affects how these individuals are treated. In this study, we are interested in whether the way a Chinese student presents him/herself could influence the email response rate, response speed, and the request acceptance rate from white American faculty members. The top 30 Universities in each social science and science area ranked by U.S. News & World Report were selected. We visited these department websites, including yours, and from the list of faculty we randomly chose one professor who appeared to be a U.S. citizen and White. You were either randomly assigned into the Alex condition in which the Chinese student in the email introduced him/herself as Alex or into the Xian condition in which the same Chinese student in the email presented him/herself as Xian (a Chinese name). Except for the name presented in the email, all other information was identical across these two conditions.

We predict that participants in the Alex condition will more often comply with the request to meet and respond more quickly than those in the Xian condition. But we also predict that because the prevalence of Chinese students is greater in the natural than social sciences in the U.S., faculty participants in the natural sciences will respond more favorably to Xian than faculty participants in the social sciences.

We apologize for not informing you that were participating in a study. Our institutional IRB deemed informed consent unnecessary in this case because of the minimal risk involved and because an investigation of this sort could not reasonably be done if participants knew, from the start, of our interest. We hope the email caused no more than minimal intrusion into your day, and that you quickly received the cancellation response if you agreed to meet.

Please note that no identifying information is being stored with our data from this study. We did keep a master list with your email address, and a corresponding participant number. But your response (yes or no, along with latency) was recorded in a separate file that does not contain your email address or any other identifying information about you. Please also note that we recognize there are many reasons why you may or may not have responded to the email, including scheduling conflicts, travel, etc. An individual response of “yes” or “no” to the meeting request actually tells us nothing about whether the name used by the bogus student had some influence. But in the aggregate, we can assess whether or not there is bias in favor of Chinese students who anglicize their names. We hope that this study will draw attention to how names can shape people’s reactions to others. Practically, the results may also shed light on the practices and policies of cultural adaptation.

Please know that you have the right to withdraw your response from this study at this time. If you do not want us to use your response in this study, please contact us by using the following contact information.

Thank you for taking the time to participating in this study. If you have questions now or in the future, or would like to learn the results of the study later in the semester [after November 30th], please contact one of the researchers below.

Xian Zhao, M.E.
Department of Psychology
University of Kansas
Lawrence, KS 66045
[email protected]

Monica Biernat, Ph.D.
Department of Psychology
University of Kansas
Lawrence, KS 66045
[email protected]

“Thank you for taking the time to participating in this study,” indeed. Thank you for not taking the time to proofread your damn email, pal.

I responded as follows to the email from “Xian Zhao, M.E.” and “Monica Biernat, Ph.D.”:

No problem. I know your time is as valuable as mine, so in the future whenever I get a request from a student to meet, I will forward that student to you. I hope you enjoy talking with statistics students, because you’re going to be hearing from a lot of them during the next few years!
Andrew

I guess the next logical step is for me to receive an email such as:

Dear Professor Gelman,

A few weeks ago, you received an email from two scholars with the title, “Names and Attitudes Toward Foreigners: A Field Experiment,” which purportedly described an experiment which was done in which you involuntarily participated without compensation, an experiment in which you were encouraged to alter your schedule on behalf of a nonexistent student, thus decreasing by some small amount the level of trust between American faculty and foreign students, all for the purpose of somebody’s Ph.D. thesis. Really, though, this “experiment” was itself an experiment to gauge your level of irritation at this experience.

Yours, etc.

In all seriousness, this is how the world of research works: A couple of highly-paid professors from Ivy League business schools do a crap-ass study that gets lots of publicity. This filters down, and the next thing you know, some not-so-well-paid researchers in Kansas are doing the same thing. Sure, they might not land their copycat study in a top journal, but surely they can publish it somewhere. And then, with any luck, they’ll get some publicity. Hey, they already did!

Good job, Xian Zhao, M.E., and Monica Biernat, Ph.D. You got some publicity. Now could you stop wasting all of our time?

Thanks in advance.

Yours, etc.

P.S. In case you’re wondering, the above picture (from the webpage of Edward Smith, but I don’t know who actually made the image) was the very first link in a Google image search on *waste of time*. Thanks, Google—you came through for me again!

P.P.S. No, no, I won’t really forward student requests to Zhao and Biernat. Not out of any concern for Z & B—perhaps handling dozens of additional student requests per week would keep them out of trouble—but because it would be a waste of the students’ time.

P.P.P.S. When I encountered the fake study by Katherine Milkman and Modupe Akinola a few years ago, I didn’t think much of it. It was a one-off and I didn’t change my pattern of interactions with students. But now that I’ve received two of these from independent sources, I’m starting to worry. Either there are lots and lots and lots of studies out there, or else the jokers who are doing these studies are each mailing this crap out to thousands and thousands of professors. But, hey, email is free, so why not, right?

P.P.P.P.S. I just got this email:

Dear Dr. Gelman,

Apologize again that this study brought you troubles and wasted your valuable time. Sincerely hope our apology can make you feel better.

Regards
Xian Zhao

I appreciate the apology but they still wasted thousands of people’s time.

58 thoughts on “Some will spam you with a six-gun and some with a fountain pen”

  1. Huh. I’m now wondering if there’s a good case to be made here for the ethics of not debriefing at all. After all, in both cases, that’s the point when your blood pressure really hit the roof… whereas if you’d never heard any more about it you’d have forgotten about the incident completely!

    I’m joking. I think.

  2. Without getting into the merits (or lack thereof) of this particular study, I can see how studies of this general type can be useful, telling us things we would have a hard time learning another way. Is there any way that you, Andrew, would consider this form of study to be acceptable? Maybe you get an email that says “We are studying an issue of practical importance that is related to the education of students from other countries. If you agree to participate, then at some point in the next year we will send you a single, short email without identifying ourselves; your response to that email (if any) will constitute the entirety of your participation. We can’t tell you more than this without affecting the results. If you’re willing to participate, please reply with ‘yes’.” Would that be OK? They’d still be “wasting” your time merely by asking you to participate, and for the subset of participants who say Yes they will have wasted more of your time, since you’ll now be reading their emails twice, not once.

    • Phil:

      Yes, this came up in our blog discussion a few years ago on that earlier study. I think pre-asking would be ok (even though, yes, it would waste more time). Another option would be to pay the participants of the study. One problem, I think, is that sending people emails is free (to the sender), thus nothing is stopping them from spamming thousands of people. Jeremy Bentham would not be pleased.

  3. I am confused by the point of the second email (unless the study is still ongoing). If the data are about your first reply, then once you have replied, you get the debrief that apologizes that you changed your busy schedule around to make sure that you had 9:50 free. IRBs are usually very particular about having as little lying as possible. So why was it necessary for them to make you think Alex had a family emergency? I guess if they were not wanting publicity to get around they had to delay the debrief, but there seem to be lots of other ways to design this (and Phil’s suggestion, which is used in lots of social psychology, is better anyway). Why make you feel sorry for whatever emergency Alex had to deal with? (I’m feeling sorry for him now!)

    • Dan:

      It’s wheels within wheels within wheels, man. I’m still waiting for the debriefing that explains why the first debriefing was a fake, and that actually there’s a 1.2 million dollar bank account in Nigeria waiting for me if I just co-sign a few forms . . .

    • This whole thing brings up the IRB problem. By that, I do not mean to dump on people who serve on IRBs. A big part of the problem is that they are volunteers who don’t really get any compensation for their participation, so they can’t reasonably be expected to do a thorough job, since they are juggling their IRB responsibilities with lots of other responsibilities. In that sense, it’s similar to one of the problems with peer reviewing: Reviewers just don’t have the time to do a good job all the time.

      I don’t know a good solution, but I think part of it could be a joint effort of various professional societies and funding agencies to come up with guidelines that IRBs could follow, with recommendations to institutions about how to provide quality IRB reviews (it will necessarily cost something; you can’t expect to get high quality for free).

  4. If it were me, I would withdraw my response. While “spite” is probably a good enough reason, I think there’s a pretty good principled reason as well. Practically all the costs of this study are borne by the (unwilling) participants. As you point out, sending these emails cost pretty much nothing, while all the people who received them will have wasted at least a small amount of time and had their trust in communications of this sort damaged, at least in a small way. Withdrawing responses is the only way to shift some of the burden back on the researchers. I see this the same as telemarketers: I would never buy something from a telemarketer on principle, even if I was sure it was a good deal, because that would only increase the profitability (and therefore the incidence) of telemarketing.

    • Agreed. My perception of academia in the USA is that there is already a serious problem with people ignoring sincere but unsolicited email (indeed, that’s what motivates their study). The effect of the study can only be to make the problem worse – they are harming thousands of students across the globe.

      • I wonder what would have happened if Andrew’s first response was

        Hi Alex,
        It would be great to meet with you. I was going to be away those days because of a family emergency, but I have changed my plans. Changing the flight was only a few hundred dollars, but it will be worth meeting you!
        Really excited,
        Andrew

        I assume the authors knew that some people would change plans so I would hope they had something planned for replies like this.

        • Rahul:

          Yes, even a $10 coupon would’ve been ok to me, then I’d have felt fairly compensated. Actually, I don’t drink coffee so I would’ve given out the coupon to one of the students in my class. But that would’ve been just fine, I would’ve felt happy about the whole experience.

        • Lots of people don’t drink coffee. In our case, it’s only an inconvenience when we have guests and can’t offer them coffee in the morning. But no point in keeping it around for those rare occasions since coffee doesn’t “keep” very well (at least, good coffee doesn’t).

  5. As a social psych PhD, I have seen how studies like these are adored by the field because they use a behavioral (rather than paper-based) dependent variable, they seem to have “real-world” implications, and they are methodologically easy compared to either laboratory studies or intensive field work.

    However, now that I’ve left that world for a non-academic career, I have to admit that these studies look more and more ridiculous to me from an outsider’s perspective.

  6. Andrew,

    I’m curious as to why you think the original study was “crap-ass”. Experiments like these seem to avoid most of the major issues of social psychology. Do you think the statistics were dodgy?

    I can see why you’d be a little annoyed, though.

    • This type of approach is actually used all the time in discrimination cases. An individual (or couple) is (are) created with certain characteristics in common and one different–race, ethnicity, sex or some other legally protected status. Then the behavior of, say, the rental agent is observed to see whether it is identical for the two types of constructed individuals (or couples) (offering identical apartments at the same rental rate, for example).

      I can see why rental companies might not like this type of behavior, but it is really the only way to demonstrate pervasive discrimination, as it is a true experiment (keeping all conditions constant but changing one), and it’s a very useful technique. Andrew’s position seems to be that it shouldn’t be done on him because it is a waste of time, but really, there are almost no academics carrying out this type of research, and the amount of time he spent was minimal. And it does provide an indicator of general societal attitudes towards legally protected groups. I wonder if he would be happier to receive this e-mail from the Department of Education, followed by a lawsuit, if, say, he had for whatever reason denied the Chinese-surnamed student 10 minutes but given a non-Chinese-surnamed student the 10 minutes. But that is how we monitor other decision-makers (see the rental agent example above).

      • The reason it shouldn’t be done is not so much because it wastes respondents’ time but because it conditions them against responding to future emails, thus hurting the very people the study is purporting to help.

      • The problem is that you don’t, and can’t, keep all conditions constant. This is true in any field, which is why we replicate (to average out all the varying factors we can’t, or don’t know to, control). To go with your rental agent example: were the individuals (or couples) exactly as punctual as one another? Were they equally polite? Were they wearing the same clothes, or near enough? Were they driving the same car? Was the rental agent in the same mood on both days? Was the weather the same on both days? Are there really identical apartments available? If we are monitoring decision makers this way without quite a bit of replication, then we are doing it wrong.

        From the researchers’ followup, it sounds like their sample size was 30 respondents times however many “social science and science” departments they decided to cover. That sounds pretty small to me. If even two of the 15 people in a set happened to be on vacation or at a conference the next week, that would make a huge difference. They say they selected one faculty member that “appeared to be a US citizen and white” from their website picture at random. Even if we assume they actually made the selection at random (i.e. listed all the possible names and made a truly random selection from the list), they are skewing (off the top of my head) toward faculty who have a website photo and on the basis of their determination of who is “white”. From the way the email is worded it seems as though the US citizen determination was made from the picture, which seems odd. Could that mean they skewed towards people with English-sounding names? Why limit themselves to the top 30 departments when email is free? That would imply that they are skewing toward the busiest professors. The English used in the initial email is pretty good, but not perfect–how does that interact with the name? Presumably the “you can call me by…” clause was pasted into the “Alex” set. Could that clause have an effect? It’s pretty clumsily worded (being inserted before the “a student from China”)–would a better-worded email have made a difference? What about if the Chinese name wasn’t presented at all?

        Even if this study was large enough and the respondents were well-selected, the only thing being tested is whether the presence of an English name makes a difference in people’s responses to the email, so there’s no way to compare the treatment of either group with the treatment of any other ethnicity. Even if there were a difference, what would that even mean? It’s hard to see it as discrimination, considering that the ethnicity of the student is obvious in both cases. Considering that people with a certain name aren’t a protected group, how is this useful? Finally, is one’s response to a short-notice request for a meeting really evidence of anything?

        There are just so many holes and confounding factors in this study that it really does seem like a complete waste of time. Even if you waste five minutes apiece of 12 people’s time, that’s still an hour wasted. And I absolutely fail to see how this study being conducted could in any way prevent or even reduce the likelihood of anyone being sued for anything. I still say the best course of action is to withdraw consent from any study like this to keep from encouraging it to catch on.
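
        To put a rough number on “pretty small”: here’s a minimal power sketch in Python, with response rates of 60% (“Alex”) vs. 45% (“Xian”) invented purely for illustration, not taken from the study. Even under that fairly large assumed effect, you’d need on the order of 86 respondents per condition just to detect the main effect with 80% power, and several times that to detect the department interaction they also predict.

          # Minimal power sketch. The 0.60 / 0.45 response rates are
          # assumptions for illustration, not the study's numbers.
          from statsmodels.stats.proportion import proportion_effectsize
          from statsmodels.stats.power import NormalIndPower

          h = proportion_effectsize(0.60, 0.45)  # Cohen's h for the two rates (~0.30)
          n = NormalIndPower().solve_power(effect_size=h, power=0.80, alpha=0.05)
          print(round(n))  # ~86 professors per condition, ~172 emails for the main effect alone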

        • There is an effort made to make the couples (say) as close to each other as possible, and they are looking for relatively overt acts of discrimination (such as offering certain properties to the non-minority couple that are not offered to the minority couple). Here is a write-up of such efforts:

          http://www.propublica.org/article/no-sting-feds-wont-go-undercover-to-prove-housing-discrimination

          I’ll leave it to you to decide if the exact matching you outline is necessary in the cases described in this article. For those that don’t want to wade through this article, the bottom line is that housing discrimination happens all of the time (at least anecdotally) and there are relatively few resources devoted to uncovering it.

          On a more general note, there is a great divide in how minorities perceive the incidence of racial discrimination versus non-minorities. It influences research efforts also–in political science, empirical analyses of elections typically focus on issue voting and party ID and are typically approached in a color-blind manner, whereas practitioners (at least in racially mixed areas) pay much greater attention to race and ethnicity. Part of it is difficulty with measuring such things, but part of it is that researchers (particularly of the dominant social class, which most academics are) don’t like to analyze in terms of immutable characteristics (it’s worse than Marxism in many ways–at least one can change one’s class, but one can’t change one’s race or gender). I don’t know whether Devane’s objections to rental stings are influenced by his race, and “check your privilege” has apparently become a way of ending any discussion in academic circles, but it helps to inspect one’s own beliefs/actions for unconscious biases.

        • I’d slow down before turning to the class and analyzing people’s motivations too much. In the context of the topic at hand, your hypothetical rental agent analogy was a useful way to talk about the potential problems with underpowered studies where confounding factors are not sufficiently controlled for nor sufficiently understood. It’s funny how describing areas of concern with an approach amounts to “objecting” to it. I specifically objected to performing such an approach without sufficient replication (I should have also said without the proper controls). The fact that something is difficult to measure is no excuse for measuring it poorly. Hopefully, in the real world, rental agent studies are carefully controlled and replicated. The study at hand, though, seems to me to be underpowered and poorly controlled. It uses unwillingness to schedule an appointment as a proxy for prejudice, I guess? Given enough power, any main effect they might show may be valid, but they appear to want to also look at the interaction of that effect with department, and given their sampling approach I very much doubt that such an approach is valid. Even if they did show a valid, significant effect, it would seem to only relate to the use of an English name by Chinese students. You seemed to imply earlier that it might root out elements of discrimination, but is that really the case? In both cases, the email clearly discloses the nationality and Chinese name of the student. It’s hard to see how a difference might be a discriminatory matter, considering that it would be known to the people responding to the email (I think) that the English name was something intentionally chosen by the student.

        • I’m not “turning to class”, just pointing out that pretty much every study in psychology, sociology, and political science (that looks at it rather than ignores it) finds evidence of racial differentiation (showing a picture of an African-American as opposed to a white person before asking about punishment for a hypothetical crime results in a desire for heavier punishments, etc). These biases are amenable to explication through real-world experimentation (the rental agents, say). How many social science phenomena can we say that about (markets are an obvious one–raise prices, consumption goes down; lower prices, it goes up)?

          Getting back to this particular study, I was addressing Andrew’s claim that his time shouldn’t be wasted. My argument is that there are relatively few studies of this type and that it is a very important social phenomenon, so it comes down to this: I think he’s complaining too much. Social psychology, like all soft sciences, is very susceptible to fads, and this type of study is certainly faddish these days. And doubtless it can be done better. But measuring attitudes towards race/ethnicity/nationality in what is attempting to be a multi-racial society is important, and if one “wastes” a few minutes in a year being measured, the tradeoff for society is worth it (and I think he’s wasting a lot more time reading John Updike novels than doing statistics, but that’s just my evaluation).

        • I actually meant physically “turn to the [rest of the] class” as in “the other students in your lecture hall”, referring to the way you addressed me, then referred to me in the third person while speculating on my motivations. It was probably a little more snide than necessary, and easily misunderstood in this context.

          “But measuring attitudes towards race/ethnicity/nationality in what is attempting to be a multi-racial society is important…”

          I think it’s worth clarifying our differences here. I don’t disagree with this statement (I wouldn’t anthropomorphize “society”, but saying “in a society that is becoming more multi-racial” doesn’t fundamentally change it). However, I don’t believe that this statement (though true) is a valid defense of this study.

          1) It (the study) explicitly doesn’t measure attitudes toward race, ethnicity, or nationality per se. It simply purports to measure attitudes (among faculty who “appear to be white”) toward the use of an English name by Chinese students. It cannot indicate what those faculty members’ attitudes towards Chinese students generally are. It also cannot make any statements about whether the respondents’ attitudes differ from the attitudes of those of other ethnicities. It purports to be able to determine whether these attitudes vary among departments, but I have serious doubts about whether that would be valid, given the way the study is constructed.

          1a) This sort of study seems ripe to me for unjustified claims. That is, even though it does not and cannot test attitudes toward Chinese students in general, I worry that claims of that nature might ultimately be made. I realize this is rank speculation on my part, and there is every possibility that the researchers will be scrupulous in their claims, but I am a bit weary of unjustified headlines, especially in the social sciences.

          2) I don’t think I’m misrepresenting you to characterize part of your argument as, “Even if this study is flawed, even deeply flawed, the subject matter is important enough that the benefits (though perhaps smaller than would derive from a better constructed study) still make it worth the costs.” Specifically, in this case, the costs are the unwilling participants’ time. Here is where I disagree with you. To analogize, drug discovery is an incredibly important field, but that is no reason to stand for shoddy research in that area. In fact, I would argue that the more important an area of study, the higher the standards of research should be. Poor work in drug discovery can lead to tremendous wastes of time and resources on dead ends, both in the original work and in follow-up work. It can cause the risks of certain drugs to be poorly understood. Perhaps most importantly in the long run, it can damage the public’s (and funding agencies’) trust in the value of the field. That is how I view a study like this. Through poor construction, it contributes very little information to the field, can yield results that are prone to misapplication, damages the trust people have in communications of this variety, and reduces the trust that others have in the field in general. The point of criticizing a poor study is not that it improves the field in a smaller fashion than it could, but that it can actively damage the field.

          I’ll go back to the rental agent analogy. While the subject matter being tested is important to test, and difficult to test otherwise, a poorly constructed sting has very real costs, with few benefits to show for it: it decreases the level of trust among people in general, generates a level of resentment among those “stung”, can miss real cases of discrimination, and can result in people being falsely accused of discrimination. If such studies were generally conducted in this manner, they would eventually generate widening cynicism about the approach that could ultimately bleed into cynicism about the cause.

          3) Obviously, the unwilling nature of participation is problematic. While it is clear that such an approach is necessary at times, it compounds the disadvantages when the study in question is poorly constructed. If this had been a well constructed study (larger sample set, better blocking, better selection of participants, use of multiple names, etc.) I would have less of a problem with this aspect. As it is, like other spammers, the researchers are free-riding on a system that is largely based on trust. That would not be so bad if society really benefited from their work, but I suspect the biggest benefit will be to the researchers in the form of a CV line.

          4) I’m less likely than some to discount small costs that are spread widely. Although any one respondent may have only wasted 5 minutes, if there were a couple hundred participants, that amounts to roughly two working days lost (200 × 5 minutes ≈ 17 hours).

  7. Ah, payback’s a #@&*!. Years ago I remember reading in one of Abbie Hoffman’s books where a bank pissed him off so he opened up a safe deposit box (under a false name) and filled it with a pot roast. (Actually, I can’t recall if he said he did it or if he was just floating the idea. No matter.) His speculation was that it would be about a week before the stink got so bad that the bank would have to drill the locks and remove it. The challenge of course would be to determine which box contained the roast, otherwise they’d have to start drilling locks until they found it. Ouch.

    PS I like the study invitation protocol/etiquette that Phil suggests above. If the invitation were couched that way then I’d probably accept.

  8. How about this version?

    Original Email:

    Dear Professor Gelman,

    My name is [Name Treatment] and I am a student at [University Treatment].

    I [subject-Verb Agreement Treatment] to graduate programs in [Program Treatment], and I will be visiting Columbia in [Time Horizon Treatment] as part of my [Diction Treatment].

    I [Organizational Capacity Treatment] final dates, but [Confidence Treatment] email again before my trip and [Casualness Treatment] with you for [Time Demand Treatment] if [Qualifier Treatment] when I visit.

    [Deference Treatment].

    [Closing Treatment],
    [Nickname Treatment]

    Immediately upon receiving a reply to the original email, this email is sent back from the experimenter to the unwitting participant:

    Dear Professor Gelman,

    My name is [Real Name], and I am [title] at [Institution]. The email to which you just responded was part of an experiment I am running at the [Real Institution/Lab].

    I apologize that I deceived you, but the project required the small deception. Details of the experiment and hypothesis tests can be found here [link] and a project description is provided below. The linked page also includes a seal of authenticity from the [Real Institution IRB] which confirms their approval of the experimental design and also confirms that no further deception is occurring.

    I hope you will understand that your time was used for a legitimate research purpose. In order to compensate you for your time and for deceiving you, I will donate $3 in your name to a fund that will be used solely for the purpose of providing tuition and fee remittance for students from low income households [link].

    If you do not wish to have your responses included in our analysis, click the following link and your entire record will be removed from our database and the donation will not be made [link]. I ask that you do not do this, and allow us to use the information you have provided. Obtaining high quality data is, as you know, the foundation of any high quality quantitative research. I assure you that all identifying information will be purged from any publicly available data released as part of this project. The project privacy policy can be found here [link].

    Your colleague,
    [Real Name]

    *Description of the Project
    **Boilerplate IRB/Legal stuff.
    ***Re-copy the links to the study design, privacy policy and opt-out pages.

    • Jrc:

      Yes, this is much better, partly because it does not involve any scheduling and partly for the $3 (which I don’t think would be enough if the treatment were to perturb me but would be just fine in this case). Indeed, I wouldn’t see the need for the last part (“If you do not wish to have your responses included in our analysis . . .”).

      One reason that particular email bugged me so much is that I’ve been spending lots of time lately answering calls from criminals (or whatever you want to label them) who call our home phone claiming to need to fix our computer or some other scam. Which leads me to the choice of wasting time with these slimy people or else screening my calls and losing some actual personal contact. The principle is the same—a public channel is being abused for private gain. The only difference is, in this case the private gain is some pair of bozos trying to get a publication on their C.V., at minimal effort and cost to themselves.

      • Allowing your responses to be excluded is standard practice, especially when you have not given informed consent prior to collection. In that way you are basically given post hoc informed consent. And you always have the right to withdraw from any study at any time. In some cases when interviews are about particularly sensitive topics I’ve seen IRBs require a re-confirmation of consent at the end of the interview. However, I think they should give the $3 no matter what, since you gave them your time and data under false pretenses.

        The second fake email is totally unnecessary (unless, as you said, it’s part of a second, even more sneaky, Obedience-like study where you think the study is about discrimination based on names, but it’s really about whether people will blog about the annoyance of participating in studies like this). When I was on our IRB we would not have approved the second email.

        What they sent you doesn’t even meet what I would consider minimal IRB standards at our institution, since they haven’t told you how you were selected. In this case I bet they spammed some mailing list rather than take a reasonable sample size, given the lack of sophistication of the study. I like Phil’s idea, and then they actually could do repeated measures.

  9. There is a huge lack-of-replication problem in the social sciences.

    “Now could you stop wasting all of our time?” is inaccurate and the whole description is unkind.

    Are you proposing that we never replicate because it is a waste of time?

        • Maybe not the ideal wording but

          > In all seriousness, this is how the world of research works

          Unfortunately that is an excellent description of some of the groups I have interacted with.

          Replicate has many meanings and often this is not well discussed.

          The caveat JG Gardin used to add to “redone by an independent third party” was something like: given a real interest, was it something that _deserved_ redoing? Just redoing basically the same flawed study that produces the same misleading, biased finding likely does more harm than good.

          An example I am aware of was a researcher who asked me once, “Do you know a paper that argues that you can decide whether to do fixed or random effects analysis in a meta-analysis with a test?” My answer was yes, but it’s a flawed and problematic approach that seldom if ever should be used. He answered, “I know, but I just need something to get the paper by the journal reviewer, and that would do the trick.” They used the reference, the meta-analysis was published, and given they were a well known researcher (now in the list of the top XXX biomedical researchers in the world), many others started copying and using that method in their meta-analyses.

  10. This reminds me of “Inception”. In a couple of weeks you will get a comment on this post saying something like “Thank you all for participating. That email was part of our field experiment about how people react [on blogs and forums] to [experiments performed on other people] about Chinese students who present themselves by using either a Chinese name or an Anglicized name.”

  11. Pingback: A week of links | EVOLVING ECONOMICS

  12. Pingback: Contacting Ph.D programs prior to application season

  13. Pingback: Was it really necessary to do a voting experiment on 300,000 people? Maybe 299,999 would've been enough? Or 299,998? Or maybe 2000? - Statistical Modeling, Causal Inference, and Social Science

  14. Wow, not even close to ethical. An IRB might allow you to mislead volunteers during a study. But they certainly don’t have the moral authority to “approve” interfering with and lying to people who did not volunteer for your study.

  15. Pingback: Scientists behaving badly - Statistical Modeling, Causal Inference, and Social Science

  16. Pingback: Crowdsourcing Data Analysis 2: Gender, Status, and Science - Statistical Modeling, Causal Inference, and Social Science

  17. Pingback: Oh, it's so frustrating when you're trying to help someone out, and then you realize you're dealing with a snake. - Statistical Modeling, Causal Inference, and Social Science

    • Baptiste:

      Awesome! This paper hits all the right notes:
      – Published in an Elsevier journal
      – Forking paths (“The preference for Anglo (Chinese) names over Chinese (Anglo) names was apparent among those high (low) in assimilationist and low (high) in multicultural ideologies.”)
      – Use of p-hacked findings to make general claims (“These findings point to an important interplay between partial intergroup membership and acculturation ideologies of perceivers in predicting bias.”)
      – And, ummm, I’m not sure what else: the article is paywalled, and like hell I’m gonna pay $35.95 for an article that itself is based on data that I supplied for free!

      Maybe these people will hit the real jackpot and get a TED talk. Also, now that Marc Hauser and Amy Cuddy have retired, I guess there are two open professorships at Harvard!
