
Cry of Alarm

[cat picture]

Stan Liebowitz writes:

Is it possible to respond to a paper that you are not allowed to discuss?

The question above relates to some unusual behavior from a journal editor. As background, I [Liebowitz] have been engaged in a long-running dispute regarding the analysis contained in an influential paper published in one of the top three economics journals, the Journal of Political Economy, in 2007. That paper was written by Harvard’s Felix Oberholzer-Gee and Kansas’s Koleman Strumpf (OS) and appeared to demonstrate that piracy did not reduce record sales. I have been suspicious of that paper for many reasons. Partly because the authors publicly claimed that they would make their download data public, but never did, and four years later they told reporters they had signed a non-disclosure agreement (while refusing to provide the agreement to those reporters). Partly because Oberholzer-Gee is a coauthor with (the admitted self-plagiarist) Bruno Frey of two virtually identical papers (one in the JPE and one in the AER) that do not cite one another. But mostly because OS have made claims that they either knew were false, or should have known were false.

Although I have been critical of OS (2007) since its publication, it was not until September of 2016 that I published a critique in one of the few economics journals willing to publish comments and replications, Econ Journal Watch (EJW). [I also have a replication of a portion of their paper not reliant on their download data, currently under review at a different journal.] The editors of EJW invited Oberholzer-Gee and Strumpf (OS) to submit a response to my critique, to be published concurrently with my critique, but OS instead published their defense in a different journal, Information Economics and Policy (IEP, an Elsevier journal behind a paywall).

OS’s choice of IEP was not surprising. Among other factors, the editor of the journal, Lisa George, was a student of Oberholzer-Gee (he served on her dissertation committee), had coauthored two papers with him, and listed him as one of four references on her CV. IEP clearly fast-tracked the OS paper—it was first submitted to the journal on October 13, and the final draft, dated October 26, thanked three referees and the editor. The paper was published in December, although it often takes over a year from submission to publication in IEP.[1]

I had spent years attempting to get OS to publicly answer questions about their paper, so I was delighted that OS finally publicly defended their paper. Their published defense still left many questions unanswered, however, such as why the reported mean value of their key instrument was four times as large as its true value, but at least OS were now on the record, trying to explain some of their questionable data and results.

As a critic of their work, I took their published defense as a vindication of my concerns. Although their defense was superficially plausible, and was voiced in a confident tone, it was chock full of errors. For example, in EJW I had noted that OS’s data on piracy, which was the main novelty of their analysis, exhibited unusual temporal variability. I knew that OS might claim that this variability was a byproduct of the process of matching their raw piracy data to data on album sales, so I measured the variability of their raw piracy data prior to the matching process, and included a paragraph in EJW explicitly noting that fact. Yet in IEP, OS mischaracterized my analysis and claimed that the surprisingly large temporal variability was due to the matching process. Not only was their claim about my analysis misleading, but their assertion that the matching process could have materially influenced the variability of their data was also incorrect, as was clearly revealed by visual inspection of the data and a correlation of 0.97 between the matched and unmatched series. The icing on the cake was their attempt to demonstrate the validity of their temporal data by claiming a +0.49 correlation of their weekly data with another data set they considered to be unusually reliable. In fact, the correct correlation between those data sets was ‑0.68 (my rejoinder provides the calculations, raw data, and copies of the web pages from which the data were taken). All these errors were found in just the first section of their paper, with later sections continuing in the same vein.
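A correlation check of this sort is simple to reproduce in principle. Here is a minimal sketch with invented weekly numbers standing in for the matched and unmatched piracy series (the actual data are in the rejoinder's replication files, not reproduced here):

```python
import numpy as np

# Hypothetical weekly series standing in for the matched and unmatched
# piracy data; these numbers are invented for illustration only.
rng = np.random.default_rng(0)
unmatched = rng.normal(100.0, 20.0, size=17)         # 17 weeks of raw downloads
matched = unmatched + rng.normal(0.0, 3.0, size=17)  # matching adds small noise

# Pearson correlation between the two series; a value near 1 means the
# matching step barely altered the temporal pattern of the data.
r = np.corrcoef(unmatched, matched)[0, 1]
print(round(r, 2))
```

With noise this small relative to the week-to-week variation, the correlation comes out close to 1, which is the kind of evidence the 0.97 figure above rests on.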

After I became aware of the OS paper in IEP, I contacted the IEP editor and complained that I had not been extended the courtesy of defending my article against their criticisms. Professor George seemed to understand that fair play would require at least the belated pretense of allowing me to provide a rejoinder:

I welcome a submission from you responding to the Oberholzer – Strumpf paper and indeed intended to contact you about this myself in the coming weeks.

She also seemed to be trying to inflate the impact factor of her journal:

As you might be aware, IEP contributors and readers have rather deep expertise in the area of piracy. I would thus [ask] that in your response you take care to cite relevant publications from the journal. I have found that taking care with the literature review makes the referee process proceed more smoothly.

The errors made by OS in IEP seemed so severe that I thought it likely that IEP would try to delay or reject my submission, both to protect OS and to protect the reputation of IEP’s refereeing process. Still, I had trouble envisioning the reasons IEP might give if it decided to reject my paper. I decided, therefore, to submit my rejoinder to IEP but, to avoid a decision dragging on for months or years, I emphatically told Professor George that I expected a quick decision and planned to withdraw the submission if I hadn’t heard within two months.

Wondering what grounds IEP might use to reject my paper indicated an apparent lack of imagination on my part. Although the referees did not find any errors in my paper, the editor told me that she was no longer interested in “continued debate on this one paper [OS, 2007]” and that such debate was “not helpful to researchers actively working in this area, or to IEP readers.” Apparently one side of the debate was useful to her readers in December, when she published the OS article, but that utility had presumably evaporated by January when it came to presenting the other side of the debate.

Since Professor George was supposedly planning to “invite” me to respond to OS’s article, she apparently feels the need to keep up that charade, and does so by redefining the meaning of the word “response.” She stated: “I want to emphasize that in rejecting your submission I did not shut the door on a response…IEP would welcome a new submission from you on the topic of piracy that introduces new evidence or connects existing research in novel ways.”

Apparently, I can provide a “response,” but I am not allowed to discuss the paper to which I am supposedly responding. That appears to be a rather Orwellian request.

I have complained to Elsevier about the incestuous and biased editorial process apparently afflicting IEP. We will see what comes of it. The bigger issue is the quality of the original OS article, the validity of which seems even more questionable than before, given the authors’ apparent inability to defend their analysis. This story is not yet over.

  1.   The other papers in that issue were first received March 2014, September 2015, February 2015, November 2015, and April 2016.

Wow. We earlier heard from Stan Liebowitz on economics and ethics here and here. The above story sounds particularly horrible, but of course we’re only hearing one side of it here. So if any of the others involved in this episode (I guess that would be Oberholzer-Gee, Strumpf, or George) have anything to add, they should feel free to do so in the comments, or they could contact me directly.

P.S. I hope everyone’s liking the new blog titles. I’ve so far used the first five on the list. They work pretty well.

P.P.S. A commenter writes of a potential conflict of interest: Liebowitz runs a center that receives funds from the record industry, and he also serves as an expert witness for these companies.


  1. First of all, yes, I have been enjoying the blog titles! My favorites so far are this one and “The Mannequin.” It’s fun to see how well they work.

    I was struck by Stan Liebowitz’s comment: “Apparently, I can provide a ‘response,’ but I am not allowed to discuss the paper to which I am supposedly responding. That appears to be a rather Orwellian request.”

    In addition, it seems that the editors are steamrolling any substantial response in the name of “moving on”:

    “I want to emphasize that in rejecting your submission I did not shut the door on a response…IEP would welcome a new submission from you on the topic of piracy that introduces new evidence or connects existing research in novel ways.”

    This reminds me not only of Fiske’s “soldiering on,” but of a larger cultural tendency to avoid controversy, uncertainty, and reckoning, in the name of “moving on” and focusing on the “new.”

    People criticize this sort of talk when it comes from Kellyanne Conway–but it’s much more pervasive than that.

    • After following some links, reading some comments, and thinking it over, I see more of the complexity of the situation. I don’t know who is right here, and I know little about piracy. But it would be good, in general, if journals made room for ongoing debates and critiques, provided the arguments had merit and were not redundant.

  2. Jonathan (another one) says:

    Wow… this dispute has been going on for a really long time. It was covered by the Chronicle of Higher Education almost nine years ago, and again in 2010. I had no idea that Liebowitz was still after them. I asked a colleague of mine tangentially involved in this dispute what he thought about this latest sally. His response: “[T]hat paper has an obviously wrong conclusion and time has only further proven it wrong. So, you know. There’s that.”

  3. John says:

    This —-> “As you might be aware, IEP contributors and readers have rather deep expertise in the area of piracy. I would thus [ask] that in your response you take care to cite relevant publications from the journal. I have found that taking care with the literature review makes the referee process proceed more smoothly.”

    Looks like a gangster talking.

    It’s worth repeating the last part:

    “I have found that taking care with the literature review makes the referee process proceed more smoothly.”

    • John says:

      If an editor ever replied that to me, I would put this journal on the black list.

    • Arno says:

      Apparently my English is not good enough: what exactly does the statement about “taking care with the literature review” imply? Any hints appreciated.

      • Michael says:

        From the context, “taking care with the literature review” means citing IEP papers, and “make the referee process proceed more smoothly” means your paper will be accepted. In other words, if you want your paper to be accepted for publication, you had better cite IEP papers to raise the journal’s impact factor.

    • Robin Morris says:

      I wonder how common it is for editors to try to inflate their journal’s impact factor? I once had an IEEE editor ask that I cite more papers from the journal I was submitting to. I don’t remember there being a threat along with the request, though.

      • Marcus says:

        Pretty common – a favorite tactic in my field is to put papers up online for years so that they can accumulate citations and be disseminated before they are finally included in an issue.

        • Rahul says:

          As an aside, is this evidence of the field being relatively static? i.e., people rehashing the same beaten arguments for years with no real progress?

          If you wait years, shouldn’t the research have advanced so much, or others have scooped you, to the point where your work is essentially old news and unpublishable?

          I often feel that for some areas of scholarship you could pull articles out of a 1960s journal issue, make cosmetic changes, and pass them off as a current issue without anyone noticing. I think that’s sad.

  4. Jonathan says:

    To take the Journal POV: this is old, so if you have something new then fine but we’re not going to revisit old stuff because you want to do that. And by old, “we” mean the recording industry has changed dramatically over the past few years, let alone since 2007 – the year the iPhone came out – so the relevance of the original article to the larger world is questionable and therefore criticism of it isn’t something we’re going to spend resources on. “We” can see how someone might be concerned about the general issue of the effect of piracy on a good’s sales but that really needs new work.

    That approach is easily defensible: the music business is switching rapidly to subscriptions – and is growing in revenue again because of that – and that growth seems directly related to if not caused by much faster data transmission rates and increasingly high data caps (now trending towards “unlimited”). There are interesting questions about how people consume music: what we might call narrow access, meaning you buy physical and/or buy digital, versus wide access with its flavors of premium subscription or “freemium” ad-supported and how buying choices allocate, etc. (Some questions are specific: I wonder for example how Sirius XM competes as cars enable cellular or wifi and you can use your own music subscription? I wonder how the growth in “premium” or “collector” CD and even LP sales will trend. More generally, I wonder about the changes in music purchases per individual and how these segment, etc.)

    And generally: it is, I agree, really hard to believe that paper was right because the alternative is that somehow people just stopped buying music to the tune of billions of dollars a year in the US alone.

    • Jonathan (another one) says:

      Just asking, but does it make any difference to you *why* the paper turned out to be wrong? Should it make any difference to the editors? If not, what should they do about the next paper that uses the same methods? And without someone actually managing to publish an article about what is wrong with this article, how do you expect students 20 years from now to understand any of this?

    • Dzhaughn says:

      As noted above, given that the journal published the OS rebuttal a month earlier, it is not credible that this is actually their point of view. I suggest their point of view is that this sort of stonewalling is the best approach to business.

  5. anon says:

    Have to go anonymous here for obvious reasons, but part of the problem is that Stan has a well-known….reputation in the community, especially among IP scholars. Basically, he has in much of his work been arguing that intellectual property is far more important, and piracy far more damaging, than essentially any other serious economist would suggest. He argues this point not in the style of academic debate but in the style of active, unhinged hostility.

    Since this paper (and even before), there have been a huge number of papers on piracy and the decline in music sales (Rob and Waldfogel 2006, for instance). The debate is essentially: the late 1990s were strange because of a boom in sales due to CDs (including many repurchases of classic hits). Just as sales declined as other music tech matured in the past, at least some of the decline in music sales is naturally due to the fact that by 2001, everyone who wanted their Stones record on CD had already bought it. Exactly how much is piracy and how much is not? The lit, which includes Stan’s 2006 paper in the Journal of Law and Economics, has used many methods to look into this, and the most compelling studies tend to find that much less than 100% is due to piracy. That is, Stan’s perspective here has not been borne out by later studies.

    Stan’s earlier work on piracy is very good, and quite influential, as is his (correct) rejoinder to the idea that path dependence matters in most cases. I think he is generally a careful researcher. Some of his critiques of OS are correct. But most of the critique is already in his (well-cited) 2006 critique in the Journal of Law and Economics, and every serious IP economist has seen that paper. What exactly is to be gained by a 10 year battle over whether X instrument is actually good or not? The point and counterpoint have been available, and have been commented on, ad nauseum. This is not a “the field is protecting bad science” situation, but rather a “there has already been the debate that Stan wants to have, and it does not appear his view is correct.”

    • B D McCullough says:

      If you think Stan’s papers on OG/Strumpf’s paper are written in the “style of active, unhinged hostility”, then you must have a very low threshold for tolerating disagreement. Stan’s papers are well-reasoned and clearly presented. Only OG/Strumpf and their immediate relatives would characterize Stan’s papers as unhinged hostility. The reason Stan persists after all these years is that there is a huge error in the literature (namely, the OG/Strumpf paper) that needs to be corrected. This is not a minor paper. It has been cited a thousand times, and this incorrect paper continues to be cited as if it were correct. If you think that Stan’s JLE paper contains the same information as his papers on OG/Strumpf, you need to reread them all again.

      • Jack PQ says:

        As long as the OS paper keeps getting new citations, the problem remains and Liebowitz is right to persist. The day the field stops citing OS, then we could say we have moved on, as the editor of Information ec & pol says. I am not saying Liebowitz is right, only that it is wrong to say the debate is over or that we have moved on.

    • anon says:

      “… including many repurchases of classic hits.” Some refer to this point as the “librarying” hypothesis. This hypothesis can be tested easily, for example by examining trends in the proportion of record sales coming from new versus old albums.

      “the most compelling studies tend to find that much less than 100% is due to piracy.” I believe that Rob and Waldfogel (2006) instrumental variable estimates imply that “more” than 100% of the observed decline in record sales was due to piracy (music sales would have increased in the absence of piracy).

      • anon says:

        First “anon” here. This is not entirely true. Waldfogel wrote an NBER chapter summarizing the literature a couple of years ago. In line with what I wrote, “With few exceptions, empirical studies of file sharing in music find sales displacement. The few studies that provide a direct estimate of sales displacement suggest a rate closer to zero than one, however. Rob and Waldfogel’s (2006) best estimates are around −0.2.” This summary looks at Waldfogel’s own work, OS, Liebowitz, and many others. Indeed, the literature has by and large moved on from sales displacement because it isn’t the interesting question: welfare is. And there is almost complete agreement that piracy has not decreased the *supply* of new music, which means that piracy (same variety as before, lower de facto prices) may have even raised total welfare!

        • anon says:

          Second anon here. I referred to Rob and Waldfogel (2006) only because this is the paper that you cited in your comment. The instrumental variable estimates in Rob and Waldfogel (2006) do imply an effect of more than 100% (independent of what Waldfogel wrote more recently in the NBER chapter or elsewhere). The results found by Liebowitz are not really outliers. Rob and Waldfogel (2006) also report OLS estimates, and these imply an effect of about 35% (the −0.2 needs to be converted to make it comparable to the results in other papers, since all these papers use units that are not directly comparable). This paper by Liebowitz does all these calculations.

          I can believe that piracy raised welfare and I also agree that looking at the supply of new music is important, but I cannot see how this is relevant in this discussion.

    • Stan Liebowitz says:

      Anon1 claims that record sales were unusually high in the 1990s, mainly due to people converting their music libraries to CDs, an activity known as librarying. It is possible that librarying in the 90s ended at the turn of the century, leading to the decline in sales that followed, but it would be nice to see some empirical support for that claim, which I haven’t seen. If Anon1 knows of some evidence, perhaps he/she could share it with us. In my unpublished, early criticism of OS, I provided a test of this hypothesis. I examined sales of older albums (the type that might be converted to new formats) as a percentage of total sales. If librarying ended after 1999, when overall sales began to fall (and when Napster began), the share of sales consisting of older music should have fallen. But the share of old music was higher in every year after 1999 than it was in 1999, and the highest values were in the last year of my data, 2006. This is certainly not definitive, but it is the only test I have seen of this hypothesis, and it contradicts the claim that the end of librarying was an important component of the sales decline.
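The librarying test described above amounts to a simple share calculation. A toy sketch, with invented sales figures rather than Liebowitz's actual data:

```python
# A toy version of the librarying test: track the share of sales coming
# from older ("catalog") albums by year. All figures below are invented
# for illustration; the real test used data through 2006.
sales = {
    # year: (catalog_sales, total_sales), in hypothetical units
    1999: (34.0, 100.0),
    2002: (38.0, 95.0),
    2006: (41.0, 88.0),
}

catalog_share = {yr: cat / tot for yr, (cat, tot) in sales.items()}

# If librarying ended after 1999, the catalog share should have fallen;
# in this illustrative series it rises instead, matching the pattern
# Liebowitz reports.
for yr in sorted(catalog_share):
    print(yr, round(catalog_share[yr], 2))
```

The point of the test is that a rising catalog share after 1999 is hard to square with the claim that a one-time burst of catalog repurchases ended and dragged sales down.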

      Anon1 is correct that my results indicated that piracy was the primary cause of the very steep decline in record sales. He is incorrect in implying that my results are not in the mainstream. I have a literature review in the Journal of Cultural Economics (2016) showing that the average result from 12 studies estimating the impact of music piracy was that the entire decline in sales was due to piracy [Anon2 references an early version of this article]. Note that it is possible that piracy might have caused a decline when an increase might have been the alternative without piracy, particularly since sales had risen for the previous two decades. In that case, the sales decline due to piracy would be greater than 100%. Peitz and Waelbroeck (2004), Hong (2007), Zentner (2005), Blackburn (2004), and Rob and Waldfogel’s (2006) instrumented results all had values greater than 100% (there were also many studies with values less than 100%). The derivations of the values are found in my paper. Note that values above 100% DO NOT imply that all pirated files substitute for sales. That measure, a displacement rate indicating what portion of pirated files replace sales, measures something very different and cannot be used, by itself, to predict the size of the decline due to piracy. This is all explained in the Journal of Cultural Economics article.

      Finally, BD’s comment is correct to say that my 2006 JLE paper says almost nothing about OS except to reference their paper.

    • Rahul says:

      >>> but part of the problem is that Stan has a well-known….reputation in the community, especially among IP scholars.<<<

      Damn! Took me a while to realize this was the human "Stan" and not the software "Stan".

      Was wondering why a programming language got a bad rap among economists.

    • Rahul says:


      Isn’t there a potential conflict of interest here? You head the Center for the Analysis of Property Rights and Innovation (CAPRI) at the Univ. of Texas, right?

      I’m assuming that a significant share of your funding comes from corporate donors who have a commercial interest in concluding that piracy is evil? Please correct me if I am getting the facts wrong.

      • Dzhaughn says:

        Perhaps you can be a bit more clear about what you are suggesting here, Rahul.

        I would imagine someone like a record company would be delighted to find out that “piracy” is *not* significantly hurting record sales, and so they could reduce the cost of enforcing copyrights. The reality is that they profit best when they know exactly how much it costs them.

        • Rahul says:

          I think you are wrong. Record companies strongly want to portray piracy as this big menace that hurts them badly. This isn’t about “knowing” but about having evidence to show courts, legislators and people, when the Record Industry wants to lobby for draconian anti-piracy measures.

          For proof, just look at the trade organization representing the recording industry, the Recording Industry Association of America (RIAA). They lose no opportunity to lobby to portray piracy as a huge evil causing millions of dollars of losses to the industry.

        • Rahul says:

          In simpler words:

          Here’s a guy who’s paid by the recording industry, and he’s telling us that piracy hurts their record sales.

          Count me skeptical.

          • Shravan says:

            Doesn’t the pharma industry employ statisticians? Do they have a conflict of interest when they evaluate internal pharma data? I would imagine that at least some pharma operations would allow complete independence between the statistical analysis wing and the drug development wing? I hope so.

            It could be that Stan is independent in that sense even if employed by them.

            • Rahul says:

              But isn’t that the whole point why we take pains to differentiate analysis by a disinterested third party versus someone who has a dog in the fight?

              That’s what conflict-of-interest statements are for, right? Hasn’t history shown a lot of examples of how economic interest can bias analysis?

              The industry has an unsavory record of trying to hijack the story line by swaying university researchers with funding dollars or the threat of cutting them. Anti-piracy lobbying budgets are in the millions of dollars.

              All I’m saying is that through this blog post Stan Liebowitz’s conflict of interest was not clear. Even in his papers this conflict is not always obvious.

              • Shravan says:

                I’m with you on your broader point, I’m just saying he could be independent. He seems to be getting his salary from UTexas. If he’s funded by the industry, he could feel the pressure to support them to keep getting funding. But in that sense every researcher has a conflict of interest; they need to tell a good story to keep getting funding.

                Maybe he should publish with a scientific opponent as co-author to balance out the bias.

                PS Wow, his home page is totally dysfunctional. Makes me think of the 1990s, when we were writing our first home pages by hand in raw html.

              • Shravan says:

                I agree, Rahul, that he should have declared any conflict of interest in the article he has posted. Hopefully he will when he publishes it; otherwise, I suppose, he will get into trouble with his ethics board.

                I wonder if I should also add a conflict of interest statement to my papers to the effect: “I declare a conflict of interest in wanting to find results consistent with my prior beliefs in order to continue getting funding and for enhancing my fame and salary, which would result in improved access to wine, women, and song.” Because really, jokes aside, the conflict of interest is there.

              • Rahul says:


                You only declare *atypical* conflicts. That’s the point. Most economists aren’t invested in recording industry funding.

                You want to declare material conflicts that your audience may not know about but would reasonably want to know.

                Your conflict is run-of-the-mill. Mundane. Most everyone faces that conflict. No point in declaring it. It’d be stating the obvious.

              • Shravan says:

                Rahul: “Your conflict is run-of-the-mill. Mundane. Most everyone faces that conflict. No point in declaring it. It’d be stating the obvious.”

                Mundane, sure. But these mundane conflicts lead to a near-maniacal defence of one’s position, to the point that one gets embroiled in ridiculously detailed arguments about minutiae, not unlike the one unfolding in Stan’s papers. It would probably do the author good to write out each time how much of this mundane conflict of interest is driving his/her work.

          • Michael says:

            Are you also skeptical of Oberholzer-Gee and Strumpf’s results, since it appears they got their data from a company involved with piracy? Shouldn’t you read the papers and make an informed judgement instead of casting cheap aspersions?

      • Martha (Smith) says:

        The University of Texas at Dallas.

  6. Michael says:

    To be fair to IEP, nothing they said indicated that the response wouldn’t be allowed to reference the OS 2007 paper, just that the response had to also add something novel in its own right and not exclusively criticize the OS 2007 paper. I still don’t think it’s a wise idea, but it’s not as Stan represents it.

    • Stan Liebowitz says:

      I think Michael should familiarize himself with the details before offering an opinion. My submission to IEP was focused on claims made by OS in their December 2016 IEP article, not the 2007 article. Their 2016 article had criticized my Econ Journal Watch article from September 2016. I think it would help to read these articles before forming opinions, which is why I provided the links.
      Normally, you are allowed to defend your work when someone criticizes it. In conformance with these norms, IEP seemed to be inviting me to respond to OS’s critique of my EJW paper. You can check for yourself whether you disagree with me that OS’s claims are chock full of evasions and errors, if you read the papers.
      IEP now says that it will not publish my rejoinder and does not seem willing to let me defend my work against OS’s criticisms. That would be OK if they had found some errors in my rejoinder, but they did not. Instead, they are now claiming that they want new work on the overall subject, which would be fine for a normal submission, but not for a response to a critique of my work that they just published. I think most observers would agree that an invited response to someone’s criticism of your work means you get to focus on defending your work against those criticisms.

  7. Jack PQ says:

    Re: “gangster behavior”, there is a trend, which seems most associated with (some) Elsevier journals, whereby editors pressure authors to cite papers from their own journal. In fact this has led the editors of the top 3 finance journals to write a joint editorial denouncing what they call “coercive citations”.

    I figure the problem would be solved if impact factors removed own-journal citations. Some journals report those corrected impact factors and indeed it makes a difference.
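The correction Jack PQ describes is a simple subtraction before the usual ratio. A toy sketch, with an illustrative function name and invented citation counts:

```python
# Toy two-year impact factor, with and without own-journal citations.
# The function name and all counts are invented for illustration.
def impact_factor(cites_to_recent, own_journal_cites, citable_items,
                  exclude_own=False):
    """Citations in year Y to items published in Y-1 and Y-2, per citable item."""
    cites = cites_to_recent - (own_journal_cites if exclude_own else 0)
    return cites / citable_items

raw = impact_factor(300, 90, 100)                          # includes own-journal cites
corrected = impact_factor(300, 90, 100, exclude_own=True)  # own-journal cites removed
print(raw, corrected)
```

In this invented example the correction drops the figure from 3.0 to 2.1, which shows why journals that report both numbers can look noticeably different under the corrected measure.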

    • Rahul says:

      Rather than fixing impact-factor calculations, just stop using impact factors!

      How many abuses will we keep stopping! I’ve heard of reviewers gently suggest one of their own papers. Or one from a collaborating group.

      In fact, things are so bad that I’ve known PIs to tell grad students to add citations of people they suspect will be called upon to review. In a small, specialized field it’s not too hard to guess who it will be. And worse, sometimes the authors are themselves asked to suggest potential reviewers.

      Stopping impact factor gaming is a futile exercise. Just stop paying so much attention to impact factors in the first place!

      • Christian Hennig says:

        “How many abuses will we keep stopping! I’ve heard of reviewers gently suggest one of their own papers. Or one from a collaborating group.”
        I have done this as a reviewer and I don’t think that it’s abuse. As an Associate Editor I’ll ask reviewers who are experts, and mostly they have written something related. Reviewers who have written something relevant but are not cited are often particularly suitable, because they have a view on the topic that differs from the view of the author, and I’d expect them to highlight their own work in their review if they think the author should know it.
        Obviously there *is* abuse, too, in cases where the reviewer suggests their own unrelated work or asks the author to cite five papers where one is enough to make the point.

        By the way, I have also seen Elsevier editors asking authors to look for references in that same journal in particular, and that is indeed abuse.

        • Keith O'Rourke says:

          Christian: Good point. I often feel awkward referencing my own papers in a review, but often they are the only things _I know of_ that directly address the issue I am raising. Sometimes I ask the editor to decide whether they should be left in.

          So a rule against citing oneself in a review would cause some harm.

          Alternative facts and alternative scholarship everywhere – and no obvious way to discern/undo/curtail…
