I’ve been mistaken for a chatbot

… Or not, depending on what language is allowed.

At the start of the year I mentioned that I am on a bad roll with AI just now, and that roll began in late November when I received reviews back on a paper. One reviewer sent in a 150-word review saying it was written by chatGPT. The editor echoed, “One reviewer asserted that the work was created with ChatGPT. I don’t know if this is the case, but I did find the writing style unusual ….” What exactly was unusual was not explained.

That was November 20th. By November 22nd my computer shows a newly created file named ‘tryingtoproveIamnotchatbot,’ which is just a txt file where I pasted in the GitHub commits showing progress on the paper. I figured maybe this would prove to the editors that I did not submit any work by chatGPT.

I didn’t. There are many reasons for this. One is that I don’t think I should. Further, I suspect chatGPT is not so good at this (rather specific) subject, and between me and my author team, I actually thought we were pretty good at it. And I had met with each of the authors to build the paper, its thesis, data and figures. We had a cool new meta-analysis of rootstock x scion experiments and a number of interesting points. Some of the points I might even call exciting, though I am biased. But, no matter, the paper was the product of lots of work and I was initially embarrassed, then gutted, by the reviews.

Once I was less embarrassed, I started talking timidly about it. I called Andrew. I told folks in my lab. I got some fun replies. Undergrads in my lab (and others later) thought the review itself may have been written by chatGPT. Someone suggested I rewrite the paper with chatGPT and resubmit. Another suggested I just write back one line: I’m Bing.

What I took away from this was myriad, but I came up with a couple of next steps. One was deciding that this was not a great peer review process and that I should reach out to the editor (and, as one co-author suggested, cc the editorial board). And another was to not be so mortified that I wouldn’t talk about it.

What I took away from these steps was two things:

1) chatGPT could now control my language.

I connected with a senior editor on the journal. No one is in a good position here, and the editor and reviewers are volunteering their time in a rapidly changing situation. I feel for them and for me and my co-authors. The editor and I tried to bridge our perspectives. It seems he could not have imagined that I or my co-authors would be so offended. And I could not have imagined that the journal already had a policy of allowing manuscripts to use chatGPT, as long as it was clearly stated.

I was also given some language changes to consider, so I might sound less like chatGPT to reviewers. These included some phrases I wrote in the manuscript (e.g. `the tyranny of terroir’). Huh. So where does that end? Say I start writing so that I sound less ‘like chatGPT’ to the editor and others (and I never figured out what that means); then chatGPT digests that, and then what? I adapt again? Do I eventually come back around to those phrases once they have rinsed out of the large language model?

2) Editors are shaping the language around chatGPT.

Motivated by a co-author’s suggestion, I wrote a short reflection, which recently came out in a careers column. I much appreciate the journal recognizing this as an important topic, and that they have editorial guidelines to follow for clear and consistent writing. But I was surprised by the subeditors’ concerns about my language. (I had no idea my language was such a problem!)

This problem was that I wrote: I’ve been mistaken for a chatbot (and similar language). The argument was that I had not been mistaken — my writing had been. The debate that ensued was fascinating. If I had been in a chatroom and this happened, then I could write `I’ve been mistaken for a chatbot’ but since my co-authors and I wrote this up and submitted it to a journal, it was not part of our identities. So I was over-reaching in my complaint. I started to wonder: if I could not say ‘I was mistaken for an AI bot’ — why does the chatbot get ‘to write’? I went down an existential hole, from which I have not fully recovered.

And since then I am still mostly existing there. On the upbeat side, writing the reflection was cathartic, and the back and forth with the editors — who I know are just trying to do their jobs too — gave me more perspectives and thoughts, however muddled. And my partner recently said to me, “perhaps one day it will be seen as a compliment to be mistaken for a chatbot, just not today!”

Also, since I don’t know an archive that takes such things, I will paste the original unedited version below.

I have just been accused of scientific fraud. It’s not data fraud (which, I guess, is a relief because my lab works hard at data transparency, data sharing and reproducibility). What I have just been accused of is writing fraud. This hurts, because—like many people—I find writing a paper a somewhat painful process.

Like some people, I comfort myself by reading books on how to write—both to be comforted by how much the authors of such books stress that writing is generally slow and difficult, and to find ways to improve my writing. My current writing strategy involves willing myself to write, multiple outlines, then a first draft, followed by much revising. I try to force this approach on my students, even though I know it is not easy, because I think it’s important we try to communicate well.

Imagine my surprise then when I received reviews back that declared a recently submitted paper of mine a chatGPT creation. One reviewer wrote that it was `obviously Chat GPT’ and the handling editor vaguely agreed, saying that they found `the writing style unusual.’ Surprise was just one emotion I had; so were shock, dismay and a flood of confusion and alarm. Given how much work goes into writing a paper, it was quite a hit to be accused of being a chatbot—especially in short order without any evidence, and given the efforts that accompany the writing of almost all my manuscripts.

I hadn’t written a word of the manuscript with chatGPT and I rapidly tried to think through how to prove my case. I could show my commits on GitHub (with commit messages including `finally writing!’ and `Another 25 mins of writing progress!’ that I never thought I would share), I could try to figure out how to compare the writing style of my pre-chatGPT papers on this topic to the current submission, maybe I could ask chatGPT if it thought it wrote the paper…. But then I realized I would be spending my time trying to prove I am not a chatbot, which seemed a bad outcome to the whole situation. Eventually, like all mature adults, I decided what I most wanted to do was pick up my ball (manuscript) and march off the playground in a small fury. How dare they?

Before I did this, I decided to get some perspectives from others—researchers who work on data fraud, co-authors on the paper, and colleagues—and I found most agreed with my alarm. One put it most succinctly to me: `All scientific criticism is admissible, but this is a different matter.’

I realized these reviews captured both something inherently broken about the peer review process and—more importantly to me—something about how AI could corrupt science without even trying. We’re paranoid about AI taking over us weak humans and we’re trying to put in structures so it doesn’t. But we’re also trying to develop AI so it helps where it should, and maybe that will be writing parts of papers. Here, chatGPT was not part of my work and yet it had prejudiced the whole process simply by its existential presence in the world. I was at once annoyed at being mistaken for a chatbot and horrified that reviewers and editors were not more outraged at the idea that someone had submitted AI-generated text.

So much of science is built on trust and faith in the scientific ethics and integrity of our colleagues. We mostly trust others did not fabricate their data, and I trust people do not (yet) write their papers or grants using large language models without telling me. I wouldn’t accuse someone of data fraud or p-hacking without some evidence, but a reviewer felt it was easy enough to accuse me of writing fraud. Indeed, the reviewer wrote, `It is obviously [a] Chat GPT creation, there is nothing wrong using help ….’ So it seems, perhaps, that they did not see this as a harsh accusation, and the editor thought nothing of passing it along and echoing it, but they had effectively accused me of lying and fraud in deliberately presenting AI-generated text as my own. They also felt confident that they could discern my writing from AI—but they couldn’t.

We need to be able to call out fraud and misconduct in science. Currently, the costs to the people who call out data fraud seem too high to me, and the consequences for being caught too low (people should lose tenure for egregious data fraud in my book). But I am worried about a world in which a reviewer can casually declare my work AI-generated, and the editors and journal simply shuffle the review along and invite a resubmission if I so choose. It suggests not only a world in which the reviewers and editors have no faith in the scientific integrity of submitting authors—me—but also an acceptance of a world where ethics are negotiable. Such a world seems easy for chatGPT to corrupt without even trying—unless we raise our standards.

Side note: Don’t forget to submit your entry to the International Cherry Blossom Prediction Competition!

30 thoughts on “I’ve been mistaken for a chatbot”

  1. I probably haven’t read enough content generated by ChatGPT to tell it apart from writing by a warm body. But I have read plenty of papers where the style is a bit unusual, stilted, or filled with jargon. I’m not sure my reaction if someone sent me a paper with ‘unusual’ writing style would be to assume the use of ChatGPT. Is its use so widespread already that this is a first-order question to ask about a paper during the review process?

    Also, I like “tyranny of terroir”!

  2. A variation on the theme: I was unable to submit a paper to a particular journal. The automatic submission site required a number of things, such as statements about potential conflicts of interest, allocation of authors’ responsibilities for the paper, etc. I submitted all this but received a message from the publisher that I was missing some things. One, in particular, was the allocation of responsibilities. Since I was the sole author, I didn’t think it was necessary, but I then included a statement that I was responsible for 100% of the content. I received notification from the editor that I was still missing that statement. So, I included the statement as a separate file (as opposed to combining all the materials into one document). Same thing happened. So, I contacted the publisher again about my problems submitting. I received a link to the submission instructions URL, which I had already used, of course. A couple of other email exchanges ensued with no progress. Then I finally realized that I was communicating with a chatbot, not with a real person, although the publisher had a name of a person that I was supposedly communicating with.

    Then I emailed the editors of the journal personally, describing the issues I was having trying to submit my paper. I received no response from either of the joint editors. Then I wrote to the Dean of the school where both editors were faculty members – saying editing is hard work and should be recognized as scholarly activity, but that editors should also be held responsible for the quality of their work. In this case, I said they were not collegial by not even responding to my emails. I then received nothing from that Dean (I didn’t expect to hear anything, but it felt good to write that email anyway).

    We can no longer tell who is writing, who is reviewing, who is editing, or who is commenting.

    • Oh that is interesting.

      > Then I finally realized that I was communicating with a chatbot

      Where I work, different teams maintain ask channels in our Slack workspace. It’s a trick to file your questions in a way that actually gets you help (“if I say X, I’m definitely getting auto-routed somewhere else, so I will say Y”).

      I remember getting an automated message recently “soon we’ll have resources so you can self-service your request!” which makes me very much not want to self-service my request. Maybe the point of secret chatbots is they can help people self-serve without telling them they’re self-serving, but bleh.

    • “A variation on the theme: I was unable to submit a paper to a particular journal.”

      Tell me about it. Springer seems to have recently become particularly notorious for this practice. While their intention seems legitimate and acceptable (and we should support them, as these practices reduce fraud, abuse, and other misconduct), the enforcement of their own policies is outsourced to low-paid non-academic workers who-knows-where. Thus, I had to drop several submissions with them, as I either couldn’t make contact with these workers or, when I did, couldn’t understand their copy-pasted “explanations” of what was wrong with a particular submission, even after multiple back-and-forth messages.

      “We can no longer tell who is writing, who is reviewing, who is editing, or who is commenting.”

      I think we are not there yet, fortunately.

      As for Lizzie’s comment about AI-generated reviews: any decent editor would (or should) ban such a reviewer, I think. Maybe we need an internal blacklist for these people, or maybe major publishers and major outlets already have them, who knows.

      • “Then I finally realized that I was communicating with a chatbot, not with a real person, although the publisher had a name of a person that I was supposedly communicating with.”

        Note to self: I do believe these workers were real people, not chatbots. But as it stands today, things may have indeed changed. That brings up a further fundamental issue: if major, highly profitable publishers assign real identities, including real names and email addresses, to publishing-related chatbots, we are approaching fraud par excellence, and I am not talking about scientific fraud here.

        • Just a clarification: I believe it was a real person at the journal that I was corresponding with. But I believe they used a chatbot to communicate with me. So, I’m not attributing the behavior to the journal, except that they are willing to allow the practice. If you’ve had similar experiences with this publisher, then it may be becoming a general practice.

          I was even more disturbed by not receiving a response from either of the human editors of the journal. Maybe I should retract some of my complaint: at least chatbots respond.

  3. If it’s any consolation, I have consistently gotten complaints about my writing in peer review throughout my career. Not accusations of ChatGPT, but comments along the lines that my writing is insufferable, in maybe something like 1 out of 10 reviews. (I have wanted on occasion to go back and code the peer reviews I received for the proportion of comments that are arbitrary, though I cannot stomach reading them after just a few. I can remember many a sleepless night when I was on the tenure track.)

    Much scientific writing advice, I think, is vapid to be honest: https://andrewpwheeler.com/2020/12/16/lit-reviews-are-almost-functionally-worthless/. Your writing here on the blog is fine, Lizzie; I highly doubt you need to worry about changing your writing to accommodate capricious reviewers.

    • Andy:

      Regarding your discussion of literature reviews: I have a slightly different take than yours. I agree that literature reviews are typically a waste of time, but I don’t think they’re only there to pacify reviewers. I’ve found that student-written research articles are often heavy on literature review, and review in general, and light on what the student actually did. The way I put it is that students typically write the article that they wish they’d been given to read at the start of their project. Part of this might be that students start off by reading textbooks, and so textbook-style writing is their baseline when they start writing their own original work.

      My own Ph.D. thesis had tons of new material but also lots of background; I too was writing the article or book that I wish I’d been given to read before doing the project. I like the thesis—I think it still reads well, and it’s full of cool ideas, including some that I still have not really followed up. But it’s ridiculously overwritten. In this case that was fine because it warmed me up for writing BDA, but that can’t be the general goal for student writing.

        • Raphael:

          Viewed as a combination research and review article, I think the writing in my thesis was just fine. It had lots of great examples and no unnecessary bits, no filler or anything like that. And it demonstrated my understanding of the material, which is one of the purposes of such a document. When I said it was overwritten, I just meant that it had a lot more in it than it needed to have. It could’ve been half the length, just with the results and without all the explanations. The explanations were helpful to me and I guess to readers, except that not so many people will read a Ph.D. thesis anyway. I guess what I’m saying is that, as a written document, the thesis was better than it needed to be. Something much shorter and direct would’ve done the job just as well, even if it wouldn’t have been as good at communicating the content to a reader.

        • Andrew –

          My experience in working with students on their writing has shown me that quite typically it is at the very end of the paper that they’ve really identified their critical and specific thesis.

          At that point, they should take that thesis and establish it towards the beginning of the paper and then basically rewrite the whole paper, eliminating about half (sometimes more) of what they included because it didn’t connect succinctly to the more specific thesis.

          But by that point they’re sick of the paper and there’s a deadline coming up (and the deadline problem is exacerbated because really what they need to do is put the paper down for two weeks and then come back to it with fresh eyes so they can see problems they didn’t see before because they were zoomed in too closely).

          I wonder if that may have applied to your thesis.

          The point for me is that all that material included was critical to the thinking process but not necessarily the final product and people tend to blend those two steps together.

  4. I feel like ChatGPT is becoming a new general-purpose insult about writing quality. A few years ago, a scornful reader would resort to the old “monkeys at a typewriter”, but now it’s trendy to say ChatGPT instead.

    • Adede:

      I don’t think of ChatGPT as an insult; it’s more of a baseline comparison. As discussed in that other recent thread, the papers by Erick Jones were striking in that they were much worse than what you’d expect to see from a chatbot.

  5. Lizzie –

    With no intent to offend, I found the writing in this post to be somewhat idiosyncratic; specifically, a little choppy in the flow. Just one opinion (and I’m not claiming to be a master writer by any means). And I don’t assume that a paper you submitted to a journal is written in a similar fashion as this post, particularly since you worked with others on the manuscript. But…if there is some crossover, I could see where someone might think the idiosyncratic writing style suggests a chatbot. Whereas 5 years ago they might have just thought the writing style was somewhat idiosyncratic and not given it much more thought, and instead focused on the content.

    • I was going to say something similar.

      Lizzie, this is a very interesting blog post. From the “tyranny of terroir” reference I assume the article you wrote is about https://statmodeling.stat.columbia.edu/2024/01/01/extinct-champagne-grapes-i-can-be-even-more-disappointed-in-the-news-media/ and I suspect I would enjoy your article and find it informative. I’m sorry your work was attributed to a chatbot and I’m sorry you were offended by that.

      Like Joshua, I hope I won’t be pouring salt on the wound when I say that I agree your writing is a bit ‘idiosyncratic’; that’s a good word for it.

      So far, so unhelpful. Suppose you want to write less idiosyncratically: what would you do? (And I’m not saying you should! See below!) What’s so idiosyncratic about it? Consider this paragraph:

      “This problem was that I wrote: I’ve been mistaken for a chatbot (and similar language). The argument was that I had not been mistaken — my writing had been. The debate that ensued was fascinating. If I had been in a chatroom and this happened, then I could write `I’ve been mistaken for a chatbot’ but since my co-authors and I wrote this up and submitted it to a journal, it was not part of our identities. So I was over-reaching in my complaint. I started to wonder: if I could not say ‘I was mistaken for an AI bot’ — why does the chatbot get ‘to write’? I went down an existential hole, from which I have not fully recovered.”

      Oh, actually let me select one sentence from that: “If I had been in a chatroom and this happened, then I could write `I’ve been mistaken for a chatbot’ but since my co-authors and I wrote this up and submitted it to a journal, it was not part of our identities.”

      I can’t figure out what that bit about ‘not part of our identities’ is referring to. I guess it’s a reference to the earlier point that you had not been mistaken [for a chatbot] but rather that your writing had been mistaken for a chatbot’s writing. But then I don’t understand the bit about ‘If I had been in a chatroom and this happened,’ as if things would have been different somehow. I don’t see how. The whole sentence seems…odd. Sorry, I can’t really be clearer than that. I can see how someone would see a sentence like that and say “this seems like something an old-style LLM would come up with, the words all follow each other fine but it doesn’t make sense as a sentence.”

      For what it’s worth, I found this sentence to be idiosyncratic too: “I’ve been mistaken for a chatbot (and similar language). The argument was that I had not been mistaken — my writing had been.” Most native English speakers who want to say “I had not been mistaken for a chatbot” wouldn’t say “I had not been mistaken” without the “for a chatbot” part, not even a single phrase later, because of the dual meaning of “mistaken”. I wouldn’t tell a friend “Some guy mistook me for his friend Frank, but I was mistaken.”

      That said, I strongly agree with you that you should not try to alter your style to sound ‘less like a chatbot.’ It might possibly be worth trying to alter it to be less idiosyncratic, though, which could have the same effect. I find that I have to devote a bit of extra effort to interpreting your writing…which isn’t necessarily a bad thing at all. Try reading Faulkner or David Foster Wallace, or even the person who posts here on rare occasion with all sorts of nutty music references and stuff: yeah, it takes extra effort to read it, but that makes it more rewarding, not less. If you decide to let you be you, that’s certainly fine with me. The world would be a less interesting place if we all wrote the same.

        • Just to be clear, I think it’s just fine also. But I think that it has an idiosyncratic style. That’s only an opinion. I’m not stating it as a fact. It’s more than likely that others might feel differently. Just a subjective observation.

    • For what it’s worth, I don’t think Lizzie’s writing in this post is at all like a chatbot. In fact, chatGPT is basically like the mathematical average of writing, so “idiosyncratic” is antithetical to writing generated from an LLM.

      I just think Lizzie has a distinct voice, and some scientists want or expect homogeneous standardization of writing. I’d much rather someone have a distinct voice in their writing than be like the indistinct mass of most scientific writing.

      Also, Joshua and Phil: Though I’m sure you intended to help, I think it’s in bad taste to critique someone’s writing after they expressed feelings in this kind of way in the blog post, imo. We can all learn from our mistakes, and it’s important to listen to criticism; that said, when the stakes are so low (e.g., no consequences regarding ethics or research integrity when we’re just talking about writing style) it’s probably best to leave criticism unsaid unless they ask for it.

      • Sean –

        > I think it’s in bad taste to critique someone’s writing after they expressed feelings in this kind of way in the blog post, imo.

        I was sensitive to that issue. And I was worried about writing my comment for exactly that reason.

        But I want to make it clear that I wasn’t offering a critique. That’s why I chose the word idiosyncratic. As Phil indicated, there’s nothing wrong with idiosyncratic. He indicated that he had to make an effort to follow the narrative thread and I felt the same way. Phil said he doesn’t think that’s necessarily a bad thing and I agree. In fact I think people often have a hard time following my writing because I tend to have dependent and subordinate and embedded clauses nested in amongst each other. But that’s how I think, and I have mixed feelings about my writing being that way, because it requires more effort to read, but I kind of like that; hopefully it means people think more about what I’m trying to convey. I said I thought the writing in this post was choppy. But I like that to the extent it makes me work harder at understanding, and I think it’s more creative.

        Such features in writing matter differently in various communicative contexts. And a more idiosyncratic style is certainly appropriate for a blog post about feelings. In an academic setting it may or may not be a good thing. It’s hard to say without more information. I think that people wrongly pigeonhole academic writing within sometimes arcane or useless standards.

        But in a sense none of that is directly related to my main point – which was not at all a critique but a suggestion to make it easier to understand why a reviewer thought that Lizzie’s writing style resembled chatbot output.

        What might that even mean? I’ve seen chatbot writing that I thought was remarkably effective. I’ve also seen chatbot writing that I thought seemed more obviously chatbotish. So I think it’s not at all clear what that means, but I thought it might be helpful if I offered an observation of what it might mean, in a way I thought could be constructive. I certainly didn’t intend to be hurtful and thought it might be worth the risk that Lizzie would see my observation in the light it was put forth.

      • Hi Sean, Thanks for your reply. I much appreciated it.

        I like my writing style too, and I am sure it could be better. I am always working on that, but I didn’t write this post to get feedback on my style. I wrote it to have a conversation about peer review and our language around chatGPT. I don’t feel like that’s exactly happening from these extensive comments on my ‘idiosyncratic style.’ But perhaps it is happening there … it’s just not as useful/insightful as I expected.

        • I’m sorry my comment was unhelpful.

          Joshua, thanks for speaking on my behalf. Like you, I thought it relevant to address the question of how a referee could suspect the writer of the paper was not a person. In my case, I thought a specific example would be helpful. Evidently not.

          Sorry.

  6. > It seems he could not have imagined that I or my co-authors would be so offended.

    That’s a wild story. I like how the norms are so different between people.

    > Like some people, I comfort myself by reading books on how to write

    Recommendation? I was faced with documenting a legacy system recently and got good mileage out of how Bob wrote pseudocode in the post “Simple pseudocode for transformer decoding a la GPT” (on this site). Just hammered away for a couple of days and it was done.

  7. Lizzie – I am very sorry that you were falsely accused. I think anyone who has ever been falsely accused of something significant can appreciate your feelings. It also really underlines the philosophy of innocent until proven guilty, and it should remind readers of this blog that while it is easy to point out problems with others’ work (often for very legitimate reasons), doing so should be sobering, not done frivolously, and backed by good evidence.

    I agree with the points that you made in the italicized piece above.

    I would also point out that this review in no way means that your writing actually sounds like it came from a chatbot, other than to the couple of reviewers who said so! I have had many reviews that consisted of reviewers making quite fundamental statistical errors, which I had to patiently respond to. It’s irritating that they falsely accused you, but I actually do not think that is necessarily a reason to be concerned about your writing. It is quite possible that no one else would think the same. In addition, I work with many non-native English speaking PIs who need some help with editing. I would be curious if a non-native speaker reviewed the paper and thought it was chat GPT for some reason.

    • jd raised an issue that I had never thought of, namely, non-native English speakers who are acting as AI gatekeepers:

      “I would be curious if a non-native speaker reviewed the paper and thought it was chat GPT for some reason.”

      While it is true that local American English has deteriorated markedly–just ask anyone my age–it is quite a turnabout to have gatekeepers who learned the language as a byproduct of a decent education elsewhere.

  8. This helpful info is from today’s Washington Post:

    https://www.washingtonpost.com/technology/2023/08/14/prove-false-positive-ai-detection-turnitin-gptzero/

    “What to do when you’re accused of AI cheating
    AI detectors like Turnitin and GPTZero suffer from false positives that can accuse innocent students of cheating. Here’s the advice of academics, AI scientists and students on how to deal with it.”

    “Stay calm, and let the facts help you.”
    ————————————
    This may not help at all, but it occurs to me that this AI stuff tends to be in English so I suggest that inserting a few appropriate phrases in Danish and Marathi ought to capture attention and indicate uniqueness.

    den hurtige brune ræv sprang over den dovne hund

    Druta tapakirī kōl’hyānē āḷaśī kutryāvara uḍī māralī

    If Indo-European is not exotic enough, try Chinese:

    Mǐnjié de zōngsè húlí tiàoguòle nà zhī lǎn gǒu

    • An insightful column! From the column:

      “I wouldn’t accuse someone of data fraud or statistical manipulation without evidence, but a reviewer apparently felt no such qualms when accusing me. Perhaps they didn’t intend this to be a harsh accusation, and the editor thought nothing of passing along and echoing their comments — but they had effectively accused me of lying by deliberately presenting AI-generated text as my own.”

      In a sense, the insight is old news, although AI may have made it worse. For instance, I was once rejected on a submission with four positive (or good enough) reviews but one that accused me (without evidence, as is typical in these cases) of blatant plagiarism. Who knows what it is with such people? In my case and Lizzie’s, it might even be a “competitor” trying to intentionally sabotage others’ work.

      And then I’ve had an idea in a rejected manuscript stolen by a group in a faraway country who, while doing a good enough job, “replicated” the idea, including the methodology for it, almost 95%, only about two weeks after my rejected submission. Besides the OA and open science benefits, that’s why I encourage people to deposit to pre-print servers beforehand, although even that won’t stop abuse, as precedence, related work, and existing literature do not matter for some people who are in the citation game (or who are forced to be in it due to their country’s policies, etc.).

      But I think we all have similar stories to share.

  9. Lizzie,
    For what it’s worth, I found your writing, both the content and the style, just fine. You or someone upthread commented that having one’s style mistaken for ChatGPT’s may become a compliment in the future. I think it’s already true. In my limited experience with ChatGPT, I have found that the bot comes up with a text that flows quite nicely. The problem was the content. The bot made stuff up, exaggerated the points I made in the prompt – and in general poorly reflected the nuances of my ideas. I know that the art of using ChatGPT is to find a good prompt, but I ain’t got no time for that.

    I am perturbed by the fact that, in addition to giving no evidence for why they thought it was ChatGPT, the reviewer apparently made no criticism of the content. Isn’t the content the most important thing in scientific writing? If ChatGPT helps you to express (your) interesting ideas in a way that “flows nicely”, why not let people use it?
