On the use of generative AI in academic publishing

February 23, 2026

The Privacy Enhancing Technologies Symposium is considering an update to its policy regarding the use of (generative) AI. I have some problems with it.

The use of tools is common in science. But the use of generative AI as a tool is different in that it can also be used to actually write (parts of) an academic paper. And apart from hallucinations in the bibliography, it may be hard to reliably detect this. When reviewing a paper I sometimes have a gut feeling it is (partly) AI generated, but I have no way to be sure and prove this.

Similarly, I could use generative AI to write a review for a paper, to alleviate this sometimes tedious task. The use of generative AI is especially tempting for reviewing: the publishing system is already crumbling under an increasing load of paper submissions that have to be reviewed thoroughly and fairly. And when reviewing a paper that is not exactly within my area of expertise (which is often the case), generative AI could perhaps help in comparing the submission to the state of the art.

The proposal for an AI policy does not really solve these problems, but it aims to make explicit what we as a community expect from authors and reviewers alike. The problem is that, as it stands, the policy disadvantages reviewers to the point that the whole academic publishing system will collapse.

Why do I believe this is the case?

First, reviewers are not allowed to

upload submissions or any parts of them to any third-party service, be it AI-related or not.

Moreover, the policy requires that

The substance of reviews must originate with the reviewers themselves, not (local) AI tools or sub-reviewers.

In other words, the local use of AI tools is severely restricted for reviewers.

In contrast, authors are permitted to use (third-party) AI tools, provided they disclose how they used them. There is no requirement that the substance of the paper must originate from the authors themselves; they are solely responsible for the output.

This means that the policy makes it easier for authors to write even more papers, while not giving reviewers any way to make their task easier. In fact, a convincingly written bullshit paper is easy to write with the help of generative AI. It is much harder to review and reject though (even if it is not written with the help of generative AI). It takes much more time to find the flaws in the argument, and it is much harder to convince the other reviewers that this seemingly nice paper is actually not that good.

The policy permits the use of generative AI without restrictions. When followed to the letter, a paper written entirely by generative AI, that clearly discloses this, and where the authors take full responsibility for the accuracy, originality, and integrity of the submitted paper, appears to be acceptable. Clearly that’s not intended?

The Association for Computing Machinery’s Criteria for Authorship at least require that authors

have made substantial intellectual contributions to some components of the original Work described in the manuscript, such as contributing to the conception, design, and analysis of the study reported on in the Work and participating in the drafting and/or revision of the manuscript.

(But perhaps some people already consider heavy prompt engineering equivalent to ‘conception, design, and analysis’…)

I am particularly concerned about the use of generative AI to write the so-called state of the art section, where authors explain how their results compare to and improve on previously known scientific results. I will happily admit that for me writing this section is the least enjoyable part of the writing process. As the number of published papers appears to be growing exponentially, even experts in the field will have trouble keeping track. And there is a page limit to stay within, so if you want enough space to explain and sell your own results, this section needs to be short yet comprehensive.

Yet the point of writing the state of the art section is not just to convince the reviewers that you know what you are talking about, or to tell the reader where to look for previous papers that your research builds upon. The point is that in order to write it, you must have actually read and engaged with that research. That’s what doing research is all about! If you use generative AI, you can skip this essential step.

So my question is: if the authors declare the state of the art was written with the help of generative AI, shouldn’t I as a reviewer be able to reject it?

At the very, very least, the policy should state that ‘the substance of a paper must originate with the authors themselves, not (local) AI tools’. But perhaps the guiding principle should be reversed. Instead of a general rule allowing the use of generative AI (provided certain requirements are met), a general rule forbidding the use of generative AI (except for a set of clearly described cases) should be put in place instead.

Such a rule is currently hard to enforce. (But so is the rule to be fully transparent about the use of generative AI.) Still, I feel we are giving up too easily on the idea that the act of writing itself is an essential part of thinking, and thus of making scientific progress.

As may be clear from the above, I have my reservations about the use of generative AI. But if there is one part of the scientific publishing quagmire that could benefit from generative AI - to deal with the tsunami of paper submissions - it is the reviewing process. To be clear, I am being (somewhat) cynical: reviewing scientific papers is a serious obligation that should be carried out fairly and honestly, and should not be outsourced to a system that we have little control over, whose training biases we do not know, and whose guardrails steer its output in ways we cannot see. Yet the policy allows exactly that for the writing of scientific papers.

It takes a true masochist to believe he can give automated typewriters on steroids to a gigantic mob of monkeys, and tame them armed with just a pair of good spectacles and a goose quill.
