Generative AI is rapidly lowering the cost of producing academic research. Systems already exist that can execute large portions of the research pipeline: collecting data, running statistical analysis, summarizing related literature, and drafting a manuscript. For many standard empirical projects, the technical barriers that once limited output are diminishing.

This development will likely produce a substantial increase in the number of papers being written and submitted. Some of these papers will be wrong. That is not new; the academic literature regularly contains flawed analyses, incorrect interpretations, and results that fail to replicate. What may change is the scale.

At the same time, some of these papers will be correct and useful. Some may even improve because of AI assistance. Human researchers, no matter how skilled, routinely make coding mistakes and other errors. Automated systems can reduce certain classes of mistakes, for example by checking code more systematically than individuals typically do.

AI may also lower the cost of evaluating and abandoning weak research ideas early. Researchers can quickly explore literatures, prototype empirical strategies, generate preliminary code, and test whether an idea is viable. This makes it easier to discard unpromising projects before substantial time is invested. In the past, researchers sometimes pushed marginal projects forward partly because so much work had already gone into them. If early-stage experimentation becomes cheaper, more weak projects may be abandoned earlier, potentially raising the average quality of work that ultimately reaches publication.

Still, if generative AI enables the production of vastly more papers, who will read them? Academic publishing (and publishing in general!) ultimately exists for human consumers: researchers who decide which ideas are worth engaging with and building upon. If a system produces more output than can reasonably be consumed, the excess becomes wasteful.

In practice, the system will likely adapt. Journals can respond by increasing rejection rates or introducing higher submission fees to discourage low-value submissions. Screening processes may become more automated as well. The same generative AI tools that help produce papers can also be used to filter them, identifying methodological weaknesses, statistical errors, or derivative contributions before human reviewers ever see the manuscript.

This raises a broader question about the role of journals themselves. If AI systems become capable of evaluating papers, do we still need journals as intermediaries?

Even in a world of automated review, I think something resembling journals will probably persist. If papers are evaluated by algorithms rather than editorial boards, the key differentiator will remain the credibility of the evaluation system. Few people read or cite articles published in predatory journals because we understand that they publish very low-quality research that hasn’t undergone credible peer review.

There’s no reason to expect AI-based systems to be structured differently. There will likely be low-quality systems with minimal standards—analogous to today’s predatory journals—that label many papers as excellent with little scrutiny. Authors will also attempt to game whatever review algorithms become influential. In the short and medium term, automated systems may actually be easier to manipulate than human reviewers, particularly if their evaluation criteria become widely known.

For these reasons, the institutional structures of publishing may evolve but are unlikely to disappear entirely. Instead of journals defined by editorial boards and human referees, the future may include platforms defined by the credibility of their evaluation algorithms and their track record in identifying important work.

Generative AI is therefore unlikely to eliminate academic publishing. Even if reviewing becomes fully automated—which I think we’re still years away from—researchers will care about which platforms produce the most accurate and reliable assessments. Reputation will still matter.
