Quantity over Quality: How Publication Activity Leads to Crisis

Participants in the 4th Fall into ML conference at HSE University held a discussion titled ‘Academia in Crisis: What Does the Future Hold?’ In particular, they examined why the number of scientific publications keeps growing, what the quality of these papers is, what should be expected of researchers, and what role artificial intelligence plays in preparing academic articles.
Alexey Naumov, Director for Basic Research at the HSE Faculty of Computer Science’s AI and Digital Science Institute, noted that in recent years the academic world has witnessed an ‘explosive growth in publication activity.’ In his words, ‘the growth over the past few years has been threefold. But there is a feeling that the share of genuine science in such publications—many of which now use artificial intelligence—is steadily shrinking. You open an article and see that it has been written using ChatGPT.’ He attributed these trends to established KPIs that push universities to produce ever more cited papers. ‘This raises the question: what should a modern university do?’ asked Alexey Naumov, setting the tone for the discussion. Another problem he highlighted is the rapid obsolescence of information in today’s world.
Vladimir Spokoiny, Academic Supervisor of the Laboratory for Theoretical Modelling in AI at the AI and Digital Science Institute (HSE), agreed that the surge in publications often raises the question of how to tell whether something is ‘real science or not.’ ‘We discover the laws of nature. But can ChatGPT discover the laws of nature? I doubt it. We position ourselves as people who strive to uncover the laws of nature. If your information quickly becomes outdated, it means you have not discovered those laws.’
According to Vladimir Spokoiny, when KPIs for publication output are in place, a student who delivers the required number of papers is deemed ‘just what is needed,’ regardless of their quality. He shared his own experience in recruitment, recalling cases where a student with publications gave a 15-minute presentation on their achievements—and that alone was considered sufficient to assess their scientific merit. ‘We should think about this carefully—and there is no need to reinvent the wheel. There is a scientific tradition that has been refined over decades, even centuries. It is this tradition that will help us survive in the age of information deluge,’ Vladimir Spokoiny concluded. ‘In any case, we will not be able to compete with China in terms of the number of publications—they will simply overwhelm us.’
‘The state needs independent expertise, but it also needs to evaluate the experts themselves—the people providing it. In other words, it is necessary to make sure that a researcher who presents their work has not done it merely for personal satisfaction, but has actually produced something genuinely useful,’ said Sergey Samsonov, Head of the International Laboratory of Stochastic Algorithms and High-Dimensional Inference at HSE University. ‘This has led to people writing more, and therein lies the answer to the question of how, roughly speaking, a million publications on artificial intelligence have appeared. This has set off a self-sustaining spiral of publications—we ourselves have fuelled this inflation.’
Gleb Gusev, Director of Sber AI Lab, shared his own method for judging how useful a newly published paper is. ‘A scientific article should teach you something. It must contain novelty. If I read a paper and can draw no conclusions from it, then it’s not a scientific paper.’ In his view, current artificial intelligence tools cannot truly ‘get to the heart of an article,’ although they are useful for creating summaries.
He also agreed that interviews can be highly revealing. There is a common practice, he explained, of asking candidates to name their three key scientific achievements. ‘You cannot name more than three. You might have hundreds of papers, but if they are all mediocre, you will lose out,’ he said, echoing the point made earlier by Vladimir Spokoiny.
‘It would be good if the race for publications did not turn into what happens in sport,’ said Alexander Korotin, Head of the Research Group at the Skoltech Applied AI Centre. ‘Athletes have to train constantly to show results, while in science we are racing to meet KPIs by the deadline. As a result of this race, figure skaters after the age of 18 are seen as used-up material—and we do not want that to happen to scientists.’
However, Ivan Stelmakh, Head of Mathematics and Computer Science Courses at Central University, disagreed. ‘If there were no competition or deadlines, researchers could spend five years writing a single paper,’ he argued. ‘Do you want progress, or for scientists simply to feel comfortable? Companies want to reap as many of the fruits as possible and to explore new fields in the future, which is why they support universities. I would dispute the claim that academia is in crisis. Companies are opening up new fields precisely thanks to what is being done in academia.’