
Communicable E37: 'Peer review is broken'

Duration: 01:07:02


[00:00:07] Annie: Hello and welcome back to Communicable. This is the podcast brought to you by CMI Communications, ESCMID's open-access journal covering infectious diseases and clinical microbiology.

My name is Annie Joseph. I'm a clinical microbiologist at Nottingham University Hospitals in the UK, and I'm an associate editor at CMI Comms. Joining me today, we have Angela Huttner, my editor-in-chief.

Angela: Hi everybody. Angela Huttner, infectious disease doctor at the Geneva University Hospitals in Switzerland.

[00:00:37] Angela: And happy to be here with Annie. Thank you to our listeners for joining us. So for this episode, we are tackling a topic that is relevant to all of us, not just in infectious diseases and microbiology, but in all areas of medicine and, more generally, in science.

[00:00:52] Angela: And that is the topic of peer review. But before we dive into this topic and meet our experts, I think it'd be a great time to update you listeners on where our journal, CMI Comms, is at this very moment. So we're about a year and a half into the journal's life.

[00:01:10] Angela: And we're really, really proud of our journal and our editors. Above all, we've managed to put out more issues than we were supposed to. Our publisher, Elsevier, is very happy with us. We're now on our, I think, seventh issue since 2024. We were meant to have only two in 2024 and three, I think, in 2025.

[00:01:29] Angela: And we're needing to do more because we're really receiving some good material. We're very proud of our first article series, Standing on the Shoulders of Imperfect Humans. There's a history there; you can look it up online, why we call it that. And we will soon launch our second article series, conceived by Josh Davis, one of our editors, which will be called The Person Behind the Name.

[00:01:48] Angela: And I'll keep it quite cryptic like that. We just had our editors' meeting in Rome, where we got a lot of our recent stats from our publisher, and we were very, very happy to hear that indeed our turnaround times are still really, really good. And again, this is thanks to these fantastic editors: 22 days until first decision, and that includes a first decision with peer review.

[00:02:11] Angela: So that first decision includes revision, et cetera. So happy to congratulate ourselves on doing a really good job. Indexing is ongoing. We are very happy that we were able to move to indexing quite quickly, because we have a lot more citations than we would've needed to start the process.

[00:02:30] Angela: So we got going on that early. We are already indexed in the DOAJ, which is the Directory of Open Access Journals, and we are now being evaluated for PubMed, Scopus, and Web of Science. So we're really excited.

[00:02:43] Angela: Anyway, thanks, Annie, for the opportunity to update people on our baby journal, which is maybe now in its toddler years.

[00:02:49] Angela: So now on to our topic for today's episode. In these current times of growing scientific skepticism, it's important that we take time to reflect on the current system of academic publishing as the primary way that scientific advances are communicated within and outside of our specialist fields.

[00:03:07] Annie: As journal editors, finding suitable reviewers to lend their services for free is never easy. As peer reviewers ourselves, finding the time to do this important role amongst all our other responsibilities is often really difficult as well. Yet progress in medicine and science relies upon this, or so we are all told. But does it? Why do we peer review, and is there another way, or a better way?

So to help us dive into the past, present, and, hopefully, the future of peer review, we have two experts in the field. They both have very different backgrounds and are both from outside of medicine, which I think might be a first for Communicable. I hope this will help us to see the issues around peer review through some different lenses.

[00:03:52] Annie: So firstly, I'm delighted to welcome Professor Melinda Baldwin to Communicable. Melinda is the AIP Endowed Professor in History of Natural Sciences at the University of Maryland, College Park, in the USA. Her academic work focuses on the history of science, particularly in the 19th and 20th centuries, and her first book was about the history of the journal Nature.

She's currently writing a book on the origins and rise of peer review, so it's fantastic to have her with us on this episode. Hi, Melinda.

[00:04:25] Melinda: Hi. Thank you so much for having me.

[00:04:27] Angela: And I'm delighted to welcome our second guest, Dr. Serge Horbach. Serge is an associate professor at the Institute for Science and Society at Radboud University in the Netherlands.

[00:04:37] Angela: His academic focus is the intersection of science, ethics, and communication. He's a member of the European Association of Science Editors' Committee on Peer Review and is currently investigating the evolution of peer review in response to changing technology and culture. Serge, welcome to Communicable.

[00:04:55] Serge: Hi. Thanks for that introduction, Angela.

[00:04:57] Serge: Very happy to be here.

[00:04:59] Annie: So, as our listeners know, we usually start these episodes with a get-to-know-you question for the guests. As our guests this week are both from the world of academia rather than from clinical ID or microbiology, my question this week is: what do you wish people outside of your field knew about your field?

[00:05:18] Annie: So, Melinda, any thoughts?

[00:05:21] Melinda: The main thing that I would love for people to know about history is that it's less about the what than the why. I think that people have an idea that history is about memorizing dates and reciting facts, that a historian should be able to come up with exact dates on the fly, things like that.

[00:05:38] Melinda: And it's very much more about thinking about why events unfolded the way they did: looking for the reasons why, for example, scientists started looking to peer review as a method of evaluating submissions to journals or evaluating grant applications.

[00:05:54] Angela: Nice, I love that. Serge?

[00:05:57] Serge: Yeah, so about the social studies of science, or particularly the social studies of scholarly communication: I think people would be interested, or maybe should be interested, to be aware of the diversity of the sciences, the fact that science is so broad and so different, and that the ways in which we do peer review and publishing are so widely different.

That also comes with very different expectations that people have of scholarly communication and the peer review system. For sure, the system is there to weed out bad science, or to select the good research, depending on how you look at it. But it also plays a major role in developing research communities, setting academic norms, setting standards, maintaining these communities.

So it's a crucially social endeavor, and I think we'll get back to that a lot during today's discussion.

[00:06:56] Annie: So now we know a little bit more about our guests, and we've had a bit of a preview of some of their thoughts on peer review. Anyone who's been involved in research or academic publishing will know how much of the publishing process hinges on peer review and on having our fellow colleagues dissect and critique and hopefully improve our work.

[00:07:14] Annie: It's become one of the hallmarks of quality in scientific publishing. And usually these people are anonymous: we don't know who they are, and we never get to acknowledge their contribution. Finding suitable reviewers is undoubtedly the hardest part of being an associate editor at CMI Comms, because I'm asking people to give up their time for free, for the greater good.

[00:07:36] Annie: And that's really tricky. So why do we do things this way? Melinda, can you tell us a little bit about the origins of the current system of academic publishing? How and when did the journal as we know it first emerge?

[00:07:50] Melinda: So usually when we talk about scientific journals, we point to two publications as the first scientific journals. One is the Journal des sçavans in France, and one is the Philosophical Transactions of the Royal Society of London. And both of those were founded in 1665. However, those journals looked very different from the research publications we know today.

[00:08:13] Melinda: They're very eclectic. They collect things like book reviews, letters, personal recollections. They don't really look like the experimental types of journals that we know today. If you're looking for the emergence of the modern system of the research journal, you need to look to the 19th century, because the 19th century is really when the modern research journal starts to take shape.

[00:08:34] Melinda: And there's a wonderful book on this by my colleague Alex Csiszar called The Scientific Journal: Authorship and the Politics of Knowledge. But really it's the 19th century where we start to see men of science, which was the preferred English-language term at the time, really using the scientific article as the main means of communicating their scientific research.

[00:08:57] Melinda: It turns out that refereeing for journals is a lot younger and a lot more eclectic than I thought when I started studying scientific periodicals. In about the 1830s, a man named William Whewell at the Royal Society of London suggested that a way to generate papers for their new publication, the Proceedings of the Royal Society of London, would be to have fellows of the Royal Society write reports about papers that were going to be published in the Philosophical Transactions.

[00:09:27] Melinda: And the idea was that this was going to be a public discussion of the latest scientific findings that were being published in the Philosophical Transactions. That doesn't last very long; I think they publish maybe two or three reviews along this line. What happened was that the Royal Society found those reports far more useful for their own internal and private use.

[00:09:50] Melinda: And so that, I think, is the origin of the modern refereeing system as we know it. There are some earlier examples of things like review prior to book publication at the Académie des sciences in Paris. But really, for the modern system of anonymous refereeing for the purposes of assessing whether an article is worthy of publication,

[00:10:10] Melinda: it's the 1830s where that starts to emerge. However, it remains really uncommon for a long time. It's really only these learned society journals that ever use that kind of system. For-profit publications like Nature, for example, really did not embrace refereeing until late in the 20th century, and that was the most surprising finding for me as a historian of peer review.

[00:10:33] Melinda: My father is a scientist. I majored in chemistry. I kind of grew up surrounded by peer review in a way, and I just sort of assumed that it had always been there. Finding out that there were really high-profile periodicals that were just not using referees at all in the 1960s was kind of shocking to me.

[00:10:51] Annie: I think that's quite shocking to me as well. We've always been taught that it's a pillar of science.

[00:10:57] Angela: What was the model before that then, Melinda? Was it just the editors of a journal doing their own internal review?

[00:11:03] Melinda: Yeah, there were a couple of different ways that it could go.

A lot of European journals had an editorial board that did all of the reviewing. So there was sort of a select group of experts who reviewed all of the articles. It wasn't anonymous peer review in the way that we would expect it today, because you knew who was on the editorial board, and you probably picked that journal based on who you thought was going to read your article.

[00:11:26] Melinda: There were other journals where there was a very powerful editor who more or less made all of the decisions himself. And I'm using the gendered term deliberately, because it would've been a man at this time.

[00:11:36] Angela: Maybe it wasn't even peer review, because you just had review.

[00:11:40] Angela: Right. These people on the board couldn't all be experts in the field of the article being submitted, I guess, right?

[00:11:49] Melinda: Yes, I think that is one reason that journals in the mid 20th century start to reach out to a broader referee system rather than relying on an editorial board: they're concerned about not having an expert in the right field.

[00:12:03] Melinda: At the Journal of the American Chemical Society, for example, that is a big reason that they start to use referees instead of their editorial board.

[00:12:11] Annie: So were scientific advances the driver for peer review being more widely adopted across all journals, then? Was it that science was moving so quickly, or were there other pressures going on there?

[00:12:22] Melinda: There were other pressures, and what I found studying peer review is that the big reason it starts to spread in the mid 20th century is not about scientific quality. It is not about any kind of belief that peer review makes a journal better. Instead, it's about workload for journal editors.

[00:12:38] Melinda: So, I was in the papers of Science, which transitioned from an editorial board system to a referee system in the mid 1950s. And they didn't do it because they were concerned about their lack of expertise. They didn't do it because they looked at other journals that were using referees and thought those journals were better.

[00:12:55] Melinda: They did it because the editorial board was sick of reviewing all these articles. The minutes said that the editorial board members did not find it pleasant or satisfying work to have to read and review all of these papers and suggest revisions on them. You can just sort of feel their exhaustion and exasperation in these minutes, which are usually kind of a dry read; they're just sort of fed up with it. Because something that's happening in the mid 20th century, particularly in the US, is that the size of scientific journals is just exploding.

[00:13:23] Melinda: There was so much more funding available from the US federal government. The size of the scientific community was growing rapidly, and journals across fields were just seeing submissions pile up on their desk faster than they could possibly review them. And so the referee system was seen by a lot of editors as a way out of this labor problem.

[00:13:45] Angela: Yeah, fascinating. Was anybody compensated financially, or was it just that these journals by then had such a good reputation that these external reviewers were paid in prestige?

[00:13:55] Melinda: So the external reviewers were paid in nothing.

[00:13:58] Melinda: And that was seen as a feature, not a bug, because the idea was that this was somebody who was totally disinterested in the outcome, that they were not supposed to gain anything: not recognition, not praise, not any kind of acknowledgement for their service. They were performing a service for science.

[00:14:17] Melinda: And so there was a very idealistic and communitarian motivation behind it. I think that the idea was that you were doing a service to science, and there was not supposed to be any expectation that you were gonna get rewarded for it.

[00:14:32] Annie: That's so fascinating, isn't it? The altruism. It's making me reflect: in medicine, there's so much that you're expected to be altruistic about when you go into medicine. Yes, you will always stay late.

[00:14:44] Annie: Yes, the patient always comes first. All of these things. And then on top of that, it's: oh, you also have to contribute to science in your spare time, give your time to science on the side as well.

[00:14:53] Melinda: I think that editors of journals like Science could kind of lean into the fact that learned society journals had been doing this for so long. Because the thing about being a reviewer for a learned society journal, something like an APS (American Physical Society) journal or one of the Royal Society of London journals, is that you see it as a service to an organization that you are a member of.

[00:15:15] Melinda: And you sort of have this deep investment in that organization looking good and producing a quality publication. And so I think that the journals that pick up refereeing a little bit later are able to draw from that ethos and that understanding of why you act as a reviewer for a scientific publication.

[00:15:36] Melinda: So they actually don't have to work very hard to spread the idea that reviewing papers is a service to science, because that's sort of already there; scientists are already kind of familiar with it. Not everyone, though. One of my favorite peer review stories is about Albert Einstein, who immigrated to the US to escape Nazi Germany.

[00:15:55] Melinda: He and his co-author sent a physics paper to Physical Review, the flagship journal of the American Physical Society. Their first publications to that journal just got accepted: no review, no comments, just thank you very much for your article, we will publish it swiftly. Then there was an article that they sent in arguing that gravitational waves didn't exist.

[00:16:15] Melinda: And that was pretty controversial at the time, so that article got sent out for referee reports. The editor of the journal sent those referee reports to Einstein, and he said: we sent you our article for publication, not evaluation. How dare you send our article to a competitor before it is published? Based on this incident, I will publish the paper elsewhere.

[00:16:35] Melinda: And he never sent a research article there again. So there were people who definitely were shocked and a little bit offended when they encountered refereeing for the first time.

[00:16:46] Angela: I would assume, though, those were exclusively people of very high standing at that point, right?

[00:16:51] Angela: I mean, he was well known by then.

[00:16:53] Melinda: He would've published most of his early-career articles in Germany, in German. And that is another thing that really surprised me about the history of peer review: it turns out to be a very anglophone story.

[00:17:05] Melinda: Learned societies in Britain are the earliest places where I've seen anything that I think can be definitively linked to modern refereeing. The US adopts it partly because of this wide expansion of scientific funding in the US that happens in the 20th century. But scientific societies and journals in Europe and Japan, Asia, elsewhere in the world, they stick with the editorial model for much longer.

[00:17:30] Melinda: So Albert Einstein was very used to a system where he sent his pieces to something like a journal edited by Max Planck, the great German physicist. And Planck famously was a very involved editor. He made the decisions himself, and, like, who was gonna argue that Max Planck couldn't evaluate the physics submissions that came his way?

[00:17:49] Melinda: Right? So that was the system that Einstein was used to. And so I actually have a lot of sympathy for this shock when it turns out that this weird American journal is sending your papers out to competitors instead of publishing them. I understand where the alarm comes from. So I don't think it was kind of a senior physicist temper tantrum.

[00:18:09] Melinda: I think it was just that that's not the system he was used to, because outside Britain and the US, nobody was doing it.

[00:18:16] Angela: That's a good point. And I guess I'm being too hard on Albert Einstein. There is the whole aspect of confidentiality: this is my work, and those reviewers might be my competitors.

[00:18:27] Angela: So it's interesting, yeah, how people were dealing with that new paradigm.

[00:18:30] Angela: So there is a perception that peer review is a hallmark of good science nowadays, but how good is it really? What do you all think? How does it perform? Is there evidence on this? Melinda? Serge?

[00:18:44] Melinda: Yeah. So this is a really hard question to answer, and I think what it boils down to is: what do we expect peer review to do? One of the conclusions I've reached after researching the history of peer review and how it developed is that the things we expect peer review to do are not the things that it was originally designed to do.

[00:19:04] Melinda: So the earliest refereeing systems are about things like community and respectability at a particular learned society. It's very much not a system that was designed to do things like detect scientific fraud. But we've assigned peer review all of these functions, and many of them it just does not perform well.

[00:19:24] Melinda: So, for example, I feel comfortable saying that peer review is a poor detector of scientific fraud. There haven't been many studies on this that I've found, but the available evidence that I have found at places like the NIH indicates that when we find out about scientific fraud, it's usually after publication, because someone tries to replicate the experiment or closely scrutinizes the data, or it's a whistleblower in the lab.

[00:19:45] Melinda: I found a study of cases of scientific fraud reported to the NIH in the 1980s, and very, very few of them started with a referee noticing something amiss. It was much more likely to be somebody in the laboratory blowing the whistle on what they thought were unethical practices.

[00:20:02] Melinda: And yet, every time a case of high-profile scientific fraud hits the news, people wonder why the peer reviewers didn't detect it. I think that gap between expectations of peer review and what it was actually designed for is contributing to some of the perception that peer review is failing us in some way, or that peer review is broken.

[00:20:22] Angela: Interesting. Yeah, I mean, I would say it's a big ask to ask one peer, who's busy, already has a full-time job, and is doing this for you essentially for free, to scope the entire literature and make sure that these submitting authors aren't duplicating work or committing fraud, basically.

[00:20:40] Angela: I'm not surprised at all that it's not peer reviewers who are detecting this.

[00:20:45] Annie: I also think if you're willing to commit scientific fraud, or, you know, academic fraud, then you're probably gonna be willing to cover it up quite well when you're submitting your paper, so that a peer reviewer can't detect that something's amiss or something doesn't quite feel right.

[00:21:00] Annie: It's only going to get worse with AI, though, isn't it? We're gonna come onto AI in a minute, but this just makes me think: if we can't even do it at the basic level, if people cannot detect errors in papers, can we not do it when we've got computers behind us?

[00:21:14] Angela: I will tell you, when I was an editor at CMI, Clinical Microbiology and Infection, I sent a paper out to some different reviewers, and one of them came back with a very angry review. And in fact the entire article, it was an original article, an original study, the entire study had been copied from her own work.

[00:21:35] Angela: And so she spotted it immediately: it was given to her, dropped in her lap, her own work. I mean, she was furious. And yeah, that was the one time we picked up fraud very quickly. But it really has to fall right into your lap like that. I think there's no way you're gonna have time to check the vast literature that we have today.

[00:21:53] Angela: Serge, do you have any comments?

[00:21:55] Serge: I completely echo Melinda here. It really depends on what we are expecting, especially from the narrow thing that is peer review. To detect fraudulent or erroneous, or in any case problematic, research is probably a lot to expect from individual people.

Maybe we should distinguish here a bit between the peer review process and the editorial process, of which peer review is obviously only a small part, especially at the larger publishers and bigger journals. The editorial process will be fairly long, almost like a production chain, in which many people are involved these days.

[00:22:36] Serge: A lot of technologies are involved. Peer review can only do so much in terms of detecting problematic papers, but maybe the process as a whole, the editorial process, we can expect a bit more from. So expecting individual reviewers to detect duplicate publications and the like, well, apart from cases such as you just described, is maybe not very realistic.

But nowadays, with plagiarism detection software, we can expect the entire system to do fairly well in that regard, and I think that works for more issues. So peer review as a gold standard? Not sure. But the process as a whole, well, that gets quite a bit closer, I would say.

[00:23:25] Angela: That's a really good point.

[00:23:27] Angela: I mean, we do need to understand what is editorial review and what is peer review, and what should be editors' work and what should be peer reviewers' work. In my experience as an author and as a peer reviewer, I find a lot of editors are very hands-off, and they're basically delegating a lot of the work to the peer reviewers.

[00:23:48] Angela: You know, it's unclear how well that's gonna go, because there are some peer reviewers who are extremely minimalist, who just write a paragraph here and there. And then there are some peer reviewers who are extremely thorough: they go systematically through the paper, and they'll find the little typo in the 14th table in the supplement.

[00:24:05] Angela: But I think it's a good point. Maybe we should be questioning all review, not just peer review; or if we're gonna overhaul one, we need to overhaul the other as well.

Perhaps some of it is about redefining what peer review is, rather than saying peer review is broken.

[00:24:20] Annie: Maybe peer review isn't broken. Maybe, as Melinda said, we are just expecting it to do something that it was never designed to do, but it still has value. I mean, I believe it still has value. What's really interesting to me, and Serge, you might have some thoughts on this, is how the public perceive peer review: that public perception of how important and how crucial this is, and of why peer reviewers don't pick up these errors.

[00:24:43] Annie: Have you got any thoughts on the public perception of peer review?

[00:24:46] Serge: Yeah, that is a very interesting point you made there. In a recent study we did on public trust in science, we gathered members of the lay public to talk about what might or might not increase or decrease their trust in the scientific system or record.

We found that people are actually, quite surprisingly, well aware of these kinds of quality control mechanisms in science: that science functions through processes where people check each other's work, and that you can't just, by yourself, find something and throw it into the record.

So before stuff gets to the status of verified fact, there are all these control mechanisms involved. People are somewhat aware of that, and that is then indeed also one of the crucial aspects of why they put trust in science, why they consider science or the scientific record to be trustworthy.

So I think, in that respect, peer review plays a very important function, also for non-academic audiences.

[00:25:55] Angela: So it's interesting. Peer review has a better reputation outside the field of science than inside the field of science.

[00:26:03] Serge: It seems like that, yes.

[00:26:05] Angela: Interesting.

[00:26:06] Annie: I think that's a really important message at this point in science, isn't it? If we can do anything to maintain trust with the public around their perception of science and data, then perhaps peer review forms a really important part of that, and maybe that's something we should really be advertising as a good thing about science.

[00:26:28] Annie: If that's something through which we can gain public trust, particularly in medicine at the moment.

[00:26:33] Angela: I like the idea of being reviewed externally by peers who know my field, but who are not judge and jury for a particular journal or a particular society.

[00:26:46] Angela: I think there is something healthy about that idea of decentralizing review of the quality of my work. The problem for us today, though, of course, with this explosion of science and of papers produced, is that finding suitable and engaged peer reviewers is one of the major challenges we have as editors.

[00:27:07] Angela: And of course as authors, right? We are slow sometimes with our authors because we cannot find reviewers. So how do you think we can improve the experience for peer reviewers? What incentives might work? And this question is for Melinda and Serge, both of you guys.

[00:27:21] Melinda: One of the examples I always point to is the field of economics, where it's actually quite common to pay peer reviewers. My partner is an economist, and so I've gotten to know the economics reviewing system sort of by watching him work. And there are fields where peer reviewers are paid.

[00:27:39] Melinda: I would be really interested to see data on turnaround times, and whether we have any solid evidence that this improves things like acceptance of refereeing opportunities and submission of referee reports in the requested timeframe. Because I think the problem that so many journals face today is that academics are overwhelmed with work.

[00:28:03] Melinda: Particularly, well, maybe not particularly in the US; I feel like this is a worldwide problem. Tenure-track jobs are becoming harder and harder to find. There's been a lot more casualization of academia, bringing in adjunct faculty to teach courses. And that means that faculty members are being asked to sit on more committees.

[00:28:21] Melinda: They are supervising more students; they are doing more work for their universities. And it's really hard to carve out time to work on a referee report, particularly when you know that your own promotion is going to depend on finishing your research. So my big solution to the problem is to have more tenure-track jobs and more support for the scientists whose labor props up the system.

[00:28:46] Melinda: But that's very pie in the sky. One of the things we have to remember is that a lot of academic systems, including peer review, were designed around the assumption that an academic was a particular kind of person, usually a man with a stay-at-home spouse and very few responsibilities outside their job.

[00:29:04] Melinda: That's not the case anymore, and I think a lot of our publishing systems haven't caught up yet to that new reality.

Angela: There are some medical journals that pay for peer review. I think The Lancet offers 150 pounds. I did once say okay, and they send you all these forms when you review for them, to get that 150 pounds. I opened up those forms: it was an Excel file with something like 130 rows to fill out. It was insane. My time is more precious than that lovely sum, and, you know, I'm lucky.

[00:29:36] Angela: I live in a country where I can do without that sum; time is more precious to me than money. I would say that thus far, from what I've seen of the model of paying reviewers, in our field at least, I'm not sure it's really working. But I don't know if any studies, even observational studies, have been published on how that's actually working. I know for me it doesn't work. It is absolutely not a motivator; you'd have to pay thousands.

[00:30:04] Melinda: My husband says the same thing: the money is nice, but he still only says yes when he feels like he really has something valuable to contribute to reviewing a particular paper.

[00:30:13] Melinda: It's never the reason he says yes. It's just sort of a nice little cherry on top of the sundae.

[00:30:19] Angela: Yeah. What we lack is time. That is my currency, the one I don't have any of, you know. Serge, what do you think?

[00:30:26] Serge: Yeah, I think it's very interesting that you bring up monetary rewards as one of the incentives for more peer reviewers or faster peer review.

I've always been skeptical of that, for some of the reasons that you already discussed, and then also: what other incentives does it give to people, and in what ways might that distort the process? So I'm reluctant to introduce these kinds of rewards, especially in places in the world where this sum of money might actually mean much more than it means to some of us here.

[00:30:58] Serge: Melinda mentioned that in the end, people only say yes if they have something meaningful to contribute, if a paper seems interesting to read, if you think: this is a manuscript that I'm interested in reading, and I have something very meaningful to say about this.

[00:31:18] Serge: Probably indeed, maybe more so than other people. And then we come back to the older idea that Melinda also discussed, of a journal playing a major function in communities, in scientific communities, and of you performing a service to that community. Some scholars have described this as a gift economy, where it's not about monetary rewards, but it's still about give and take:

you give something to the community in the sense of your attention, your time.

[00:31:57] Angela: What I was always told was that, look, for every paper you publish in a year, you've gotta do two peer reviews. You know, you have to give back.

[00:32:00] Annie: I was told one in, one out, Angela. I didn't realize it was one in, two out. Oh yeah, I'm stepping it up.

[00:32:05] Speaker 4: I guess the logic behind the two-to-one is that more will be reviewed than ever will be published.

[00:32:10] Angela: Right. So I do think a lot of peer review still exists on that: this sense of duty, this sense that someone's gonna review my submission later this year, so I've gotta pay it back.

[00:32:22] Angela: And I think it's been standing on that for decades now. And unfortunately, as Melinda says, we have this explosion of productivity, which is wonderful, that is a fortunate thing, but I think very few people have that much time to keep it up.

Annie: We talked about this actually, didn't we, at the editors' meeting: distributed peer review.

You know, once a journal gets big enough and you have enough variety of submissions across different topics within your field, then when authors submit, it's almost part of the agreement: okay, well, I've submitted to this journal, I'll now review one, or Angela would probably say two, manuscripts within the next six months that are in a related field that I'm able to do.

[00:33:02] Annie: So it's almost kind of forcing that community to happen, rather than, you know: if you want to publish, then you have to peer review. I don't know how I feel about it.

[00:33:11] Angela: Disclaimer: we had our editors' meeting a couple weeks ago in Rome, and full disclosure, we discussed how we can help our peer reviewer community to want to review for us, because it is always a struggle, and we don't wanna be slow for our authors. And that is one of the ideas we discussed. I think it's a really nice idea, but it's difficult; it's voluntary. But we do kind of hope that, okay, well, you know, our authors, maybe they'll wanna turn around and evaluate something in the very same subject, because people tend to be passionate about what they're publishing on.

[00:33:45] Angela: But yeah, I don't know if journals are doing this yet. Melinda and Serge, do you know if they're modeling this kind of: you submit to us as an author, but we want you to peer review a couple of times in the next year?

Don't worry, any CMI Comms authors: we're not actually going to force anybody to peer review.

Serge: No, I'm unaware of any journals doing this; the idea has been floating around. I do know of some funding agencies using this model, though. There have been several experiments lately; for instance, the Volkswagen Foundation, a German funding body, and some others have experimented with exactly this approach:

for one funding call, asking all applicants to also act as reviewers of other applications within that program.

[00:34:37] Angela: Do they only have the winners of those grants review?

[00:34:41] Serge: No, they would review within the same funding call. So it would be people reviewing each other in the same funding call.

[00:34:50] Angela: A conflict of interest?

[00:34:52] Serge: Well, there are some mechanisms in place to avoid strategic behavior by these reviewers. So they've thought of that: they would have quite a lot of reviews of each application, and then they would remove the highest and the lowest scores, et cetera.

[00:35:11] Serge: And yeah, they were fairly optimistic about the way in which that worked.

[00:35:17] Angela: Huh, interesting. It's not a model a journal could really adapt, though. Not literally, anyway.

[00:35:23] Serge: Well, maybe for special issues, or for a collection of articles in a specific call, where you would expect people to be in the same subject areas,

[00:35:34] Serge: or to be knowledgeable about the topic. Because I think especially in the larger journals, some of these mega journals that we have these days that publish work in basically any field, or at least a very wide variety, this whole idea of community service is eroding. What community is that still? I can hardly think of anyone who would feel part of the community of a mega journal. So that comes with very different kinds of considerations and challenges for these journals as well.

[00:36:10] Angela: Are you talking about the MDPI mega journals? Have you heard of this? This is very much a medical publishing thing.

[00:36:17] Serge: There are publishers in some areas with questionable reputations, yeah. But also journals like Nature Communications or PLOS ONE, which are...

[00:36:30] Angela: ...becoming huge, huge operations. Yeah.

[00:36:33] Serge: Yes, and not tied to a specific field or a scientific community.

[00:36:38] Angela: People do ask me: well, how do I know which journal to submit to? Because now there are so many predatory journals. The safest answer, I think, remains: go to a journal that belongs to a medical society, a bona fide, vetted medical society that's a not-for-profit.

[00:36:56] Annie: Yeah, consider CMI Communications.

[00:36:58] Annie: How about that?

[00:36:58] Angela: Yeah. And, shameless plug, we are a society journal. Isn't that funny?

Annie: Can we talk about open peer review? 'Cause I had a few questions about that. I thought it was really interesting, Melinda, when you were saying the very first origins seemed to be open peer review, whereas peer review as I know it is either single- or double-blinded.

[00:37:19] Annie: So the reviewers are anonymous, and sometimes the authors are anonymous to the reviewer as well. And I think in medicine that's still the default. I'm not sure how much open peer review is being adopted outside of medicine, but there are some notable exceptions within medicine: The British Medical Journal has used open peer review for a long time, with reviewers' names and their reviews published alongside the article. And there's lots of debate about whether this is a good thing or a bad thing. So, yeah, I'd love to hear your thoughts on that.

Melinda: I can certainly offer some thoughts, although I think Serge will have a lot more to say on this. One of the things that I've noticed with the push for open peer review is that sometimes it goes hand in hand with a push for counting post-publication peer review the same way that pre-publication peer review is counted, and

[00:38:10] Melinda: that always bothers me a little bit, because I think it misses something really important about what peer review does for the scientific community. And it goes back to what Serge said earlier: it's about creating public trust. The idea that an article has been reviewed by experts before it's allowed to be published, that is something that the broader public is aware of.

[00:38:30] Melinda: It is a reason that they feel that they can trust scientific findings.

[00:38:34] Melinda: If we suddenly tell the public that actually scientists can publish their articles whenever, and we'll just comment on them afterwards, that is a major change to how they understand whether or not they can trust the claims in a scientific article. And I think that it's unrealistic and potentially harmful to think that we can make that change with just a snap of the fingers and no follow-up questions.

[00:38:56] Melinda: So I'm a little skeptical about post-publication review. I think open peer review has a lot of promise as a model, because it does confer some professional rewards on the reviewer. You can point to a report that you wrote that you were particularly proud of, or that you thought was especially thoughtful, and say: look, here's the contribution that I have made.

[00:39:15] Melinda: And I think it also decreases the possibility of nasty or personal or perfunctory referee reports. But I'd be very interested, Serge, to hear what you have to say, because I think you've studied this in quite a lot more depth than I have.

[00:39:29] Serge: You know, we have been looking a bit into open peer review models. Perhaps the first thing we should say is that this term 'open peer review' covers quite a variety of different practices, actually, and when people use it, they sometimes refer to different things. So we tend to distinguish 'open identities' peer review as the process where you would disclose who the reviewer is.

And then 'open reports' peer review would be the process where you publish reports alongside published articles. Those can, but do not necessarily have to, go hand in hand; one can have the one but not the other. And then there's also something that people refer to as 'open participation' peer review, which is a process in which uninvited reviewers can contribute.

[00:40:20] Serge: So for instance, through post-publication peer review practices, where an article is just out there and people can go in and comment on what they feel they have expertise about, where they feel they can contribute something meaningful, without an editor specifically asking them to do so.

Angela: Is that post-publication, or could that be part of a pre-publication process for a journal? I think this might already be happening, right? Some journals post the article online on their own platform and then say: open invitation, people can come and review it, let us know what you think.

[00:40:57] Angela: And then later they commit to publishing it or not.

[00:41:01] Serge: There are some journals experimenting with all kinds of models and slight alternatives to this practice, indeed, yes. Well, for these different parts of open peer review and its different modes, there is now some evidence compiling on how effective they are. For a start, to answer your question about how many journals are actually doing this:

still relatively very few. There's growing momentum, but overall it's still a very small minority of journals doing this. And one of the reasons why some journals are somewhat reluctant to do so is that they fear it might be more difficult to find reviewers if those people would either have to sign their reviews or make their reviews openly available.

But the evidence we have so far does not confirm that; rather, it rejects it. People are not less likely to agree to a review invitation under either open identities or open reports models.

Angela: My impression, in a very anecdotal way, is that, first of all, I do see that our best reviews are often done by mid-career people, or even early-career people. Those are the peer reviewers who are really gonna take the time and really go into depth and really do excellent reviews.

[00:42:22] Angela: And my worry is that those are also the people who will not wanna give up their identity in, for example, small specialty fields like infectious diseases. I can tell you, in my country of Switzerland, which is a small country, every single infectious disease doctor knows everyone else, and to put your name on something that's ultimately somewhat critical,

[00:42:42] Angela: when you are still junior in a competitive environment, it's really asking a lot. And this is really my main obstacle with open reports peer review.

Serge: On the whole, it doesn't make people less likely to accept an invitation. But it might still change the reviewer pool, in the sense that it might be different people who go into it. Yeah, that is a concern.

We have been looking at open peer review models for some journals that give it as an option, so where, as a reviewer, you can decide whether you want your name to be openly available or not. And there we actually find that younger scholars are more likely to sign their reviews than older ones.

[00:43:27] Serge: We hypothesize that that is partly because of the sense of reward that you might get from doing reviews. You show that you have been actively contributing to this field. It's a way of showing that you are a member of that field, and of claiming your membership.

Angela: And do you know, those people who are more willing to share their names, are they in broader fields where they would have more job opportunities? I would love to see this; I would feel so much better about encouraging people to reveal their identities.

[00:44:02] Serge: I would have to look into that. But interesting question indeed, whether the size of the field matters for people's likelihood to engage in these open models.

[00:44:12] Angela: I would love for my entirely anecdotal experience to be wrong. I love that this new younger generation is willing to reveal their names. I think we should have more transparency; if we're politely critiquing, we shouldn't fear retribution or a lack of opportunities later.

[00:44:32] Serge: And this younger generation is of course also trained in an era where open science is a much more common thing than it was for the current senior researchers. So that might play into this as well.

[00:44:47] Annie: Yeah, I'm struggling to think of what potential negative impacts there are of publishing the review.

[00:44:54] Annie: So having the open reports, even if the reviewer doesn't want to be identified: their report is published alongside, or it's there on the website alongside, the manuscript that's finally accepted.

Well, surely that would raise the quality of what people review as well. Have you looked at that, Serge? Does it improve the quality of output from peer reviewers? If I knew it was gonna be accessible after the fact, published there even without my name on it, I think I'd be embarrassed if I'd done a really short review.

It would certainly get rid of the rudeness that people sometimes give, wouldn't it?

[00:45:27] Serge: Yeah. So this is from work not by myself but by others: the way in which people write these reports tends to be somewhat more constructive when the reports are being published.

Also, when you ask people whether they like open peer review and its different models, the vast majority tends to be very open to the idea of open reports peer review, and there seem to be not too many negatives there. One of the main benefits of publishing reports is that it allows the study of peer review.

[00:46:03] Serge: So for people like Melinda and me, this is great data that tended to be in the hands of just the journals and the publishers, and opening it up allows us to study the process much better.

[00:46:18] Angela: There doesn't seem to be a downside to publishing peer reviewer reports, for peer reviewers and for authors.

[00:46:25] Angela: However, I'm thinking editors might not love it, and there may be some resistance among journals themselves. Because once you open that up, once you post those reviews, then the editor has to actually account for his or her choices, right?

[00:46:44] Angela: You may have editors who know those authors, who maybe don't wanna take all the peer review criticism so seriously because the author has a certain amount of eminence, or there are some soft reasons for accepting the paper, or for rejecting it, whichever. I guess the idea is you don't post these reviews for rejected papers, right?

[00:47:05] Angela: So you already have a bias; you are only doing it for accepted ones. And then the journal would have to answer for its decisions downstream of peer review. I have to be honest, as an editor and now editor-in-chief, that does add some complexity to the process.

Annie: I was thinking about it from a reader perspective for the journal; I hadn't really thought about the editor's side. If I sent a paper out to three or four people and I got one really good one back, and then two really rubbish ones, I'd be like: oh man, I'm gonna have to find more reviewers. Not more reviewers!

[00:47:36] Annie: I can't publish these. Yeah.

[00:47:39] Angela: Well, that's the thing. I mean, what would the metric be, right? What if you have three reviewers who write frankly very cursory, superficial reviews and just say reject? Or, what we're seeing more and more, which is really awful: this could go to revision, but these authors need to include these four or five citations, and they're all from the reviewer.

[00:47:57] Angela: People kind of just gaming the system. And then you have one reviewer who says: okay, this needs to be done, X, Y, Z, but then this could be a good paper. And the editor may choose to go with that one reviewer, because they see that that one reviewer has gone the extra mile, seems to be less biased, seems to be doing a fairer review.

[00:48:17] Angela: But in the numbers, or whatever, you have three others saying: reject it, throw it away. We do sometimes make these decisions in a way that is not entirely metric; some of it is sort of qualitative review. If people come and knock on your door and say, well, this isn't reproducible, and how did you do this,

[00:48:34] Angela: and this isn't fair, that's exhausting. Like, already most of us are working not for any money. It would be very labor-intensive, I think, just for the journal and the editorial board. But for everyone else, I think it would be good.

Maybe this makes the argument for having a more structured approach for peer reviewers, helping our peer reviewers to review in a way that's easier for them, maybe, rather than just writing free text, and something that is more objective.

I'm sure there are journals that give a little bit more structure and guidance. I don't know, do you have anything to say about that? About how we solicit peer reviews in a constructive way?

[00:49:12] Serge: Yeah, there is actually this movement on structured peer review in journals. EASE, the European Association of Science Editors, provides a toolkit.

Conflict of interest: I'm with them. They suggest to journals what kinds of things you could ask your reviewers to look at, and how you could structure that process to guide them towards a constructive and helpful review. Then at the same time, you also see that reviewers like to have some flexibility. If you give them a whole list of mandatory questions that they should all answer, where some might not be very applicable to the manuscript under review, that might annoy people. So giving them some guidance is probably helpful and welcomed by reviewers.

[00:50:00] Angela: So speaking of all of this, what about turning it the other way around? What about the issue of peer reviewers being aware of authors' identities and origins?

[00:50:10] Angela: There is blinded peer review, where peer reviewers do not know who authored the submission. I haven't looked at the literature directly myself very well, but apparently blinding reviewers to author identity doesn't actually change the quality of peer review.

[00:50:27] Angela: Is that true? Or does it change peer review behavior in general? Because sometimes these authors are very eminent. Is it worth it to blind peer reviewers to authors?

[00:50:38] Melinda: So I think that it depends a little bit on the ethos of the field. The default in the humanities for a very long time has been double-anonymous review, where the author's name is not attached to the manuscript. I asked a colleague to read a chapter of my peer review book, and one of her comments on it was: well, when did the sciences eliminate putting author names on manuscripts?

[00:50:59] Melinda: They don't still do that, do they? She was shocked. And I think that most people in the humanities find it quite stunning that there are fields where this is not the norm, because it's seen as a bulwark against exactly the kind of elitism that you're describing. But I'm gonna bring my partner into this.

[00:51:16] Melinda: Again, he's an economist, and that's not the standard in his field. And when I asked him why, he said: well, in econ, you can just look up the title in the NBER working papers and find the author anyway, so why would you bother? And I was like: why would you do that? Why would you go and look up the title?

[00:51:31] Melinda: You know, why wouldn't you want to read the paper and set aside the author's identity? But the natural sciences historically were quite resistant to that, because for a long time the author's identity and reputation was considered a valid thing to judge when you're deciding whether or not to publish their papers.

[00:51:49] Melinda: So I interviewed Benjamin Lewin, the founding editor of Cell, and what he explained was that back in the days before you could have things like online appendices, a paper, especially in a journal like Cell, where you're supposed to be pretty succinct, would often refer to data that could not be published within the confines of the physical journal.

[00:52:08] Melinda: And so you were kind of taking it on faith that the author was accurately representing this data that they couldn't fit into the paper. And so natural scientists saw it as completely valid to consider whether or not they felt this person had the right training and a trustworthy reputation. Whether or not that's still a consideration we need to have in these days of online appendices, given the rising concerns about bias against authors from certain countries or certain institutions,

[00:52:39] Melinda: I think there's a real open question there. But I will say that we've been doing it in the humanities for a long time, and it seems to work pretty well. I do not generally Google the title of an article that I'm reading to see if I can, you know, stealthily figure out who the author is.

[00:52:53] Angela: It is something that I pushed for at CMI, and there was pushback saying: no, no, in science journals this doesn't seem to make the quality of peer review any better. But it's something that I think is worth looking at in the literature again. We can be a little different at CMI Comms.

[00:53:10] Angela: It just seems right, because we all do have these biases, right? We do. There's gender, there's the country of provenance of the manuscript. I mean, there are all kinds of biases that we probably do have.

Annie: Absolutely, Angela. And I think, you know, I will have lots of unconscious biases, but when I get sent a paper to review that's from quite an eminent author, or somebody that I really know or have really admired, it makes me work harder on the review.

[00:53:35] Annie: Like, it makes me go: I'm gonna really put the time and effort in. Which, really interestingly, is probably biasing me to do less good reviews and less due diligence, maybe, on the unknown author, or the one from a country where I don't know much about the epidemiology of whatever the topic is.

[00:53:50] Annie: So I think eminence might get more effort put into the peer review as well.

[00:53:55] Angela: Yeah, less critical appraisal, probably, right? Well, it could still be critical, because I'm anonymous, that's true. But you've been primed by this person's eminence and their prior output; they're starting from a higher place.

[00:54:12] Annie: As Angela said, the most frequent reason given for declining to review a manuscript is just not having the time to do it. But there's the potential to use large language models (ChatGPT; others are available) in peer review. Could this actually improve efficiency and save time?

[00:54:30] Annie: There are clear concerns about bias and accountability. Serge, I'd love to hear your thoughts on how large language models could fit into a responsible peer review process.

[00:54:42] Serge: Yeah, well, clearly this is a very hot topic at the moment. We had Peer Review Week very recently, where this was the theme, and it was discussed for a whole week in all sorts of events.

In this case it's very helpful, again, to make that distinction between the peer review process and the editorial process, where for some parts of the editorial process, using large language models, generative AI, can be somewhat helpful: initial checks of whether a manuscript is broadly within the scope of a journal, or some checks on research-integrity-related issues such as text duplication, image manipulation, and the like.

That is where these tools can come in very handy. They can increase the likelihood that the system lives up to the expectation of detecting problematic research, and maybe just increase some efficiency, because humans wouldn't have to go so deeply into parts of the paper that they are probably not very good at checking in the first place.

But having these tools come in to make actual decisions, I would be reluctant to have them used there. That is what, in all kinds of other contexts, people have been calling critical AI, and that can easily get out of hand, because we don't understand very well how these systems work, and it will be somewhat difficult to monitor the effects.

So yes, as a support for some of the editorial checks and the like, I think they can be helpful. There remain, of course, confidentiality and privacy issues, so one would have to find ways of dealing with those, but that seems possible. Having them used as actual decision makers? Maybe not.

Melinda: So for me, there's a fundamental contradiction in the idea of having an LLM be a peer reviewer. An LLM is not a scientist. It cannot, by definition, be the peer of the authors of a scientific article. I do agree with Serge that there may be cases where generative AI could expedite certain parts of the editorial process.

[00:57:02] Melinda: I'm not saying we should shove it in a box and never look at it again. But I do think that as we look for ways to make the refereeing process go faster, we really ought to keep that core idea: the reason we do refereeing is not because we want to generate referee reports.

[00:57:20] Melinda: It's because we are committed to the idea that knowledge should be reviewed by an expert in the field before it deserves publication in a scientific journal. The reason we have this idea is that in the 1970s there was a push to get scientists essentially out of reviewing at US national funding bodies like the NSF and the NIH; the Nixon administration was pushing, for example, to turn over the reviewing of scientific grants entirely to bureaucrats whom Nixon himself would appoint.

[00:57:51] Melinda: And so scientists pushed back and said, no, we have this system that we are now calling peer review ('peer review' was a pretty new term at the time), and this is the right way for science to work: experts in the field deciding what's good and what's not.

[00:58:13] Melinda: And if you don't have that, you cannot trust the decisions that come out of a grant-funding process or a journal's publication process. So that's really at the core of both the scientific community's belief in peer review and why it works, and the broader public's understanding of what peer review is and why it works.

I would caution against too much techno-enthusiasm. I do not think we are going to code our way out of the labor problem that we have with peer review.

[00:58:40] Angela: That's kind of a disappointment.

[00:58:43] Annie: No, it is a relief to me. Angela knows I'm a real AI skeptic. It scares me. Like it actually scares me.

I just don't want to lose the humanity in it. I've said this before: I just hate losing that human element. I absolutely love writing something in a peer review like, "I really love that you included this. Thank you so much. It's made my day to read this paper." And when I get reviews back, I'm just like, oh, it means so much to me to know a human out there has read this, and they're in my field.

[00:59:11] Annie: They valued it. They took the time to make it better because they wanted to help me.

[00:59:17] Angela: Overall, I completely agree. However, I would just play devil's advocate on one point. The decision making by human beings is so arbitrary sometimes.

You all probably know of the study in Israel of judges who had to make yes-or-no decisions on prisoners' parole. If they were giving the decision just before a meal, when they were hungry, they were harsher.

[00:59:42] Angela: We are very human. We're subject to the vagaries of our physiology, and even beyond that, to being tired, or postprandial, or in a bad mood, or whatever.

[00:59:53] Melinda: Just on the question of bias in referee reports, what I would say is that any LLM we use to make decisions about scientific papers is going to have to get trained on something; it's going to have to get trained on existing referee reports. So it's going to eat all of the bias in all of the referee reports

[01:00:10] Melinda: we're feeding it, and that bias might come out in ways we don't expect and struggle to correct for. I think you could reply, well, we'll only train it on good referee reports. But who decides what a good referee report looks like? It's not as simple as, oh, the AI is never going to be hungry or grumpy, or to have just had a really hard drop-off with its kids at kindergarten, so it will give better and more objective decisions.

[01:00:35] Melinda: We have to train it on something, and what we're training it on is the work of a whole lot of humans. And that's leaving aside the copyright and intellectual property issues that, in my opinion, AI has not resolved.

[01:00:47] Angela: Fair enough. I fully agree, and I'm really mostly playing devil's advocate. Don't worry, anybody, I'm not actually gonna start putting AI on your papers.

A quick follow-up question to that. So, Serge, what was the conclusion, if any, at Peer Review Week about AI and peer review?

[01:01:05] Serge: Well, opinions on this are still very, very mixed.

[01:01:08] Serge: I don't think there was one specific way or a single path that things are developing into. Yes, sure, the efficiency gain is very attractive for some actors within this industry, but there are all kinds of concerns that are also very explicitly voiced.

[01:01:27] Angela: Interesting. Yeah, it's very early in the game, I guess. For the final question, may I ask each of you: what is the right model to use? If you were living in an ideal world and you could just change the culture, change the paradigm, disrupt,

[01:01:44] Angela: what would you put into place?

[01:01:45] Serge: Well, I'm personally really a fan of open peer review reports. As we discussed before, there are a lot of potential benefits to that, and I don't see too many drawbacks, though I understand the point of view of the editor on this.

[01:02:01] Serge: And then I think this negotiation of the role of the peers within the editorial process is really something to look at: what responsibilities do I really give the individual reviewer in terms of this wider process? For me, a process in which editors take up quite a bit of the role, so that we can have reviewers look at the specific parts of a paper where their expert knowledge really comes into play, I think that is a nice way of organizing the system.

[01:02:36] Angela: So maybe asking peer reviewers to do a little bit less in terms of scope; don't expect them to carry the entire manuscript, the quality of the whole manuscript.

[01:02:46] Serge: It would also allow us to have reviewers look at only specific parts of a manuscript, so that maybe not all reviewers look at every bit and piece of it.

Then you could have a specific reviewer looking only at the statistics of the paper, if that is relevant, without their having to have too much knowledge about the other parts of it. Yeah, slicing the task.

[01:03:11] Angela: Yeah. I think, Annie, we have to think about that. That means more work for editors, but you know what?

[01:03:16] Angela: We'd need to find more reviewers, more reviewers to do more parts. No, we're slicing.

[01:03:21] Angela: Melinda, how about you?

[01:03:22] Melinda: Oh gosh. If I could wave a magic wand and change something about the way modern refereeing works, what I would actually change is jobs: more jobs for scientists where they have the time and scope to do really thoughtful peer reviews without feeling like they are sacrificing their own professional advancement to do it.

Absent that: there was a paper I read from the 1980s, when there were controversies about why peer review wasn't detecting fraud in scientific articles. And the answer is, peer review has never detected fraud. But anyway, one of the papers I read suggested that, to avoid this kind of hypercompetitive publish-or-perish culture, top universities should only consider a scientist's best four papers when they go up for tenure or promotion.

[01:04:11] Melinda: And that never caught on. But it strikes me as a good idea: maybe one of the ways out of this deluge of scientific publishing and this struggle to find reviewers would be for scientists themselves to focus on a small number of really excellent papers. But again, I don't really see any way to push for that absent my hypothetical magic wand.

[01:04:33] Melinda: So that's probably a bit of a pipe dream, but I've always really liked that idea.

[01:04:39] Angela: I have heard of that one, and I think it's a brilliant idea. I do think it goes to the heart of the problem, which is that we're all being pushed to do quantity and not quality. But, like you say, the problem is that you would need a magic wand, because the power in that paradigm is not with us.

[01:04:56] Angela: It's entirely outside of the publishing apparatus. It's with the institutions, right, the academic institutions that we work for, and it's difficult to exert our will on them.

[01:05:08] Melinda: It really is. And I think that's one of the challenges with major peer review reform: the system is now so embedded in so many different aspects of academia that changing one aspect of peer review

[01:05:20] Melinda: usually sets off a whole line of dominoes affecting everything from the way we award funding, to the way we decide who gets promoted, to the way we decide who gets hired. It's just so deeply embedded now. That doesn't mean change is impossible, but it does mean it is far more challenging than simply saying, oh, well, you editors should do it differently.

[01:05:39] Annie: That sounds like the next podcast episode: "Academia is broken."

[01:05:43] Melinda: Love it. Oh boy, you'll get a lot of volunteers for guests on that one. I wouldn't host that one myself; I'd keep my head down.

[01:05:49] Angela: Is there anything you two would like our listeners to know about, or to look at, before we end this fascinating conversation?

[01:05:58] Melinda: There's another book I realized I would really like to recommend for people interested in the history of scientific journals.

[01:06:03] Melinda: A book that I have really relied on is A History of Scientific Journals, by Aileen Fyfe et al. It is open access from University College London Press, and it is a wonderful resource that looks at over 350 years of scientific publishing at the Royal Society. So for those interested in the history of journals, I really recommend going and downloading it for free.

[01:06:26] Melinda: You will learn everything you ever wanted to know about how scientific journals have developed over time.

[01:06:32] Annie: Awesome. I'm gonna add that to my reading list.

[01:06:35] Angela: Melinda, I have to ask you, though: you're writing a book, and I really want to get this in if it's okay. Do you have the title of your book?

[01:06:43] Melinda: Oh God, we just changed the title. So I believe the title right now is In Referees We Trust? (question mark): How Peer Review Became a Mark of Scientific Legitimacy, and it's going to be published open access by MIT Press. I don't have a publication date yet, but publication is getting close, knock on wood.

[01:07:01] Melinda: So that'll be free to download for anyone, anywhere. And thank you to the Sloan Foundation, which is funding the open access publication.

[01:07:09] Annie: Thank you so much to our guests today, Professor Melinda Baldwin from the University of Maryland in the US and Dr. Serge Horbach from Radboud University in the Netherlands. Thank you for listening to Communicable, the CMI Comms podcast.

[01:07:23] Annie: This episode was hosted by Angela Huttner and me, Annie Joseph, editors at CMI Comms, ESCMID's open access journal. It was edited by Dr. Katie Hostettler and peer reviewed (yes, we peer review our episodes) by Dr. Barbora Pisova from Czechia. Theme music was composed and conducted by Joseph McDade. This episode will be citable with a written summary referenced by a DOI within the next eight weeks, and any literature or books we've discussed today can be found in the show notes.

[01:07:52] Annie: You can subscribe to Communicable on Spotify, Apple, wherever you get your podcasts, or find it on ESCMID's website for the CMI Comms journal. Thank you for listening and helping CMI Comms and ESCMID move the conversation in infectious diseases, clinical microbiology, and, this time, academic publishing further along.
