I should have paid more attention in medical school.
If I had, I might have remembered enough about basic pathophysiology to know why everyone was suddenly pulling their patients off of lisinopril.
For those of you who need a quick primer: When the pressure in the renal artery drops, the kidney secretes renin. Renin kicks off a cascade in which a second enzyme, angiotensin-converting enzyme (ACE), produces angiotensin II, which gives your blood pressure a boost. A related enzyme, angiotensin-converting enzyme 2 (ACE2), keeps the system in check by breaking angiotensin II down.
ACE2 is also the traitor that holds the door open for SARS-CoV-2. By binding ACE2, SARS-CoV-2 gains entry to cells in the lungs and gut, leading to the syndrome we recognize as COVID-19.1
If ACE2 is a traitor, then many of us may have been complicit in its crimes. Some antihypertensives, such as lisinopril, are believed to increase production of ACE2, potentially making it easier for SARS-CoV-2 to breach the barricades.
There’s no proof of this, of course, but the theory was alluring enough that it led to a global back and forth among multiple high-powered institutions regarding which would be the greater sin: taking patients off proven therapies or ignoring a plausible hypothesis until it was too late.2
On May 1, 2020, we had an answer. Mandeep R. Mehra, MD, PhD, medical director of the Heart and Vascular Center at Brigham and Women’s Hospital, Boston, published an analysis in The New England Journal of Medicine of 8,910 patients admitted to 169 hospitals on three continents with COVID-19 between December 2019 and March 2020.3
Mehra et al. identified multiple risk factors for COVID-19-associated mortality, including older age, cardiac disease, chronic obstructive pulmonary disease and smoking. Unlike other groups, however, the authors did not identify hypertension as a significant risk factor. Even more reassuringly, they concluded there was no evidence linking the use of ACE inhibitors or angiotensin II receptor blockers with an increased risk of mortality.
One month later, 174 investigators from every continent (except Antarctica) sent an open letter to The New England Journal of Medicine titled “Expression of Concern Regarding Data Integrity and Results.” In it, they noted:4
“It is difficult to reconcile the UK data … with publicly available government data. Mehra et al. report electronic patient data from 706 patients hospitalized with PCR confirmed COVID-19 in just 7 of the UK’s 1,257 NHS hospitals. A high proportion of patients hospitalized in the UK on March 15th were in London, and yet no London borough, let alone hospital, had more than 100 PCR positive confirmed cases by this date. The numbers from Turkey also appear incorrect.”
On June 18, 2020, Eric J. Rubin, MD, PhD, editor in chief of The New England Journal of Medicine, published an “Expression of Concern,” noting he had “asked the authors to provide evidence that the data are reliable.”5
On June 25, 2020, the paper was retracted.6
Whoa.
As the son of physicians, I grew up in a house where The New England Journal of Medicine was sacrosanct. I learned to revere the red-on-white seal long before I knew what it meant. A retraction by The New England Journal of Medicine, the standard bearer for the medical publishing industry, was big news. It was like Moses retracting one of the Ten Commandments.
The retraction was only two sentences long, but its reverberations were felt throughout the scientific community. Another major paper, based on the same dataset, was simultaneously retracted by The Lancet.7 The New York Times declared, “The pandemic claims new victims: prestigious medical journals. Two major study retractions in one month have left researchers wondering if the peer review process is broken.”8 But what, precisely, is peer review, and where did it go wrong in this case?
Peer Review 101
The Rheumatologist uses peer review, after a fashion. We just use a very small group of peers. Articles are not typically sent out to an international panel of experts for their review and comment. The associate editors and I read every pitch, every article, every case report, and together, we decide on their merits. For us and for what we do, the system works.
On the other hand, you could easily imagine how the system could break down. I’m not an expert on everything. I wouldn’t be my first choice to review a manuscript on lupus nephritis or the use of CAR-T cells as a treatment for autoimmunity.
Such articles would, ideally, be refereed by other investigators who could say authoritatively whether the article contributed meaningfully to the literature. Someone who had the same level of expertise as the article’s authors. Someone who the authors would consider, you know, a peer.
For something considered so fundamental, peer review has a remarkably short history in academic publishing. The concept of peer review dates back to the 1600s, but it did not become commonplace until much later. Nature started using peer review in 1967. The Lancet did not follow suit until 1976.9
This shift was not due to a sudden change in publishing mores. Rather, the advent of peer review owes most to a single event: In 1959, the Xerox company brought the first commercial photocopier to market. The Xerox 914 weighed 650 lbs. and was prone to spontaneous combustion.10 Despite these flaws, Xerox Corp. couldn’t make the machines fast enough; the company sold thousands of these imperfect machines and simultaneously struck a blow for the democratization of information. Before this technology became commonplace, making a copy of an article was laborious, expensive or both. After the emergence of Xeroxing, anyone could get a copy of an article, with the push of a button.
Now, peer review is commonplace and typically takes place in two phases: When submitted to a journal, an article first goes through a desk evaluation: An editor reviews the article and quickly decides whether the article is worth further consideration. At this stage, articles may be rejected for a host of reasons, mainly technical; the subject of the article may not fall under the journal’s aims and scope, for example, or it may not be formatted correctly.
If an article survives desk evaluation, it moves on to a blind review, in which the manuscript is sent to referees who are experts on the subject matter. The referees make comments, and then make an explicit recommendation to accept or reject the manuscript, or for the authors to make revisions to the manuscript and resubmit for reconsideration.11
Peer Review, Reviewed
Winston Churchill said, “Democracy is the worst form of government, except for all those other forms that have been tried. …”12 In a nutshell, that summarizes how most of us feel about peer review. We’re not happy with it, but we would be less happy with the alternatives.
Richard Smith, the former editor of the British Medical Journal, identified the following flaws with peer review:13
- Lack of agreement among referees: Several studies indicate peer reviewers often disagree on the value of an article;
- Identification of peers: Some people are truly peerless; it can be challenging to identify a referee who truly has expertise in all of the areas represented by the authors;
- Introduction of delays: The peer review process introduces a bottleneck, which delays publication;
- Bias: Referees may be unfairly biased toward or against specific investigators or institutions. In one famous example, Peters and Ceci took 12 articles published by researchers from prominent institutions, changed the authors’ names and institutions, and resubmitted the articles to the journals that had originally published them. Only three were recognized as resubmissions; of the nine that went through review again, eight were rejected, most for serious methodological flaws.14
- Bias against innovation: Truly novel hypotheses often fare poorly in the peer review process, which is more likely to reward incremental advances that reflect current models.
How can we improve the peer review process? One possibility is to consider how the process is blinded. Most peer review is single-blind, meaning the referees know the authors’ identities, but the authors never know who reviewed their work. Double-blind reviews, in which the authors’ identities are hidden from the reviewers, may prevent the referee from being dazzled by an author’s impressive credentials.
Although this seems like a sensible innovation, in practice, it may not improve the quality of peer review. Amy Justice, MD, PhD, et al. conducted a study in which articles submitted to Annals of Emergency Medicine, Annals of Internal Medicine, JAMA, Obstetrics & Gynecology, and Ophthalmology underwent both single-blind and double-blind review. Each article was assigned to two reviewers; one of the two reviewers was not allowed to know the identity of the authors.
Justice et al. found that double-blinding made no difference.15 Of the 118 articles reviewed, double-blinding had no discernible impact on the assessment of the article. The blinding also didn’t work: For one-third of the articles, the reviewers were able to correctly guess the identity of the author; this rate increased when an author was well known and would presumably increase further when the subject matter was especially abstruse.
Open peer review is another strategy that has been adopted by several journals over the past few years. Central to this strategy is the use of open reports, in which the referees’ comments are published along with the article, like a Letter to the Editor.
The open peer review strategy may potentially benefit the reader, the reviewer and the reviewed. By seeing how the criticism influenced the final product, the reader gains a deeper understanding of the science behind the paper. Knowing the review will be made public may encourage reviewers to write better reviews. Finally, signed reviews would be published and citable, which could serve as added incentive to serve as a peer reviewer.
That last point is key: Every year, investigators volunteer 68.5 million hours of their time working on peer review for journals. At the end of the day, peer review is fueled by altruism, and the truth is that the system is running out of gas. The Global State of Peer Review report, which surveyed more than 11,000 investigators worldwide, identified reviewer fatigue as a growing problem in the peer review process. In 2013, 1.9 invitations had to be issued to identify one peer reviewer; in 2017, that number increased to 2.4.16
Little wonder. I know that every article I review takes time away from signing notes, putting together research proposals, teaching, and other activities for which I am paid a salary. One of the great ironies of the peer review system is that the investigators who are most qualified to function as a peer are often the least likely to participate as a peer reviewer, because of other constraints on their time.
Moreover, you get what you pay for. It is difficult for most of us to prioritize a service we are providing gratis. Paying reviewers for their time might go a long way toward improving the quality of peer reviews.
Post-Publication Peer Review
It’s not clear to me that any of these issues allowed the Mehra et al. article to slip through the system. The New England Journal of Medicine has no problem identifying qualified peer reviewers. Additionally, it has a large internal staff that vets each article, so conventional peer review serves as only one part of the overall review process.
The hard truth is that peer review was not designed to detect fraud. When I provide peer review for an article, I am not typically repeating the author’s calculations to see if they came up with the right P value. I am fully prepared to give the authors the benefit of the doubt and assume all of their tables and graphs are supported by actual data.
In the case of the Mehra et al. article, it’s almost a fluke the fraud was detected. The dataset was supplied by Surgisphere, founded by vascular surgeon Sapan Desai. This tiny company no one had ever heard of claimed it had a database of records from approximately 700 hospitals on six continents.17 The same dataset was used in a study published in The Lancet on the impact of hydroxychloroquine on COVID-19 outcomes.18 The controversial nature of the topic led to greater-than-normal levels of scrutiny of the paper, which led to the identification of the anomalies in the dataset, which led to the open letter issued to The New England Journal of Medicine. If the Lancet article had not been published, it seems quite possible the Mehra et al. article would never have been challenged.
These events highlight the importance of post-publication peer review. Our current system of peer review was birthed by the photocopier. The advent of online publication, however, means we are no longer constrained by the need to send physical copies to individual reviewers. The open letter issued to The New England Journal of Medicine is a particularly dramatic example of post-publication peer review, but a similar process takes place every day on online platforms, such as Twitter, on which articles are critiqued almost as fast as they can be published. Learning how to harness this process and make it a formal part of article review is the next challenge for academic publishing.
Peer review may not be broken, but by broadening our definitions of peer and review, we can make it work even better.
Philip Seo, MD, MHS, is an associate professor of medicine at the Johns Hopkins University School of Medicine, Baltimore. He is director of both the Johns Hopkins Vasculitis Center and the Johns Hopkins Rheumatology Fellowship Program.
References
- Wiersinga WJ, Rhodes A, Cheng AC et al. Pathophysiology, transmission, diagnosis, and treatment of Coronavirus Disease 2019 (COVID-19): A review. JAMA. 2020 Jul 10.
- COVID-19 and the use of angiotensin-converting enzyme inhibitors and receptor blockers: Scientific Brief. World Health Organization. 2020 May 7.
- Mehra MR, Desai SS, Kuy S, et al. Cardiovascular disease, drug therapy, and mortality in COVID-19. N Engl J Med. 2020 Jun 18;382:e102.
- Watson JA, Meral R, Price R, et al. An open letter to Mehra et al and The New England Journal of Medicine. Zenodo. 2020 Jun 2.
- Rubin EJ. Expression of concern: Mehra MR et al. Cardiovascular disease, drug therapy, and mortality in COVID-19. N Engl J Med. doi: 10.1056/NEJMoa2007621. N Engl J Med. 2020 Jun 18;382:2464.
- Mehra MR, Desai SS, Kuy S, et al. Retraction: Cardiovascular disease, drug therapy, and mortality in COVID-19. N Engl J Med. DOI: 10.1056/NEJMoa2007621. N Engl J Med. 2020;382:2582.
- Mehra MR, Ruschitzka F, Patel AN. Retraction—Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: A multinational registry analysis. Lancet. 2020 Jun 13;395(10240):1820.
- Rabin RC. The pandemic claims new victims: Prestigious medical journals. The New York Times. 2020 Jun 14.
- Shema H. The birth of modern peer review. Scientific American. 2014 Apr 19.
- O’Connell K. Happy birthday, copy machine! Happy birthday, copy machine! National Public Radio (KQED Morning Edition). 2013 Oct 23.
- Spicer A, Roulet T. What is peer review? EarthSky. 2018 May 17.
- Winston Churchill, speech, House of Commons, November 11, 1947. Winston S. Churchill: His Complete Speeches, 1897–1963, ed. Robert Rhodes James, vol. 7, p. 7566 (1974).
- Smith R. Problems with peer review and alternatives. Br Med J (Clin Res Ed). 1988 Mar 12;296(6624):774–777.
- Peters DP, Ceci SJ. Peer review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences. 1982 Jun;5(2):187–195.
- Justice AC, Cho MK, Winker MA, et al. Does masking author identity improve peer review quality? A randomized controlled trial. JAMA. 1998 Jul 15;280(3):240–242.
- Publons. 2018 Global State of Peer Review. Web of Science Group.
- Piller C. Who’s to blame? These three scientists are at the heart of the Surgisphere COVID-19 scandal. Science. 2020 Jun 8.
- Mehra MR, Desai SS, Ruschitzka F, et al. Retracted: Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: A multinational registry analysis. Lancet. 2020 May 22;S0140-6736(20):31180-6.