The underground doctors’ movement questioning the use of ventilators
by Dr. Matt Strauss  /  2 May 2020

In the 1780s, medical authorities largely agreed: insufflation of the rectum with tobacco smoke was the best treatment for near-drowning. Therefore, the Royal Humane Society lined the banks of the river Thames with tobacco smoke enema kits and rewarded heroic members of the public who used them to ‘save’ drowning victims.

It’s easy to laugh at their efforts. With our modern insistence on evidence-based medicine, we would never significantly invest in medical infrastructure that has not been proven beneficial by a randomised controlled trial. Except of course we have, and we continue to do so. No area of medicine is immune to these lapses. But it is my own specialty of life-support medicine where this has recently been thrown into the sharpest relief.

As I previously reported in the Spectator, there has never been a randomised controlled trial to show that sedating people with severe pneumonia so that a breathing tube can be passed down their throat (the process known as intubation) and they can be connected to a mechanical ventilator is lifesaving at any particular point in their illness. Neither has there been such a trial in chimps, dogs, sheep or rats. Yet it is a firmly entrenched belief that intubation and ventilation are necessary once a patient requires a high level of supplemental oxygen. Or it was.

While most Western governments were in a mad dash to manufacture ventilators for Covid-19 pneumonia in March, a burgeoning movement within the medical community was starting to question their use. This movement largely operated outside of the traditional networks of academic journals and conferences. Rather, it used Twitter, YouTube, and even podcasts. The first public statement from this clandestine movement seems to have come second-hand, relayed by the influential cardiologist and blogger Dr John Mandrola.

The first life-support specialist I can find who spoke publicly on this point was Dr Cameron Kyle-Sidell. He had only started independent practice in a makeshift Covid-19 unit in New York City one week beforehand. His utterances had an out-of-the-mouth-of-babes quality. Only someone so fresh would both notice and baldly declare the emperor’s lack of clothes: that ventilators were possibly doing more harm than good.

Patients left to breathe on their own with very low blood oxygen levels were not perishing as standard medical opinion would have predicted. Dr Kyle-Sidell then used YouTube to further voice his concerns. His first video has now been watched more than 700,000 times. This has to be a world first for one man’s rather esoteric rant about the physiology of mechanical ventilation.

Dr Kyle-Sidell was martyred for his efforts. A week after his viral YouTube rant, he told the popular emergency medicine podcast REBEL EM that he ‘had a moral issue with the protocols, which are the standard protocols across the country,’ and had been forced to ‘step down from my ICU position’ at Maimonides hospital in New York. The brightest stars shine but briefly. His two-week stint as an intensive care physician had more impact than my own seven-year career.

I first became aware of Dr Kyle-Sidell when a savvy reader linked to his YouTube video in the comment section below my ventilator piece. A friend who works in public health later texted it to me. Coincidentally, that same week, physicians in my ICU team started sharing and discussing his REBEL EM podcast in our WhatsApp group. Independently, others began discussing the same podcast in a WhatsApp group used by a large number of physicians in my city.

Another extremely influential blogger, Dr Scott Weingart, also based in New York State, posted a podcast on March 30 entitled ‘Stop Kneejerk Intubation’. He linked to Dr Kyle-Sidell’s viral video in his supplementary materials. All this appears to have had an impact. Two weeks ago, the New York State Governor’s office published data showing that their rate of intubation effectively fell off a cliff on April 4. This is peculiar as new cases of Covid did not peak in that state until April 14.

It seems as though similar practice changes can be observed worldwide. This week, I spoke with a critical care specialist, Dr Fredrik Halgren, at one of the main hospitals in Stockholm. ‘In the beginning, we were throwing each and every patient onto the ventilator,’ he says. As that practice has shifted, the ICU remains full because ‘we are still handling the cases that we admitted at the beginning of the surge.’ Patients who would have been intubated one month ago are now staying in the emergency room or the medical ward with high-flow supplemental oxygen.

Data from the UK is hard to come by. However, on 27 March, 78 per cent of Covid patients who had gone to NHS ICUs were intubated and ventilated within 24 hours of arriving. By 24 April, that number had gone down to 67 per cent. The cumulative mortality rate for NHS ICU Covid patients decreased from 52.1 per cent to 50.7 per cent over that same period. This is completely uncharted territory for modern medicine. I cannot think of a time when entrenched practice has changed in such a short period, on such a fundamental question as when to use life support, without authoritative academic papers being published on the subject.

While I am gratified to find that unevidenced dogma can be flexible in this manner, none of this allays my main concern. Neither the original practice, nor the shift, has been corroborated by randomised controlled trials. In the absence of experiment, my bias will remain towards a minimalist approach. But ultimately, I want my patients to get optimal care that has been scientifically validated. I also dearly hope that my colleagues and I will not be remembered like the 18th-century rescuers on the Thames – blowing smoke up arses at a time of grave crisis.

Some patients who appear not in distress have dangerously low oxygen levels
by Hannah Devlin / 3 May 2020

It is a mystery that has left doctors questioning the basic tenets of biology: Covid-19 patients who are talking and apparently not in distress, but who have oxygen levels low enough to typically cause unconsciousness or even death. The phenomenon, known by some as “happy hypoxia” (some prefer the term “silent”), is raising questions about exactly how the virus attacks the lungs and whether there could be more effective ways of treating such patients.

A healthy person would be expected to have an oxygen saturation of at least 95%. But doctors are reporting patients attending A&E with oxygen saturation levels in the 80s or 70s, with some drastic cases below 50%. “It’s intriguing to see so many people coming in, quite how hypoxic they are,” said Dr Jonathan Bannard-Smith, a consultant in critical care and anaesthesia at Manchester Royal Infirmary.

“We’re seeing oxygen saturations that are very low and they’re unaware of that. We wouldn’t usually see this phenomenon in influenza or community-acquired pneumonia. It’s very much more profound and an example of very abnormal physiology going on before our eyes.” Dr Mike Charlesworth, an anaesthetist at Wythenshawe hospital in Manchester, said that while other lung conditions could cause severe hypoxia, these patients would normally appear extremely ill.

“With pneumonia or a pulmonary embolism they wouldn’t be sat up in bed talking to you,” he said. “We just don’t understand it. We don’t know if it’s causing organ damage that we’re not able to detect. We don’t understand if the body’s compensating.” Charlesworth had a personal experience of the issue while suffering from Covid-19 in March.

After becoming unwell with a cough and fever, he spent 48 hours in bed, during which there were signs he was hypoxic, he said. “I was sending very strange messages on my phone. I was essentially delirious. Looking back I probably should’ve come into hospital. I’m pretty sure my oxygen levels were low. My wife commented that my lips were very dusky. But I was probably hypoxic and my brain probably wasn’t working very well.” He recovered after a few days in bed, but he and others are conscious that not all cases have positive outcomes.

An anaesthetist at a London hospital, who spoke anonymously, recalled one patient who attended A&E saying she felt cold. “When we put the stats probe on her, her saturation was 30% on air,” he said. “We obviously thought that was wrong, as usually patients are likely to have hypoxic cardiac arrests.”

But when a blood sample was taken, her blood was very dark and had oxygen levels equivalent to those seen in people acclimatised to high altitudes. The patient was placed on a ventilator and survived for about a week before dying. “I have had a few patients like this,” the doctor said. “Sadly, their outcomes tend to be bad in my experience.”

Conventional medical wisdom is that as oxygen supplies fall, the heart, brain and other vital organs are placed at risk – and the effect is thought to be cumulative. Typically patients would lose consciousness below an oxygen saturation of 75%. However, it is not the fall in oxygen levels itself that leaves people feeling breathless. Instead, the body senses the rising levels of carbon dioxide that typically occur simultaneously as the lungs are unable to clear gas as efficiently.
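The thresholds quoted in the two paragraphs above can be summarised in a short sketch. This is purely illustrative, not clinical guidance: the cut-offs (95% as the lower bound of normal, 75% as the level below which patients would typically lose consciousness) are simply the figures reported in this article.

```python
# Illustrative classifier for pulse-oximeter readings, using only the
# thresholds quoted in the article. Not clinical guidance.
def classify_spo2(saturation_pct: float) -> str:
    """Label an oxygen saturation reading (in percent) per the article's figures."""
    if saturation_pct >= 95:  # a healthy person: at least 95%
        return "normal"
    if saturation_pct >= 75:  # hypoxic, but typically still conscious
        return "hypoxic"
    return "typically unconscious"  # below ~75%, consciousness is usually lost

# The 'happy hypoxia' puzzle: patients in the middle band who look and
# talk like patients in the top band.
print(classify_spo2(97))
print(classify_spo2(84))
print(classify_spo2(50))
```

The puzzle the doctors describe is that some Covid-19 patients sitting well inside the second or even third band present as though they were in the first.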

But in some Covid-19 patients, this response does not appear to be kicking in. “I don’t think any of us expect that what we’re seeing can be explained by one process,” said Bannard-Smith. Swelling and inflammation in the lungs is likely to make it difficult for oxygen to enter the bloodstream.

There is also emerging evidence that Covid-19 can cause blood clotting. The vessels in the lungs that collect oxygen and transfer it into the wider bloodstream are so tiny they can become blocked with the smallest of clots.

Several clinical trials are looking at whether blood thinners could prevent or treat complications of Covid-19, including respiratory problems and low blood oxygen. Some have suggested that, since people are often oblivious to falling oxygen levels, those with Covid-19 symptoms or a positive test result should be given pulse oximeters, a simple device that clips on to the finger and can be used to detect oxygen levels at home.

However, as yet there is no evidence that early detection of hypoxia would help avoid severe outcomes and Charlesworth said the practicalities would be difficult. “Transportation of the devices would put more people on the road,” he said. “Then there are issues around people buying them on the internet and whether they [have proper safety certificates] … If you’re at the point of needing your oxygen levels monitored that’s the time to go to hospital.”

A Pandemic Moves Peer Review to Twitter
by Justin Fox / May 5, 2020

Last June, the Cold Spring Harbor Laboratory on Long Island, Yale University and The BMJ (formerly the British Medical Journal) started a new “preprint server” for medical research called medRxiv. Preprint is one term of art for an academic paper that hasn’t been peer reviewed or published yet. Working paper is another. They’ve been distributed at meetings and seminars for as long as anyone can remember. In 1991, physicists began sharing theirs on the internet on a server that came to be called arXiv (pronounced “archive”). Mathematicians, astronomers, economists and scholars in a few other disciplines soon followed suit — some on arXiv, some on other sites.

Medical researchers did not. For one thing, peer-reviewed biomedical journals publish research findings faster than those in some other fields (average time to publication is half what it is in business and economics), so the need for a speedy alternative was less pronounced. For another, medical research often involves questions of life or death that presumably deserve more pre-publication scrutiny than, say, a theoretical physics paper. And given that publishing in prestigious journals is key to career advancement, and prestigious medical journals have long made a big deal about having exclusives on new research results, researchers had legitimate worries that releasing results earlier could hurt them.

Things started off unsurprisingly slowly for medRxiv last summer, and there wasn’t all that much sign of an acceleration over the course of 2019. Then a new coronavirus began infecting people in Wuhan, China, and, well, you can probably guess what happened next. Page views to the medRxiv site are now averaging 15 million a month, up from 1 million before the pandemic. Something significant has changed in medical research.

Many of the coronavirus-related papers being posted on medRxiv are rushed and flawed, and some are terrible. But a lot report serious research findings, some of which will eventually find their way into prestigious journals, which have been softening their stance on previously released research. (“We encourage posting to preprint servers as a way to share information immediately,” emails Jennifer Zeis, director of communications at the New England Journal of Medicine.) In the meantime, the research is out there, being commented on and followed up on by other scientists, and reported on in the news media. The journals, which normally keep their content behind steep paywalls, are also offering coronavirus articles outside of them.

New efforts to sort through the resulting bounty of available research are emerging, from a group of Johns Hopkins University scholars sifting manually through new Covid-19 papers to a 59,000-article machine-readable data set, requested by the White House Office of Science and Technology Policy and enabled by an assortment of tech corporations and academic and philanthropic organizations, that is meant to be mined for insights using artificial intelligence and other such means.

This is the future for scientific communication that has been predicted since the spread of the internet began to enable it in the early 1990s (and to some extent long before then), yet proved slow and fitful in its arrival. It involves more or less open access to scientific research and data, and a more-open review process with a much wider range of potential peers than the peer review offered by journals. For its most enthusiastic boosters, it is also an opportunity to break through disciplinary barriers, broaden and improve the standards for research success and generally just make science work better. To skeptics, it means abandoning high standards and a viable economic model for research publishing in favor of a chaotic, uncertain new approach.

I’m mostly on the side of the boosters here, but have learned during five years of writing on and off about academic publishing that the existing way of doing things is quite well entrenched, and that would-be innovators often misunderstand the challenges involved in displacing or replacing it. This moment does feel different, though. “It’s going to really be fascinating to see if this will be the tipping point,” says Heather Joseph, executive director of the Scholarly Publishing and Academic Resources Coalition, an organization of academic libraries that has been pushing hard for a more open research infrastructure. “Because of the way distribution of scientific information is being piloted in a new way in the Covid crisis, my hope is that this will spill over to other areas in subsequent years,” adds Ijad Madisch, a German virologist who is founder and chief executive of ResearchGate, a social network for researchers that has seen a surge in activity and collaboration around Covid-19. “It scares me that we as scientists might just go back to doing things as we did before.”

At medRxiv, co-founder Richard Sever is pretty sure that medical researchers won’t be turning away from preprints after the crisis has passed. “Once a field starts doing this, they don’t stop,” he says. Sever is assistant director of the Cold Spring Harbor Laboratory Press and also co-founder of bioRxiv, medRxiv’s sister preprint server, which he has watched catch on with one biology subfield after another (first genomics, then cell biology, most recently neuroscience) since its founding in 2013. bioRxiv has also seen a recent surge in submissions and readership, albeit less dramatic than medRxiv’s given that it was starting from a much larger base.

A big part of the attraction of preprints for researchers studying a fast-moving phenomenon such as Covid-19 is that they rather than journal editors control the timing of the release of new research results. “It’s a scoop-protection device,” Sever says. Other major destinations for coronavirus-related preprints include Research and the Center for Open Science’s OSF Preprints. Overall, there were at least 8,830 biomedical preprints posted in March, up 142% from March 2019, according to data compiled by Jessica Polka and Naomi Penfold of the nonprofit Accelerating Science and Publication in Biology (aka ASAPbio).

bioRxiv and medRxiv don’t accept every submission. Pure opinion pieces aren’t allowed — there has to be actual research involved. Beyond that, says Sever, “if something goes up on bioRxiv it just means we don’t think it’s dangerous and it’s probably not crazy nonsense,” while for medRxiv there’s heightened scrutiny of potentially dangerous claims plus a checklist of conditions that any clinical research paper must satisfy. Both servers also recently began declining papers that pointed to treatments for the coronavirus based purely on computer modeling. “We decided that somewhere on this spectrum was a point where peer review was needed,” Sever says.

This came as something of a shock to Albert-László Barabási, a prominent network scientist at Northeastern University in Boston who had a paper on a “Network Medicine Framework for Identifying Drug Repurposing Opportunities” rejected last month by bioRxiv. He eventually just posted it on arXiv instead, but wondered on Twitter if it might make more sense for bioRxiv to create a scientists-only list for potentially sensitive Covid-19 research. ResearchGate’s Madisch also likes the idea of a setup “where the research community can give feedback before it’s released to the public,” but Sever said he worries that such an approach would just end up favoring an in-crowd of scientists at top universities.

So for now, at least, it’s all happening in public. One oft-heard complaint is that this allows unvetted research to be distributed to lay readers — as with the paper posted on bioRxiv in late January that found an “uncanny similarity” between several genetic sequences in the new coronavirus and those in the human immunodeficiency virus that causes AIDS, findings that, as BuzzFeed News science reporter Stephanie M. Lee described in an account of the paper’s rise and fall, were immediately latched onto online as evidence that the virus was man-made.

After other researchers tweeted criticism that the findings were in fact probably the product of random chance, though, the authors retracted the paper. Clearly, preprint servers can allow bad information to be presented to the public. But research findings published in peer-reviewed journals have to be retracted sometimes, too, and many more turn out to be wrong in the sense that they can’t be replicated by subsequent studies. As Stanford Medical School’s John Ioannidis argued in a 2005 paper so famous that it has its own Wikipedia page, “most published research findings are false.”

That brings us to perhaps the most vigorously debated medRxiv paper so far, “COVID-19 Antibody Seroprevalence in Santa Clara County, California,” posted on the site April 17 by a multidisciplinary team of authors that included Ioannidis. The paper reported the results of testing for coronavirus antibodies among 3,300 county residents recruited by Facebook ads, 1.5% of whom tested positive. The authors then made a number of statistical adjustments that upped their estimate of the percentage of county residents who had been infected with the coronavirus to 2.49% to 4.16%, which was 50 to 80 times the number of confirmed cases at the time and implied a Covid-19 fatality rate of just 0.12% to 0.2% — not all that different from the rates usually reported for seasonal influenza (although the actual ratio of influenza fatalities to infections is probably lower).
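The arithmetic behind those headline numbers is straightforward to reproduce. A back-of-the-envelope sketch: the county population and death count used below are illustrative assumptions chosen to be roughly consistent with the figures reported above, not values taken from the paper itself.

```python
# Back-of-envelope reproduction of the implied infection fatality rate (IFR).
# Population and death figures are illustrative assumptions, not from the paper.
population = 1_928_000  # approx. Santa Clara County population (assumption)
prevalence_low, prevalence_high = 0.0249, 0.0416  # adjusted estimates reported above
deaths = 100  # assumed county Covid-19 deaths at the time (assumption)

# Adjusted prevalence times population gives the implied number of infections.
infections_low = population * prevalence_low
infections_high = population * prevalence_high

# Fewer implied infections means a higher implied fatality rate, and vice versa.
ifr_high = deaths / infections_low
ifr_low = deaths / infections_high

print(f"Implied infections: {infections_low:,.0f} to {infections_high:,.0f}")
print(f"Implied IFR: {ifr_low:.2%} to {ifr_high:.2%}")
```

With these assumed inputs the implied IFR lands in the same neighbourhood as the 0.12% to 0.2% range reported above, which is why the prevalence adjustment, not the raw 1.5% positive rate, became the focus of the statistical critiques.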

Some infectious disease experts, whose estimates of Covid-19’s infection fatality rate have mostly centered on a range of about 0.5% to 1%, took to Twitter to offer skeptical but reasonably polite critiques. (Disclosure: so did I.) But physicist-turned-virus-researcher Richard Neher of the University of Basel and statistics professors Will Fithian of the University of California at Berkeley and Andrew Gelman of Columbia University all argued that the statistical adjustments in the paper were outright wrong, with Gelman concluding on his blog that the authors of the paper “owe us all an apology. We wasted time and effort discussing this paper whose main selling point was some numbers that were essentially the product of a statistical error.”

No such apology has been forthcoming, but the authors did on Thursday replace the paper on medRxiv with a revised version that changed their estimate of the percentage of county residents infected with the virus to a range of 1.3% to 4.7%, and generally did much more to show their work and stress the uncertainty inherent in their findings. They also expressed appreciation for the many criticisms the paper had received, concluding that, “We feel that our experience offers a great example on how preprints can be an excellent way of providing massive crowdsourcing of helpful comments and constructive feedback by the wider scientific community in real time for timely and important issues.”

Other scientists weren’t so sure the rough-and-tumble — and public — discussion around the paper was such a good thing. Two prominent medical school professors wrote an opinion piece for the science news site Stat decrying some of the criticisms as “ad hominem,” while Neeraj Sood of the University of Southern California, lead author of a related study in Los Angeles County that hasn’t been released as a preprint although preliminary results have been shared, told BuzzFeed’s Lee that “I don’t want ‘crowd peer review’ or whatever you want to call it. It’s just too burdensome and I’d rather have a more formal peer review process.”

But is a more formal peer review really better? “To me there’s no doubt that more eyes on something mean that ultimately a better judgment can be made,” says medRxiv’s Sever, a molecular biologist with long experience in editing scientific journals. “Journals send articles to two or three people, ask for comments in two weeks, and the reviewers never do it on time and you have to pester them. The chance you get a representative sample is not that great. Wouldn’t it be great if there were a lot of other discussions that had already happened that journals could incorporate in their evaluation?”

This implies a world in which open research-distribution channels and peer-reviewed journals exist side by side, playing different roles — which is how things have worked for quite a while in some academic disciplines. “In the old days, journals were viewed as a means of disseminating ideas,” Yale economist Pinelopi Goldberg, then the editor in chief of the American Economic Review, said at a conference I attended four years ago. “The most important function that journals have these days is the certification of quality.” Or as the saying supposedly goes (according to Sever), “Nobody ever got a job by putting something on arXiv.”

Academic journal publishing is dominated by a handful of for-profit publishers — the largest is Elsevier, a subsidiary of London-based RELX Plc — who sell digital access to their journals in large bundles to university libraries. Medical publishing is a bit different, with many leading journals controlled by nonprofit medical societies and distributed widely among practitioners, but they too rely heavily on subscription paywalls. Keeping scientific research that is funded by philanthropies, universities and government agencies behind such paywalls has been unpopular for a while, and has been coming under increasing pressure from those who pay for the research, especially in the European Union.

Publishers and universities have been exploring new “read and publish” contracts in which universities pay both for access to the journals and paywall-free publication of articles by their faculty, but as the consequences of the coronavirus hammer budgets, sharp cutbacks in library spending on journals seem inevitable. “Those kinds of reckonings are coming very quickly,” says Joseph.

Then again, these reckonings could endanger newer forms of scientific communication as well. Although preprint servers don’t cost nearly as much to run as academic journals — arXiv has expenses of about $2.7 million a year, while the American Association for the Advancement of Science, publisher of the interdisciplinary journal Science, reports journal-and-publishing-related expenses for 2018 of more than $45 million — most have to rely on the generosity of philanthropists and universities to pay the bills, and will struggle to make ends meet in a time of higher-education cutbacks.

As scientific-publishing veteran Kent Anderson wrote in his subscription newsletter last week: “Open science, which is essentially a basket of new expenses with no established funding models, isn’t going to suddenly receive millions or billions from the EU or some consortia of universities. So, hit the ‘pause’ button here.”

One can imagine the pause button being hit for many aspects of scientific research in the coming months and years. There surely won’t be a shortage of funding for those who study viruses, pandemics and the like, but many other fields could face tough times. In some disciplines this may push scholars toward more open, collaborative ways of doing research and communicating it; in others it may reduce experimentation and communication. The scientific community is not a monolith. But on the whole, it does seem to be moving in a new direction.