So we have a very strange situation with some recent publications, one that deserves a closer look. The ones I refer to are the recent Lancet retrospective study on hydroxychloroquine use (blogged about here), one of the hospital-data studies on the effect of ACE inhibitor/ARB therapies on the outcomes of coronavirus patients, and a preprint on the use of ivermectin against the coronavirus (mentioned at the end of this blog post). All three of these rely on a large amount of multihospital patient data from a company called Surgisphere. Update: the Lancet now has an Expression of Concern out as well.
Note that the link there to the ivermectin preprint no longer works, because it has been withdrawn, for reasons that have not been made public. The ACEi/ARB paper has had an “Expression of Concern” attached to it by the editors of the NEJM. And the Lancet paper has come under fire from a number of commentators. All of this is because the source (or sources) of Surgisphere’s clinical data, and the way those data have been handled, are a lot more obscure than they should be, and definitely more obscure than they appeared to be at the start. It’s worth noting that these papers go both ways on the efficacy of the treatments involved: the ivermectin one suggested that it was a useful treatment for coronavirus patients, while the hydroxychloroquine one suggested that that drug wasn’t.
Here’s a Twitter thread that goes into some of the concerns, from someone who’s worked with such large hospital databases before. As you’ll see from Joshua Niforatos’ posts, there are a lot of sources for such numbers, each with its own complications, degrees of detail, degrees of disclosure, and legal/regulatory issues. He believes that Surgisphere could indeed have the data they’re referencing, but he’s puzzled about how this was done and finds their disclosure on such issues severely lacking. For more, here’s a post at Free Range Statistics that goes into still more concerns – at the very least, Surgisphere is an opaque company, and there are enough questions raised to make you wonder whether the situation isn’t a lot worse than that. Looking at the company’s employees, at the software it claims to be using, and at many other features of the story does not inspire confidence, and that post is a lot less hopeful that the underlying data really exist. Here’s another statistics site (the latest post there, I believe) raising plenty of problems as well. As it notes, the underlying conclusions of these papers could still be correct, but the papers do not do a sufficient job of making that case. That author is also worried that the underlying data may not even be there, which would be awful. There’s a large open letter on these issues as well, specifically asking (among other things) how so many patients could have so much cardiovascular data available.
Update: more on the story at Science. It isn’t looking any better, I’ll say that.
For their part, Surgisphere has responded here, promising an independent audit of their numbers and how they were obtained. That had better take place with all due speed, and it had better be good. If the company really has access to this volume of electronic health records around the world, and can really do the sorts of retrospective analyses that they’ve published, then good for them. At the very least, though, they need to do a much better job of it when they publish such work. But otherwise . . . well, a bad study isn’t just a zero. It’s a negative number: bad medical papers can do actual harm, and in the case of coronavirus therapy recommendations that harm can be immediate. So let’s have some answers as fast as we can get them.