Originally posted by TheLurch
To be clear, while some of them are being rejected for methodological problems, the primary issue is that most of the studies are small and were run in ways that make them non-equivalent, so their patient populations can't be pooled and treated as a single, larger study. The clearest way to see this is to go to this other page on the site, scroll down past the blue box, and look at the list of studies. For one thing, the endpoints are all over the place: some measure symptoms, others death, others hospitalization. The dose used varies by over an order of magnitude. And not mentioned there is that many of the studies give ivermectin alongside a varying cocktail of other treatments (examples include zinc, vitamin C, vitamin D, azithromycin, etc.), so it's impossible to tell what, if anything, is having an effect.
As a result of this methodological chaos, you can't combine any of this into a valid meta-analysis. So you're left with the individual studies, which typically look at fewer than 50 people. And with so few people, it's common for random chance to completely throw off the statistics.
Think of rolling a pair of dice 50 times. The odds of a double six are 1 in 36 per roll, so on average you'd expect to roll one about 1.4 times. But it wouldn't be shocking if you rolled it zero times, or four times. If that sort of randomness happens to favor either the experimental or the control group, you could get a result that looks good but is actually meaningless noise.
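If you want to see the swing for yourself, here's a quick sketch of the dice analogy in Python (the 1,000-repeat count is just my choice for illustration):

```python
import random

# Sketch of the dice analogy: roll a pair of dice 50 times and count
# double sixes. P(double six) = 1/36 per roll, so the expected count
# over 50 rolls is 50/36 ≈ 1.4 -- but individual runs swing widely.
def count_double_sixes(rng, rolls=50):
    return sum(
        1 for _ in range(rolls)
        if rng.randint(1, 6) == 6 and rng.randint(1, 6) == 6
    )

# Repeat the 50-roll experiment 1,000 times with different seeds.
counts = [count_double_sixes(random.Random(seed)) for seed in range(1000)]
print("average count:", sum(counts) / len(counts))   # hovers near 1.4
print("runs with zero:", counts.count(0))            # roughly a quarter of runs
print("runs with 4+:", sum(c >= 4 for c in counts))  # rarer, but far from shocking
```

A sizable fraction of runs land on zero, and a handful land on four or more, which is exactly the kind of spread that can make a tiny trial look like a winner or a dud by accident.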
(And, since there's a bias towards reporting positive results, you'd expect the published record to amplify one half of that randomness.)
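The publication-bias point can be made concrete with a simulation (the 25-patients-per-arm size and 60% recovery rate are numbers I've invented for illustration): run many small trials of a drug that truly does nothing, and count how often chance alone makes it look helpful. Journals then tend to see only the lucky half of that noise.

```python
import random

# Hypothetical null-effect simulation: the "drug" does nothing, so both
# arms recover at the same 60% rate, and any gap between arms is noise.
def one_null_trial(rng, n=25, p_recover=0.6):
    control = sum(rng.random() < p_recover for _ in range(n))
    treated = sum(rng.random() < p_recover for _ in range(n))
    return treated - control  # extra recoveries in the treated arm

rng = random.Random(1)
diffs = [one_null_trial(rng) for _ in range(10_000)]
# Count trials where the treated arm "wins" by 5+ recoveries (a 20-point gap).
looks_positive = sum(d >= 5 for d in diffs) / len(diffs)
print(f"{looks_positive:.1%} of no-effect trials still look clearly positive")
```

Roughly one trial in ten shows a 20-point "benefit" from a drug with zero effect, and those are disproportionately the ones that get written up.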
That's where we're at with ivermectin right now. I understand that a large, randomized, and blinded trial is currently in progress, which could finally bring some clarity. But the data we have now just isn't good enough to draw any conclusions.