Discussion about this post

An Ominous:

This study definitely lacks credibility; it seems fishy to me and doesn't have the ring of truth about it.

Radovan Zentner:

These studies have two major flaws:

1. Can you trust the questionnaire? Just imagine: when a study is done on a certain substance, that substance is measured in accredited labs, lots and lots of standards are applied, and accreditation measures are checked. And here? What scrutiny was applied to the study participants? Frankly, you are lucky if they participate with honesty on their side. And honesty and accuracy can still be miles apart.

2. Confounding variables: this study was done in the medical community, so even if the conclusion is true, it is valid only for that community. But this niche study population is just the tip of the iceberg when it comes to confounding variables. There are many confounders in such studies, and their noise signals add up. It cuts both ways: if you adjust for them, you risk overadjustment; if you do not adjust for them... well, you did not adjust for them! Even a well-conducted study can produce misleading results if overadjustment introduces excessive noise, masks true effects, or amplifies statistical instability. Sensitivity analyses, careful confounder selection, and alternative causal inference methods (e.g., instrumental variables, directed acyclic graphs) can help mitigate these risks, but who does that?

Finally, a small shopping list of confounding-variable issues that make most epidemiological studies laughable (yet unfortunately still taken seriously in the popular media):

1. Overadjustment - when a study controls for variables that are intermediates on the causal pathway, attenuating or even reversing a true effect. The intermediate variable is correlated with the hypothesized cause ("exposure" in epidemiology) or effect ("outcome" in epidemiology) but is not a true confounder.

For example, if an exposure increases both blood pressure and cardiovascular disease (CVD) severity, and the study adjusts for blood pressure when analyzing cardiovascular risk, it may mask or even invert the exposure's true effect. By tinkering with confounders we could thus "show" that salt prevents CVD, or of course that it causes it.
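The mediator-adjustment trap above is easy to demonstrate with simulated data. The sketch below is purely illustrative (the variable names, effect sizes, and data are made up, not taken from any real study): the exposure acts on the outcome only through blood pressure, and "controlling" for blood pressure erases the effect.

```python
import numpy as np

# Illustrative simulation (all numbers invented): exposure -> blood_pressure -> cvd_risk,
# i.e. blood pressure is a mediator on the causal path, not a confounder.
rng = np.random.default_rng(0)
n = 100_000
exposure = rng.normal(size=n)                          # e.g. salt intake (standardized)
blood_pressure = 0.8 * exposure + rng.normal(size=n)   # mediator
cvd_risk = 0.5 * blood_pressure + rng.normal(size=n)   # outcome, driven only via the mediator

def ols(y, covariates):
    """Least-squares coefficients, with an intercept prepended."""
    X = np.column_stack([np.ones(len(y))] + list(covariates))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Crude model: recovers the total effect of the exposure (about 0.8 * 0.5 = 0.4).
total = ols(cvd_risk, [exposure])[1]

# "Adjusted" model: conditioning on the mediator wipes the effect out (about 0).
adjusted = ols(cvd_risk, [exposure, blood_pressure])[1]

print(f"total effect estimate:  {total:+.3f}")
print(f"overadjusted estimate:  {adjusted:+.3f}")
```

Both regressions are "correct" arithmetic; only the causal interpretation of the adjusted one is wrong, which is exactly the point.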

2. Residual Confounding and Measurement Noise - If confounders are not measured accurately or completely, statistical adjustment may still leave residual confounding. In studies with high variability in confounders, noise can overwhelm the signal, leading to wide confidence intervals, spurious associations, or null results. Wide confidence intervals would at least signal that the study has a problem.

3. Model Instability and Multicollinearity - Including too many correlated confounders can cause statistical instability (e.g., variance inflation, collinearity), making it difficult to isolate the independent effect of the exposure. This can lead to wide confidence intervals, false negatives (Type II errors), and an incorrect direction of effect estimates.
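Variance inflation from collinearity can be quantified with the variance inflation factor (VIF). A minimal sketch, again with invented data: two near-duplicate "confounders" produce a VIF far above the common rule-of-thumb cutoff of 10.

```python
import numpy as np

# Illustrative data: c2 is nearly a copy of c1, so the model cannot
# separate their individual contributions.
rng = np.random.default_rng(1)
n = 5_000
c1 = rng.normal(size=n)
c2 = 0.98 * c1 + 0.02 * rng.normal(size=n)   # almost collinear with c1

def vif(x, others):
    """VIF = 1 / (1 - R^2), where R^2 comes from regressing x on the other covariates."""
    X = np.column_stack([np.ones(len(x))] + list(others))
    resid = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]
    r2 = 1 - resid.var() / x.var()
    return 1 / (1 - r2)

vif_c1 = vif(c1, [c2])
print(f"VIF of c1 given c2: {vif_c1:.0f}")   # far above the usual cutoff of ~10
```

A VIF this large means the coefficient's standard error is inflated by roughly the square root of that factor, which is how collinear confounders widen confidence intervals and can even flip the sign of an estimate.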

4. Data-Dredging and Multiple Comparisons - When adjusting for many factors, the risk of spurious findings increases, especially if many subgroup analyses are performed. Even with proper statistical corrections, extreme variability in data can amplify false signals and reverse the result.
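The multiple-comparisons point can also be shown in a few lines. In this sketch (simulated data, no real study involved) the null hypothesis is true in every one of 100 subgroup "tests", yet at alpha = 0.05 we still expect around five "significant" findings; a Bonferroni correction (alpha / m) reins that in at the cost of power.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
m, n, alpha = 100, 200, 0.05   # m subgroup tests, n per group

pvals = []
for _ in range(m):
    a = rng.normal(size=n)   # both groups drawn from the SAME distribution,
    b = rng.normal(size=n)   # so every rejection below is a false positive
    # Two-sample z-test; the variance is 1 by construction.
    z = (a.mean() - b.mean()) / math.sqrt(2 / n)
    pvals.append(math.erfc(abs(z) / math.sqrt(2)))   # two-sided p-value

naive = sum(p < alpha for p in pvals)           # uncorrected rejections
bonferroni = sum(p < alpha / m for p in pvals)  # Bonferroni-corrected rejections

print(f"false positives at alpha = {alpha}: {naive}")
print(f"false positives after Bonferroni:  {bonferroni}")
```

This is why subgroup findings reported without a stated correction procedure deserve suspicion.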

5. Effect Modification and Heterogeneous Populations - If intervention and control groups differ drastically in confounding factors, standard adjustment methods (e.g., regression, propensity scores) might fail to balance them adequately. Subgroup interactions might distort findings, making generalization difficult.
