(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.



KosAbility: Observational Studies - Do Not Be Misled! [1]

[This content is not subject to review by Daily Kos staff prior to publication.]

Date: 2025-07-27

Observational studies determine associations. They do not prove causation. People can be misled into thinking there is causation when there is not. Even medical research professionals have made this mistake.

This is part one of a two-part series. In part two we will examine an observational study worth paying attention to. Here we look at cases where these studies find associations that are not causation. Some ways this can happen:

Reverse causation - the putative cause is actually an effect of the undesired condition.

Confounding - the actual cause is associated with the putative cause. A simple example would be an association of ashtrays with lung cancer. A more detailed real example follows below.

Abusing the data - someone determined to find a particular cause guilty can keep testing until an adverse association turns up. This is not a valid procedure; I explain why below.

Why do observational studies?

Interventional studies can prove causation: some participants receive a placebo indistinguishable from the actual treatment, and the results are compared. However, it is unethical to subject study participants to exposures suspected of causing harm, even if others accept those exposures voluntarily. In such cases an observational study is necessary: monitor the results of the exposure and compare them to a similar unexposed population. To avoid the kinds of problems cited above, the study should be supplemented with a good animal model demonstrating causation. A good animal model means realistic dosages; too much of anything, even water, can be harmful.

Another reason for doing an observational study is to check on long-term effects. Running an interventional study of 20 years' duration is not practical. Instead, one can check medical records, or use the data from long-running observational studies like the Framingham study. If the study concerns adverse effects of a disease treatment, it is also necessary to check the effect of alternative treatments. If an alternative treatment does not cause the adverse effect, that eliminates confounding due to the disease itself.

Reverse causation example

It is well established that Parkinson's is associated with low uric acid levels. Some investigators, persuaded that this was a cause of Parkinson's, embarked upon a large phase 3 trial that raised uric acid levels in Parkinson's patients. It was halted early for futility. The treatment caused kidney stones in some patients and was accompanied by an increased rate of Parkinson's progression.

What went wrong?

For starters, uric acid is a waste product, and high levels are associated with, and in some cases proven to cause, cardiovascular disease, stroke, kidney disease, and gout.

In the matter of Parkinson's, levodopa is the mainstay treatment because it crosses the blood-brain barrier, whereupon it is converted into the missing dopamine. It turns out that protein competes with levodopa for absorption. The worse the Parkinson's, the more frequently patients need levodopa. They correspondingly reduce their protein intake. Less protein means less uric acid. Thus we have reverse causation - Parkinson's results in lower uric acid in accord with the severity of the condition, for those patients who take levodopa. I wrote about this in more detail here, with links to the medical literature: The Uric Acid Failure: Lessons Learned

Confounding Example

A while back we had a pair of studies on cardiovascular disease. One study, in the US, found an association between egg consumption and increased cardiovascular disease. Another study, in China, found egg consumption was associated with less cardiovascular disease.

What happened?

It turns out that the true villains are advanced glycation endproducts. These are vascular toxins that result from high temperature cooking of fatty protein, crispy bacon being one example. In the US, bacon consumption is highly correlated with egg consumption. In China bacon is a rarity. A different US trial that did look at bacon consumption exonerated the eggs. So this was a case of confounding. I wrote about this in detail, with citations to the medical literature, here: A Tale Of Two Studies Leads To A Deeper Understanding Of Cardiovascular Disease

Abuse the data

(If your eyes glaze over here it is okay to skip on down to the summary)

A test of association, or causation, is usually regarded as valid if the probability of a false positive is less than 5%. This is the p<.05 standard. Suppose we run two different tests of whether a putative cause is associated with different adverse effects. If we get at least one positive result and each test was run according to the p<.05 standard, are we entitled to claim p<.05 for the overall result? No!

If each test has a 5% probability of yielding a false positive, the probability that at least one of the two tests yields a false positive is about 10%. It is slightly less because the case of two false positives counts only once and not twice. (The exact probability of at least one false positive in this case is 9.75%).
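The arithmetic can be checked directly: for n independent tests run at the p < .05 level, the chance of at least one false positive is 1 - 0.95 to the nth power. A minimal sketch (the function name is mine, chosen for illustration):

```python
# Probability of at least one false positive among n independent tests,
# each run at the p < .05 standard. Two tests give 9.75%, not 10%.
def family_wise_error(n_tests, alpha=0.05):
    return 1 - (1 - alpha) ** n_tests

print(f"{family_wise_error(2):.4f}")  # 0.0975
```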

So as we keep testing, the likelihood of finding a false positive keeps rising. Eventually we'll get a positive result, almost certainly false. In such cases the reported p value is typically close to 0.05, and the increased odds of an adverse effect (odds ratio) are small.
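To see how fast this compounds, the same formula can be tabulated for increasing numbers of tests (again assuming the tests are independent):

```python
# How the chance of at least one false positive grows as tests accumulate,
# assuming independent tests each run at p < .05.
for n in (1, 2, 5, 10, 14, 20, 50):
    p_false = 1 - 0.95 ** n
    print(f"{n:3d} tests: {p_false:.1%} chance of at least one false positive")
```

By 14 tests a false positive is more likely than not, and by 50 it is nearly certain.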

An example of abusing the data: there is good reason to be concerned about acetaminophen use during pregnancy. The purpose of this discussion is not to cast doubt on that issue, but to highlight how not to go about studying it: Maternal use of acetaminophen during pregnancy and neurobehavioral problems in offspring at 3 years: A prospective cohort study.

Child behavioral problems were measured at the age of 3 years, using the 7 syndrome scale scores from the Child Behavior Checklist (CBCL) for ages 1 1/2 to 5. The investigators tested for 7 different adverse effects individually, instead of one overall high score. After adjustment for prenatal stress and other confounders, 2 syndrome scales remained significantly higher in children exposed to acetaminophen: sleep problems (aOR = 1.23, 95% CI = 1.01-1.51) and attention problems (aOR = 1.21, 95% CI = 1.01-1.45).

They got a nominally positive result on 2 tests out of 7. Usually a p-value is given for such results along with the confidence interval; that is notably absent here, and no p-values appear anywhere in this study. The 95% confidence interval (CI) means that 95 times out of 100 the interval will contain the actual result, the adjusted odds ratio (aOR) in this case. The odds ratios were adjusted to account for confounders such as psychosocial stress during pregnancy, which is good as far as it goes, but no adjustment was made for multiple tests.

The confidence interval is calculated using a formula that includes the standard deviation and the number of samples. If the confidence interval extends down to or below unity, the test has failed statistical significance. Notice that in both cases the bottom of the reported confidence interval was 1.01, the lowest value that can be considered passing. An adjustment to the confidence interval for selecting 2 positive results from 7 tests is necessary. Unlike with p-values, it is not obvious how to do so, and an online search was not helpful.
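One standard, if conservative, way to make such an adjustment is the Bonferroni correction: divide the significance level by the number of tests and widen each interval accordingly. The sketch below is my own illustration, not anything the study's authors did, and it assumes the reported CI came from a normal approximation on the log-odds scale; the numbers are the sleep-problems result quoted above (aOR 1.23, 95% CI 1.01-1.51):

```python
# Hedged sketch: Bonferroni-style widening of a reported 95% CI for an
# odds ratio, assuming the original CI was exp(log_or +/- 1.96 * se).
from math import exp, log
from statistics import NormalDist

def bonferroni_or_ci(or_point, lo95, hi95, n_tests, alpha=0.05):
    """Widen a reported 95% CI for an odds ratio to cover n_tests comparisons."""
    z95 = NormalDist().inv_cdf(1 - alpha / 2)                # ~1.96
    se = (log(hi95) - log(lo95)) / (2 * z95)                 # recover the standard error
    z_adj = NormalDist().inv_cdf(1 - alpha / (2 * n_tests))  # ~2.69 for 7 tests
    return (exp(log(or_point) - z_adj * se),
            exp(log(or_point) + z_adj * se))

lo, hi = bonferroni_or_ci(1.23, 1.01, 1.51, n_tests=7)
print(f"Bonferroni-adjusted CI: ({lo:.2f}, {hi:.2f})")  # lower bound drops below 1.0
```

Under these assumptions the adjusted interval runs from roughly 0.93 to 1.62; with the lower bound now below 1.0, the result would no longer count as statistically significant.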
Had any such adjustment been made, these results would have failed statistical significance. These odds ratios show a 20% increase in risk, which is not that high, unless you are one of the unlucky 20%. If that 20% number is real. It is time to sit up and take notice when a single test shows something like a doubling of risk, which will be the subject next month.

Summary of statistical differences between weak and strong observational studies:

Item                                 | Weak                     | Strong
Number of adverse conditions tested  | Many                     | 1
Number of positive results           | Few                      | All
p-value                              | Close to .05, or missing | Much less than .05
Odds ratio                           | ~1.2                     | >2
Low end of 95% confidence interval   | Very close to 1.0        | Comfortable distance from 1.0

KosAbility is by and for people living with disabilities;

Who love someone with a disability; or

Who want to know more about the issues.

Our discussions are open threads in the context of this community. Feel free to:

Comment on the diary topic;

Ask questions of the diarist or generally to everyone;

Share something you've learned; or

Gripe about your situation.

Our only rule is to be kind. Bullies will be ignored and/or obliterated. For more elaboration on our group rule please read this story. At KosAbility we amicably discuss any and all matters pertaining to health. Our discussions are not medical advice. Medical advice can only be provided by a qualified physician who has examined the patient. If you have worrisome symptoms please see your doctor!

All health related discussion welcome!

[END]
---
[1] Url: https://www.dailykos.com/stories/2025/7/27/2335204/-KosAbility-Observational-Studies-Do-Not-Be-Misled?pm_campaign=front_page&pm_source=latest_community&pm_medium=web

Published and (C) by Daily Kos
Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.
