By: Matt T.
I've recently heard a lot of harsh criticism of epidemiology, and I hope to stir up a little controversy of my own. Please keep your comments clean and feel free to voice disagreement, but be prepared to be challenged to a duel as a result. I should warn you that I have never lost a swordfight. (Whether I have won any swordfights is another question altogether...)
Which of the following is necessary to establish causation?
(I know this seems easy, but humor me...)
a. A correlation
b. An experiment
c. A randomized, controlled trial
Most of you probably answered c. We've got it firmly in our heads that "correlation does not imply causation." The question is a little sneaky, however, and a is actually the correct answer. Now wait, before you start throwing rotten lycopene rockets at me, let me clarify. It would be more complete to say that correlation is necessary but not sufficient to establish causation. To infer that x causes y, there must obviously be a correlation between them, but this by itself is not enough.
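To make "necessary but not sufficient" concrete, here is a minimal sketch with made-up data (not any real study): a hidden confounder z drives both an "exposure" x and an "outcome" y, so x and y end up strongly correlated even though neither causes the other. The variable names and noise levels are purely illustrative.

```python
import random

random.seed(42)

n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]            # hidden confounder (e.g., age)
x = [zi + random.gauss(0, 0.5) for zi in z]           # "exposure", driven only by z
y = [zi + random.gauss(0, 0.5) for zi in z]           # "outcome", also driven only by z

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

r = pearson(x, y)
print(f"correlation(x, y) = {r:.2f}")  # strong, despite no causal link between x and y
```

Running this yields a correlation around 0.8 between two variables with no causal connection at all, which is exactly why correlation alone can never be sufficient.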
What exactly is necessary? Determining causation is a philosophical question at heart, with varying requirements in different branches of science. In medicine, causal reasoning is dominated by the thinking of Hill, 1965 (a classic everyone in life science should read, IMHO).
Table. Hill's criteria for distinguishing association from causation in epidemiology
1. Strength of association
2. Consistency (observed repeatedly, by different people, in different places)
3. Specificity
4. Temporality (exposure precedes outcome)
5. Biological gradient (dose-response)
6. Plausibility
7. Coherence
8. Experiment
9. Analogy
Hill never proposed that all of these must be satisfied. On the contrary, one conspicuous factor may establish causation in the absence of any other: Hill notes that the remarkable specificity of nose and lung cancers among nickel miners led to public health action, even though most of the other criteria could not be ascertained.
Many cause and effect relationships we take for granted were established by careful epidemiology. How many would question a causal connection between smoking and lung cancer, in spite of the conspicuous absence of long-term, randomized, controlled trials?
Hill was focused on epidemiology, but can't we apply these principles to science in a broader sense? Perhaps the difference between approaches to science is not that any one paradigm is intrinsically more valid, but rather that each is well suited to provide different information about the possibility of a causal relationship.
Careful epidemiology demonstrates strength and consistency in free-living human beings at realistic levels of exposure; however, we can never be completely sure we've adjusted for every potential source of confounding. Conversely, animal and in vitro models establish plausible mechanisms, but we can never be completely sure that their physiology generalizes to people. In particular, knock-outs and other gene manipulations characterize protein function in a way no other model can; however, we are then measuring a system that has been fundamentally altered. In short, we are no longer studying normal physiology, but the physiology of a system that exists nowhere else in nature.
Clinical trials combine the study of real human systems with experimentation and, if randomization does its job correctly, the even distribution ('washing out') of confounding variables among treatment groups. Even these, however, are fundamentally flawed: informed consent and voluntary participation mean we are studying a unique group, one that is usually more health-minded than the general population. Poor compliance and high attrition rates are also pervasive and limiting. Finally, it is prohibitively expensive to run RCTs long enough to capture effects of diet that accumulate over decades. Indeed, one wonders whether millions upon millions of dollars are well spent on large, long-term trials so burdened with noncompliance, attrition, and self-selection bias that their results are just as disputable as those of far less expensive cohort studies.
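The 'washing out' claim above can be sketched in a few lines. In this toy simulation (all numbers invented for illustration), each member of a cohort carries a baseline confounder score, say, health-mindedness, and is assigned to treatment or control by coin flip. By chance alone, the confounder ends up nearly identical in the two arms, without anyone ever measuring it.

```python
import random

random.seed(0)

n = 20_000
# Baseline confounder score for each participant (hypothetical trait).
cohort = [random.gauss(0, 1) for _ in range(n)]

# Randomize: each participant flips a fair coin for arm assignment.
treated, control = [], []
for score in cohort:
    (treated if random.random() < 0.5 else control).append(score)

mean_t = sum(treated) / len(treated)
mean_c = sum(control) / len(control)
print(f"treated mean = {mean_t:.3f}, control mean = {mean_c:.3f}")
# The arm means are nearly equal: the confounder is balanced by chance,
# even though the randomization never looked at it.
```

The point of the sketch is that randomization balances unmeasured confounders on average, which is precisely what no observational design can guarantee; the trial's remaining flaws (self-selection, attrition, cost) are a separate matter.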
I firmly believe that the best answer to these problems is to
avoid placing too much weight on any one research paradigm. By
combining various modes of inquiry, we gain the best from each,
and take great strides toward satisfying the requirement for
consistency: when epidemiology, animal or in vitro work and
cost-efficient clinical trials agree, we may feel very confident
in our defense of a causal connection. This confidence comes not
from any one type of study, but from the fact that they each
suggest the same conclusion. Regardless of your views on methods,
I think we can all agree that we as scientists are
interdependent. In the end, it takes many points of view to