Taubes: Nutrition and Obesity "Research" Is Generally *Not* Science
[E]very time in the past that these researchers had claimed that an association observed in their observational studies was a causal relationship, and that causal relationship had then been tested in experiment, the experiment had failed to confirm the causal interpretation -- i.e., the folks from Harvard got it wrong. Not most times, but every time. No exception. Their batting average circa 2007, at least, was .000.
...
I never used the word scientist to describe the people doing nutrition and obesity research, except in very rare and specific cases. Simply put, I don't believe these people do science as it needs to be done; it would not be recognized as science by scientists in any functioning discipline.
Science is ultimately about establishing cause and effect. It's not about guessing. You come up with a hypothesis -- force x causes observation y -- and then you do your best to prove that it's wrong. If you can't, you tentatively accept the possibility that your hypothesis was right. Peter Medawar, the Nobel Laureate immunologist, described this proving-it's-wrong step as "the critical or rectifying episode in scientific reasoning." Here's Karl Popper saying the same thing: "The method of science is the method of bold conjectures and ingenious and severe attempts to refute them." The bold conjectures, the hypotheses, making the observations that lead to your conjectures... that's the easy part. The critical or rectifying episode, which is to say, the ingenious and severe attempts to refute your conjectures, is the hard part. Anyone can make a bold conjecture. (Here's one: space aliens cause heart disease.) Making the observations and crafting them into a hypothesis is easy. Testing it ingeniously and severely to see if it's right is the rest of the job -- say 99 percent of the job of doing science, of being a scientist.
...
[B]ecause this is supposed to be a science, we ask the question whether we can imagine other, less newsworthy explanations for the association we've observed. What else might cause it? An association by itself contains no causal information. There are an infinite number of associations that are not causally related for every association that is, so the fact of the association itself doesn't tell us much.
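To make that point concrete, here is a minimal sketch -- hypothetical variables and numbers of my own, not from any study -- of how a shared cause can manufacture a strong association between two quantities that have no causal connection at all:

```python
# Minimal sketch: a confounder z drives both x and y; x and y never
# touch each other causally, yet they end up strongly associated.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)       # shared cause (some unmeasured trait)
x = z + rng.normal(size=n)   # "exposure": caused by z only
y = z + rng.normal(size=n)   # "outcome": caused by z only

# x and y correlate (~0.5) even though neither causes the other.
print(f"corr(x, y) = {np.corrcoef(x, y)[0, 1]:.2f}")
```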
...
[A]s we move from the bottom quintile of meat-eaters (those who are effectively vegetarians) to the top quintile of meat-eaters we see an increase in virtually every accepted unhealthy behavior -- smoking goes up, drinking goes up, sedentary behavior (or lack of physical activity) goes up -- and we also see an increase in markers of poor health -- BMI goes up, blood pressure goes up, etc. So what could be happening here?
...
[P]eople who comply with their doctors' orders when given a prescription are different and healthier than people who don't. This difference may be ultimately unquantifiable. The compliance effect is another plausible explanation for many of the beneficial associations that epidemiologists commonly report, which means this alone is a reason to wonder if much of what we hear about what constitutes a healthful diet and lifestyle is misconceived.
...
[W]henever epidemiologists compare people who faithfully engage in some activity with those who don't -- whether taking prescription pills or vitamins or exercising regularly or eating what they consider a healthful diet -- the researchers need to account for this compliance effect or they will most likely infer the wrong answer. They'll conclude that this behavior, whatever it is, prevents disease and saves lives, when all they're really doing is comparing two different types of people who are, in effect, incomparable.
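A toy simulation makes the compliance effect easy to see. In this hedged sketch (invented numbers, nothing measured from any cohort), the "treatment" is completely inert, yet compliers come out looking healthier, because the same unmeasured trait drives both compliance and good outcomes:

```python
# Compliance-effect sketch: the pill does nothing, but the trait that
# makes people comply also makes them healthier, so a naive comparison
# of compliers vs. non-compliers shows a spurious "benefit."
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

conscientiousness = rng.normal(size=n)                 # unmeasured trait
complies = conscientiousness + rng.normal(size=n) > 0  # who takes the pill
health = conscientiousness + rng.normal(size=n)        # no treatment term!

print(f"compliers:     {health[complies].mean():+.3f}")   # clearly positive
print(f"non-compliers: {health[~complies].mean():+.3f}")  # clearly negative
```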
...
[O]bservational studies may have inadvertently focused their attention specifically on, as Jerry Avorn says, the "Girl Scouts in the group, the compliant ongoing users, who are probably doing a lot of other preventive things as well."
...
It's this compliance effect that makes these observational studies the equivalent of conventional-wisdom confirmation machines.
...
So when we compare people who ate a lot of meat and processed meat in this period to those who were effectively vegetarians, we're comparing people who are inherently incomparable. We're comparing health-conscious compliers to non-compliers; people who cared about their health and had the income and energy to do something about it, and people who didn't. And the compliers will almost always appear to be healthier in these cohorts because of the compliance effect, if nothing else. No amount of "correcting" for BMI, blood pressure, smoking status, etc. can correct for this compliance effect, which is the product of all the health-conscious behaviors that can't be measured, or just haven't been measured. And we know this is real because compliance effects show up even in randomized controlled trials. When the Harvard people insist they can "correct" for this, or that it's not a factor, they're fooling themselves. And we know they're fooling themselves because the experimental trials keep contradicting their claims.
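Here is a hedged sketch of why statistical "correcting" falls short. All the variables are invented for illustration: we adjust for a measured proxy of the unmeasured trait (BMI, say), but because the proxy is noisy, the adjusted estimate for meat stays biased away from its true effect, which is zero by construction:

```python
# Residual-confounding sketch: adjusting for a noisy proxy (bmi) of an
# unmeasured trait does NOT remove the bias in the meat coefficient.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

trait = rng.normal(size=n)              # unmeasured health consciousness
meat = -trait + rng.normal(size=n)      # conscious people eat less meat
bmi = -trait + rng.normal(size=n)       # measured, but an imperfect proxy
disease = -trait + rng.normal(size=n)   # meat has zero causal effect here

# Ordinary least squares of disease on meat, "correcting" for bmi.
X = np.column_stack([np.ones(n), meat, bmi])
beta, *_ = np.linalg.lstsq(X, disease, rcond=None)
print(f"adjusted meat coefficient: {beta[1]:+.3f}")  # ~ +0.33, not 0
```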
...
This is why the best epidemiologists -- the ones I quote in the NYT Magazine article -- think this nutritional epidemiology business is a pseudoscience at best. Observational studies like the Nurses' Health Study can come up with the right hypothesis of causality about as often as a stopped clock gives you the right time. It's bound to happen on occasion, but there's no way to tell when that is without doing experiments to test all your competing hypotheses. And what makes this all so frustrating is that the Harvard people don't see the need to look for alternative explanations of the data -- for all the possible confounders -- and to test them rigorously, which means they don't actually see the need to do real science.
...
Now we're back to doing experiments -- i.e., how we ultimately settle this difference of opinion. This is science. Do the experiments.
...
So we do a randomized controlled trial. Take as many people as we can afford, randomize them into two groups -- one that eats a lot of red meat and bacon, one that eats a lot of vegetables, whole grains, and pulses and very little red meat and bacon -- and see what happens. These experiments have effectively been done. They're the trials that compare Atkins-like diets to other, more conventional weight-loss diets -- AHA Step 1 diets, Mediterranean diets, Zone diets, Ornish diets, etc. These conventional weight-loss diets tend to restrict meat consumption to different extents because they restrict fat and/or saturated fat, and meat is rich in both. Ornish's diet is the extreme example. And when these experiments have been done, the meat-rich, bacon-rich Atkins diet almost invariably comes out ahead, not just in weight loss but also in heart disease and diabetes risk factors. I discuss this in detail in chapter 18 of *Why We Get Fat*, "The Nature of a Healthy Diet." The Stanford A TO Z Study is a good example of these experiments. Over the course of the experiment -- a year in this case -- the subjects randomized to the Atkins-like meat- and bacon-heavy diet were healthier. That's what we want to know.
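And this is the logic of why randomization settles it. In the sketch below (toy numbers, not trial data), assigning the diet by coin flip severs the link between the unmeasured trait and the exposure, so the simple group comparison recovers the true effect:

```python
# Randomization sketch: diet is assigned by coin flip, so it is
# independent of the unmeasured trait; the naive comparison is unbiased.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

trait = rng.normal(size=n)            # unmeasured health consciousness
diet = rng.integers(0, 2, size=n)     # randomized assignment
outcome = trait + rng.normal(size=n)  # true diet effect is zero

diff = outcome[diet == 1].mean() - outcome[diet == 0].mean()
print(f"estimated diet effect: {diff:+.3f}")  # ~0.000, as it should be
```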
Now Willett and his colleagues at Harvard would challenge this by saying that somewhere along the line, as we go from a year out to decades, this health benefit must turn into a health detriment. How else can they explain why their associations are the opposite of what the experimental trials conclude? And if they don't explain this away somehow, they might have to acknowledge that they've been doing pseudoscience for their entire careers. And maybe they're right, but I certainly wouldn't bet my life on it.
Ultimately we're left with a decision about what we're going to believe: the observations, or the experiments designed to test those observations. Good scientists will always tell you to believe the experiments. That's why they do them.
...
Conventional methods assume all errors are random and that any modeling assumptions (such as homogeneity) are correct. With these assumptions, all uncertainty about the impact of errors on estimates is subsumed within conventional standard deviations for the estimates (standard errors), such as those given in earlier chapters (which assume no measurement error), and any discrepancy between an observed association and the target effect may be attributed to chance alone. When the assumptions are incorrect, however, the logical foundation for conventional statistical methods is absent, and those methods may yield highly misleading inferences.
...
Systematic errors can be and often are larger than random errors, and failure to appreciate their impact is potentially disastrous.
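A small numeric sketch of the distinction in the passage above (the numbers are assumed purely for illustration): random error shrinks as the sample grows, but a fixed systematic error does not, so a very large study can be precisely wrong.

```python
# Random vs. systematic error: the standard error collapses with n,
# while the estimate converges to the bias, not to the true effect.
import numpy as np

rng = np.random.default_rng(4)
true_effect, bias = 0.0, 0.3   # assumed values, purely for illustration

for n in (100, 10_000, 1_000_000):
    sample = rng.normal(loc=true_effect + bias, scale=1.0, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    print(f"n={n:>9,}: estimate = {sample.mean():+.3f}, SE = {se:.4f}")
```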
Via Science, Pseudoscience, Nutritional Epidemiology, and Meat.