American Society For Nutrition

Beyond the Abstract

Excellence in Nutrition Research and Practice
Posted on 07/23/2012 at 03:35:36 PM by Student Blogger
By Larry Istrail

The randomized clinical trial is our best tool for determining the efficacy of one intervention compared with another. Whether it is a drug trial or a diet trial, the devil is in the details. Reading the abstract does not give you the whole story, and it can be profoundly misleading. As an example, let us assess a popular randomized clinical trial(1) testing the efficacy of different diets, performed by some of the biggest names in diet and obesity research, including Dr. Frank Sacks and Dr. George Bray. The conclusion of their study is simple: all diets are created equal.
This is the headline that makes it into the newspapers and becomes accepted as fact. Why question the results? Dr. Bray and Dr. Sacks are very well respected worldwide for their work, and The New England Journal of Medicine is a prestigious journal. Surely no sub-par work would slip through the cracks.

When reading a dietary clinical trial and assessing its internal validity, there are three major points to keep in mind:
•    How large was the difference between assigned exposures?

•    Is there any evidence that the study subjects followed the diet or intervention they were randomized to?

•    When assessing a study testing varying carbohydrate content, is there any difference in triglyceride levels between the groups?

Difference between assigned exposures

The nutrient goals for the four diet groups were:
•    Low fat, average protein - 20% fat, 15% protein, and 65% carbohydrates

•    Low fat, high protein - 20% fat, 25% protein, and 55% carbohydrates

•    High fat, average protein - 40% fat, 15% protein, and 45% carbohydrates

•    High fat, high protein - 40% fat, 25% protein, and 35% carbohydrates
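To put these targets in perspective, here is a quick back-of-the-envelope sketch of what the carbohydrate assignments mean in grams. The 1,600 kcal daily intake is an assumed figure for illustration, and the 4 kcal/g conversion is the standard Atwater factor for carbohydrate:

```python
# Carbohydrate target in grams/day for each diet arm, assuming a
# hypothetical 1,600 kcal/day intake and the standard Atwater
# factor of 4 kcal per gram of carbohydrate.
KCAL_PER_DAY = 1600
KCAL_PER_G_CARB = 4

diet_arms = [
    ("Low fat, average protein",  0.65),
    ("Low fat, high protein",     0.55),
    ("High fat, average protein", 0.45),
    ("High fat, high protein",    0.35),
]

for name, carb_fraction in diet_arms:
    grams = KCAL_PER_DAY * carb_fraction / KCAL_PER_G_CARB
    print(f"{name}: {grams:.0f} g carbohydrate/day")

# Adjacent arms differ by only ~40 g of carbohydrate per day,
# roughly one cup of cooked rice.
```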
Right away, this study design is somewhat worrisome, since the carbohydrate contents of the diets are relatively similar. Detecting the effect of a 10% difference in macronutrient content is likely unrealistic, since study subjects tend to cheat on their diets and dietary assessments are subpar. This concern is further compounded by a seemingly innocuous sentence buried in the methods:

"Blinding was maintained by the use of similar foods for each diet."
At first glance this sounds great. Blinding adds rigor to a study design by keeping the subjects in the dark as to which intervention they are receiving. In a diet study, however, blinding is very difficult. The only way it can be done properly is to chemically modify foods to contain different nutrients while maintaining the look, smell, and taste of the original food. This was done beautifully in the Minnesota Coronary Survey(2), for example, which tested the potential benefits of a low saturated fat diet. But blinding a study that compares different percentages of carbohydrate, protein, and fat is essentially impossible without making each intervention very similar.

Evidence that subjects followed their assigned diets

This is the giant elephant in the room of every dietary clinical trial. It is an enormous problem that nobody really talks about, and it was the major inspiration for developing PhotoCalorie(3). The "gold standard" of dietary assessment in the year 2012 is pen and paper. We have machines that can literally look through your skin and see your organs and bones in vivid detail. We can tell who your parents are from a single drop of your spit. Yet when we study obesity, arguably the most important disease plaguing the world today, we use technology from 1812.

In this particular study, even that gold standard was not used. Instead, the investigators administered 24-hour recalls twice during the 2-year study, in 50% of the patients. In other words, out of 730 days and 811 study subjects, the primary intervention was measured on only 6 days in 405 people.
SIX DAYS! That is 0.8% of the days. Assuming three meals a day, out of the 2,190 meals each subject ate, only 18 were reported! Compounding this unfortunate number, the 24-hour recall itself is far from perfect: people tend to forget what they ate and to over-report foods they deem healthier.
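For the skeptical reader, here is the arithmetic spelled out. This is a minimal sketch; every input is taken straight from the numbers quoted above:

```python
# Reproducing the sampling arithmetic quoted above.
study_days = 730                 # length of the 2-year trial in days
subjects = 811
recall_days = 6                  # days covered by the 24-hour recalls
assessed = int(subjects * 0.5)   # recalls collected in 50% of patients
meals_per_day = 3

print(f"Share of days assessed: {recall_days / study_days:.1%}")    # 0.8%
print(f"Meals reported: {recall_days * meals_per_day} of "
      f"{study_days * meals_per_day}")                               # 18 of 2190
print(f"Subjects assessed: {assessed} of {subjects}")                # 405 of 811
```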

Given all these limitations, here is what the study subjects reported eating. The left three columns correspond to the low fat, average protein group and the right three to the low fat, high protein group, at the 6-month and 2-year follow-ups:

[Figure 1: Reported energy and macronutrient intake for the low fat, average protein and low fat, high protein groups at 6 months and 2 years]

As you can see, the reported macronutrient compositions are virtually identical. At 2 years the low fat, average protein group was eating 1,531 calories: 53% from carbohydrate, 19.6% from protein, and 26.5% from fat. The low fat, high protein group was eating 1,560 calories: 51.3% from carbohydrate, 20.8% from protein, and 28% from fat.
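Converting those percentages into grams per day makes the convergence even starker. Here is a quick sketch using the standard Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat); the calories and percentages are the reported values above:

```python
# Reported 2-year intakes converted to grams/day with Atwater factors.
ATWATER = {"carbs": 4, "protein": 4, "fat": 9}   # kcal per gram

groups = {
    "Low fat, average protein": {"kcal": 1531, "carbs": 0.530, "protein": 0.196, "fat": 0.265},
    "Low fat, high protein":    {"kcal": 1560, "carbs": 0.513, "protein": 0.208, "fat": 0.280},
}

for name, g in groups.items():
    grams = {m: g["kcal"] * g[m] / kcal_per_g for m, kcal_per_g in ATWATER.items()}
    print(name, {m: round(v) for m, v in grams.items()})

# The two "different" diets land within about 10 g/day of each other
# on every macronutrient.
```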

The authors conclude in the discussion that the "principal finding is that the diets were equally successful in promoting clinically meaningful weight loss and the maintenance of weight loss over the course of 2 years." When you combine all the clues - the blinded study design, the nearly identical reported macronutrient intakes, the identical triglyceride and HDL levels - you come to a dramatic conclusion that explains why all the groups were equally successful: they were all eating the exact same diet!

References
1.    Sacks FM, Bray GA, Carey VJ, et al. Comparison of weight-loss diets with different compositions of fat, protein, and carbohydrates. N Engl J Med. 2009;360:859-873. http://www.ncbi.nlm.nih.gov/pubmed?term=sacks%20comparison%20of%20weight-loss
2.    Frantz ID Jr, Dawson EA, Ashman PL, et al. Test of effect of lipid lowering by diet on cardiovascular risk: the Minnesota Coronary Survey. Arteriosclerosis. 1989;9:129-135. http://atvb.ahajournals.org/content/9/1/129.abstract
3.    PhotoCalorie. www.PhotoCalorie.com


3 Comments
Posted Jul 24, 2012 9:07 AM by Sam

Larry- thanks for the post. Very important topic. I'm not sure I agree with you that a difference of 10% in carbohydrates is unrealistic...in fact, I feel that these three diets are truly representative of variability in the population. They could have assigned a diet with 5% fat, 5% protein, and 90% CHO, but this would be unrealistic. I think the major issue, which you highlighted in your critique, is that the diets were not actually enforced and therefore the conclusions are unjustified. On another note, I think that your original purpose in choosing this topic was to provide an example of why it is inappropriate to take conclusions for granted without fully assessing the strength of the study. I would have liked you to take it one step further and discuss the ethical/social ramifications of this topic. For example, what level of responsibility does a journal such as NEJM have in filtering poorly designed (or methodologically-flawed) studies? What are the consequences of publishing such studies on the public's view of nutrition research in general and on the credibility of other researchers in the field?


Posted Jul 24, 2012 6:01 PM by Larry

Thanks for the comment. The reason 10% is too low, in my view, is based on the clinical trial data to date. Just about every time two exposures with a small difference in carbohydrate content are tested (e.g., high glycemic vs. low glycemic diets), the result is null. Whether that is because a low glycemic diet is no more effective than a high glycemic one, or because both groups fail to follow their assigned diets, we can't know for sure. The dietary assessment "technology" right now is kind of pitiful, but in my opinion the reason there is no difference is the former.

In contrast, when a low carb diet is tested against the typical government-recommended low fat, high carb diet, the low carb group tends to lose more weight, raise its HDL, and lower its triglycerides, despite not being told how much to eat. This has been true in at least 14 trials. Whether that is because of the satiating nature of high protein, high fat diets, or because there is less net insulin release in the absence of carbohydrates, it is pretty clear they lose more weight. I profile all these studies, both supportive and null, here: http://www.awlr.org/carb-restricted-diets.html


Here's a study that seems to support your thesis here, Larry:

http://www.ncbi.nlm.nih.gov/pubmed/22215165