Posted on 05/14/2014 at 04:28:38 PM by Suzanne Price
By Colby Vorland
Southwest Airlines is consistently rated as serving good food on their flights, yet they don't serve food at all. Can we trust diet data if people don't know whether they even ate? This amusing anecdote was offered by Dr. David Allison during a session at ASN's Scientific Sessions in San Diego in April: “Not Everything That Counts Can be Counted and Not Everything That Can Be Counted Counts: How Should We Collect Dietary Data for Research?” chaired by Drs. Regan Bailey and Claire Zizza. The session was organized by ASN's Nutritional Epidemiology, Aging/Chronic Disease, and Community/Public Health RISs. The panel took a critical perspective but also emphasized the value of self-reported diet intake data.
Dr. Allison was the first presenter, taking a hard position on self-reported energy intake in nutritional research: it just isn't good enough. Not only that, it often flat-out misleads obesity research. Allison highlighted a recent paper by Archer and colleagues that looked at the energy intakes reported by NHANES respondents from 1971-2012, finding that 67.3% of women's and 58.7% of men's reports were not physiologically plausible - i.e., the reported number of calories was “incompatible with life.” Correlations with the IOM's gold-standard equation for estimating total energy expenditure were 0.163 for women and 0.225 for men - effectively no meaningful relationship. This “doesn't seem like science anymore,” Allison stated. The problem has been known for a long time: in 1991, Goldberg and others examined 37 studies across 10 countries and found that in over 65% of them the mean ratio of reported energy intake to basal metabolic rate fell below the cutoff for physiological plausibility. Forrestal also published a 2010 review of 28 papers looking specifically at children and adolescents, finding that about half misreport energy intake.
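The Goldberg-style check mentioned above rests on a simple idea: a person cannot sustain an intake far below their basal metabolic rate while maintaining weight. A minimal sketch of that screen is below; the 1.35 cutoff and the example kcal values are illustrative assumptions, not the exact parameters of Goldberg et al. (1991).

```python
# Sketch of a Goldberg-style plausibility screen for reported energy intake.
# Cutoff and example values are illustrative assumptions.

def ei_bmr_ratio(reported_kcal: float, bmr_kcal: float) -> float:
    """Ratio of reported energy intake to basal metabolic rate."""
    return reported_kcal / bmr_kcal

def is_plausible(reported_kcal: float, bmr_kcal: float, cutoff: float = 1.35) -> bool:
    """Flag a report as plausible if EI:BMR meets a minimum cutoff.

    Sustained intakes well below BMR are incompatible with weight
    maintenance, which is the logic behind Goldberg-style cutoffs.
    """
    return ei_bmr_ratio(reported_kcal, bmr_kcal) >= cutoff

# Example: a report of 1200 kcal/day against an estimated BMR of 1500 kcal/day
print(ei_bmr_ratio(1200, 1500))   # 0.8
print(is_plausible(1200, 1500))   # False: implausibly low for weight maintenance
print(is_plausible(2400, 1500))   # True
```

Applied study-wide, a screen like this is how one arrives at statements such as "67.3% of women's reports were not physiologically plausible."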
It is time to abandon self-reported energy intakes in favor of less misleading paths in obesity research, Allison said. Self-report is not worthy of scientific use because its measurement errors are neither random nor modest, estimates are often not even in the correct direction, and under plausible circumstances those errors can lead to the detection of false effects. He told a story of how self-report intake data originally suggested that the overweight consumed less energy than they expended, but more rigorous methods proved exactly the opposite to be true (here is a 1990 review by Schoeller). Allison acknowledged that there is currently no economic or social incentive to make a complete transition to doubly labeled water, whose cost has been flat since the 1980s. The transition will be painful initially, but clearly we need to make it.
Dr. Amy Subar argued that energy intake is not the only important aspect of diet data and that collection methods are improving, so we shouldn't throw the baby out with the bathwater. Even if total energy intake isn't accurate, we can still track food patterns, diet quality, nutrient intakes, and social and physical environments. Subar emphasized the utility of self-reported data: it can yield more comprehensive information with much less investigator burden than biomarkers or direct observation, though error remains an issue. Within-person variation and memory are two potential sources of error, but adjustments are possible. New technologies, such as keeping food records with mobile phones or wearable sensors that reduce reactivity to monitoring and participant burden, are being developed to improve self-report data. In addition, Subar has been involved in developing the self-administered 24-hour recall - ASA24 - to gather much more data from participants without investigator burden; its accuracy has been validated against interviewer-administered recalls. Dr. Subar noted that food frequency questionnaires carry more bias than short-term methods, but combining multiple recalls with food frequency questionnaires could reduce it.
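The point about within-person variation being adjustable can be made concrete with a textbook variance-components calculation (my illustration, not from the session): under a simple classical error model, the mean of k repeated recalls has within-person error variance shrunk by a factor of k, so the attenuation factor improves toward 1 as recalls are added. The variance values below are made-up numbers chosen for illustration.

```python
# Illustrative calculation: how averaging k repeated 24-hour recalls shrinks
# within-person error under a simple classical measurement-error model.

def attenuation_factor(between_var: float, within_var: float, k: int) -> float:
    """lambda_k = between / (between + within / k).

    'between_var' is the variance of usual intake across people; 'within_var'
    is the day-to-day variance of a single recall around usual intake. The
    mean of k recalls has error variance within_var / k, so lambda_k rises
    toward 1 as k grows.
    """
    return between_var / (between_var + within_var / k)

between, within = 1.0, 2.0  # assumed variance components (illustrative)
for k in (1, 3, 6):
    print(k, round(attenuation_factor(between, within, k), 3))
# 1 0.333
# 3 0.6
# 6 0.75
```

This is the arithmetic behind the observation later in the session that the attenuation factor for protein improves with 3 or more recalls.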
Dr. Elizabeth Yetley expanded on how self-reported diet data is currently relied on in national policy. For example, fortification strategies would not be possible without such data: many considerations go into fortification, and modeling specific foods and evaluating program outcomes are important. The IOM uses diet data to track added sugars and salt disappearance. Nutrient safety can also be tracked: data from the Total Diet Study in 1981 quickly identified unexpected iodine sources in the food supply that were producing extremely high intakes. Diet data is also used in food additive/GRAS reviews, to distinguish added from naturally occurring amounts. Yetley stated that there would be a significant adverse effect on policy if intake data weren't available. However, intake data can fail to accurately predict nutrient status, as Pfeiffer et al. (2012) have demonstrated. In 1988, Lewis and colleagues showed that cola intake could be underestimated by about 50%, though Yetley noted that surveys have improved since then. Iron intake from fortified cereals is also virtually always underestimated when calculated from the amount listed on the label, so self-reported intake based on label data is not accurate. Infrequently consumed foods such as alcoholic beverages also cause problems in nutritional epidemiology. Still, Dr. Yetley reiterated that intake data is crucial for many uses, and we can work to improve its precision while interpreting it with caution.
Finally, Dr. Laurence Freedman discussed studies underway to improve self-reported intake measurement. Freedman began by emphasizing that we can validate self-reports for some nutrients by comparing them to recovery biomarkers - doubly labeled water for energy expenditure, urinary nitrogen for protein, and urinary potassium and sodium for those minerals - because these markers recover intake with errors unrelated to true intake. For many nutrients, however, we have no accurate recovery biomarker. Freedman described a project he is involved in, the Validation Studies Pooling Project, which aims to better understand the measurement errors of food frequency questionnaires and 24-hour recalls using recovery biomarkers. For example, in the AMPM study, energy intake was underreported on 24-hour recalls by about 10%, but the degree of underreporting differs by nutrient. Measurement error affects diet-health analyses by attenuating relative risks and statistical power. This attenuation is expressed as an “attenuation factor” - the ratio of the observed association to the true one, with values below 1 indicating attenuation toward the null. Preliminary data show that attenuation factors are more extreme for energy intake than for protein, and less extreme for protein density than for either. Adjusting for energy intake alleviates attenuation somewhat but does not solve it, and increasing sample size does not solve it either because of unknown confounding. Freedman went into more detail about the ASA24 (multiple 24-hour recalls), emphasizing its high response and low attrition. With 3 or more recalls, the attenuation factor for protein improves; relative risks increase with additional recalls compared to a single food frequency questionnaire, and combining both methods yields even better data, according to Carroll and colleagues (2012). Combining biomarkers with self-report data improves statistical power because measurement error is reduced, as Freedman and others (2011) have shown. Dr. Freedman reiterated that self-report data is extremely useful for surveillance, education, and dietary guidance, apart from the difficulties of using it to measure energy intake.
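The attenuation Freedman described can be demonstrated in a few lines of simulation (my sketch, not the Pooling Project's method): when an exposure is observed with classical additive error, the estimated regression slope shrinks by the factor lambda = var(true) / (var(true) + var(noise)). All variances and sample sizes below are arbitrary choices for illustration.

```python
# Minimal simulation of attenuation from classical measurement error.
import random

random.seed(0)
n = 50_000
true_slope = 1.0
var_x, var_e = 1.0, 1.0  # equal variances -> lambda = 0.5

x = [random.gauss(0, var_x ** 0.5) for _ in range(n)]       # true exposure
y = [true_slope * xi + random.gauss(0, 0.5) for xi in x]    # outcome
w = [xi + random.gauss(0, var_e ** 0.5) for xi in x]        # error-prone report

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

print(slope(x, y))  # close to 1.0: slope using the true exposure
print(slope(w, y))  # close to 0.5: attenuated toward the null by lambda
```

Note that the attenuated estimate stays near 0.5 no matter how large n grows, which is the point made above: more sample size does not fix biased measurement.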
It is clear that self-reported diet data has many important uses, but caution must be exercised when interpreting it. Hopefully the improvements currently being validated will be adopted quickly; and for some measures, such as energy intake, it seems necessary to abandon current methods, because we know they are unacceptable.