In Defense of Food Page 7
What’s going on here? We don’t know. It could be the vagaries of human digestion. Maybe the fiber (or some other component) in a carrot protects the antioxidant molecule from destruction by stomach acids early in the digestive process. Or it could be we isolated the wrong antioxidant. Beta-carotene is just one of a whole slew of carotenes found in common vegetables; maybe we focused on the wrong one. Or maybe beta-carotene works as an antioxidant only in concert with some other plant chemical or process; under other circumstances it may behave as a pro-oxidant.
Indeed, to look at the chemical composition of any common food plant is to realize just how much complexity lurks within it. Here’s a list of just the antioxidants that have been identified in a leaf of garden-variety thyme:
alanine, anethole essential oil, apigenin, ascorbic acid, beta-carotene, caffeic acid, camphene, carvacrol, chlorogenic acid, chrysoeriol, eriodictyol, eugenol, ferulic acid, 4-terpineol, gallic acid, gamma-terpinene, isochlorogenic acid, isoeugenol, isothymonin, kaempferol, labiatic acid, lauric acid, linalyl acetate, luteolin, methionine, myrcene, myristic acid, naringenin, rosmarinic acid, selenium, tannin, thymol, tryptophan, ursolic acid, vanillic acid.
This is what you ingest when you eat food flavored with thyme. Some of these chemicals are broken down by your digestion, but others go on to do various as-yet-undetermined things to your body: turning some gene’s expression on or off, perhaps, or intercepting a free radical before it disturbs a strand of DNA deep in some cell. It would be great to know how this all works, but in the meantime we can enjoy thyme in the knowledge that it probably doesn’t do any harm (since people have been eating it forever) and that it might actually do some good (since people have been eating it forever), and even if it does nothing at all, we like the way it tastes.
It’s important also to remind ourselves that what reductive science can manage to perceive well enough to isolate and study is subject to almost continual change, and that we have a tendency to assume that what we can see is the important thing to look at. The vast attention paid to cholesterol since the 1950s is largely the result of the fact that for a long time cholesterol was the only factor linked to heart disease that we had the tools to measure. (This is sometimes called parking-lot science, after the legendary fellow who loses his keys in a parking lot and goes looking for them under the streetlight, not because that’s where he lost them but because that’s where it’s easiest to see.) When we learned how to measure different types of cholesterol, and then triglycerides and C-reactive protein, those became the important components to study. There will no doubt be other factors as yet unidentified. It’s an old story: When Prout and Liebig nailed down the macronutrients, scientists figured that they now understood the nature of food and what the body needed from it. Then when the vitamins were isolated a few decades later, scientists thought, okay, now we really understand food and what the body needs for its health; and today it’s the polyphenols and carotenoids that seem to have completed the picture. But who knows what else is going on deep in the soul of a carrot?
The good news is that, to the carrot eater, it doesn’t matter. That’s the great thing about eating foods as compared with nutrients: You don’t need to fathom a carrot’s complexity in order to reap its benefits.
The mystery of the antioxidants points up the danger in taking a nutrient out of the context of food; scientists make a second, related error when they attempt to study the food out of the context of the diet. We eat foods in combinations and in orders that can affect how they’re metabolized. The carbohydrates in a bagel will be absorbed more slowly if the bagel is spread with peanut butter; the fiber, fat, and protein in the peanut butter cushion the insulin response, thereby blunting the impact of the carbohydrates. (This is why eating dessert at the end of the meal rather than the beginning is probably a good idea.) Drink coffee with your steak, and your body won’t be able to fully absorb the iron in the meat. The olive oil with which I eat tomatoes makes the lycopene they contain more available to my body. Some of those compounds in the sprig of thyme may affect my digestion of the dish I add it to, helping to break down one compound or stimulate production of an enzyme needed to detoxify another. We have barely begun to understand the relationships among foods in a cuisine.
But we do understand some of the simplest relationships among foods, like the zero-sum relationship: If you eat a lot of one thing, you’re probably not eating a lot of something else. This fact alone may have helped lead the diet-heart researchers astray. Like most of us, they assumed that a bad outcome like heart disease must have a bad cause, like saturated fat or cholesterol, so they focused their investigative energies on how these bad nutrients might cause disease rather than on how the absence of something else, like plant foods or fish, might figure in the etiology of the disease. Nutrition science has usually put more of its energies into the idea that the problems it studies are the result of too much of a bad thing instead of too little of a good thing. Is this good science or nutritionist prejudice? The epidemiologist John Powles has suggested this predilection is little more than a Puritan bias: Bad things happen to people who eat bad things.
But what people don’t eat may matter as much as what they do. This fact could explain why populations that eat diets containing lots of animal food generally have higher rates of coronary heart disease and cancer than those that don’t. But nutritionism encouraged researchers to look beyond the possibly culpable food itself, meat, to the culpable nutrient in the meat, which scientists have long assumed to be the saturated fat. So they are baffled indeed when large dietary trials like the Women’s Health Initiative and the Nurses’ Health Study fail to find evidence that reducing fat intake significantly reduces the incidence of heart disease or cancer.
Of course, thanks to the low-fat-diet fad (inspired by the same reductionist hypothesis about fat), it is entirely possible to slash your intake of saturated fat without greatly reducing your consumption of animal protein: Just drink the low-fat milk, buy the low-fat cheese, and order the chicken breast or the turkey bacon instead of the burger. So did the big dietary trials exonerate meat or just fat? Unfortunately, the focus on nutrients didn’t tell us much about foods. Perhaps the culprit nutrient in meat and dairy is the animal protein itself, as some researchers hypothesize. (The Cornell nutritionist T. Colin Campbell argues as much in his recent book, The China Study.) Others think it could be the particular kind of iron in red meat (called heme iron) or the nitrosamines produced when meat is cooked. Perhaps it is the steroid growth hormones typically present in the milk and meat; these hormones (which occur naturally in meat and milk but are often augmented in industrial production) are known to promote certain kinds of cancer.
Or, as I mentioned, the problem with a meat-heavy diet might not even be the meat itself but the plants that all that meat has pushed off the plate. We just don’t know. But eaters worried about their health needn’t wait for science to settle this question before deciding that it might be wise to eat more plants and less meat. This of course is precisely what the McGovern committee was trying to tell us.
The zero-sum fallacy of nutrition science poses another obstacle to nailing down the effect of a single nutrient. As Gary Taubes points out, it’s difficult to design a dietary trial of something like saturated fat because as soon as you remove it from the trial diet, either you have dramatically reduced the calories in that diet or you have replaced the saturated fat with something else: other fats (but which ones?), or carbohydrates (but what kind?), or protein. Whatever you do, you’ve introduced a second variable into the experiment, so you will not be able to attribute any observed effect strictly to the absence of saturated fat. It could just as easily be due to the reduction in calories or the addition of carbohydrates or polyunsaturated fats. For every diet hypothesis you test, you can construct an alternative hypothesis based on the presence or absence of the substitute nutrient. It gets messy.
And then there is the placebo effect, which has always bedeviled nutrition research. About a third of Americans are what researchers call responders-people who will respond to a treatment or intervention regardless of whether they’ve actually received it. When testing a drug you can correct for this by using a placebo in your trial, but how do you correct for the placebo effect in the case of a dietary trial? You can’t: Low-fat foods seldom taste like the real thing, and no person is ever going to confuse a meat entrée for a vegetarian substitute.
Marion Nestle also cautions against taking the diet out of the context of the lifestyle, a particular hazard when comparing the diets of different populations. The Mediterranean diet is widely believed to be one of the most healthful traditional diets, yet much of what we know about it is based on studies of people living in the 1950s on the island of Crete-people who in many respects led lives very different from our own. Yes, they ate lots of olive oil and more fish than meat. But they also did more physical labor. As followers of the Greek Orthodox Church, they fasted frequently. They ate lots of wild greens-weeds. And, perhaps most significant, they ate far fewer total calories than we do. Similarly, much of what we know about the health benefits of a vegetarian diet is based on studies of Seventh-day Adventists, who muddy the nutritional picture by abstaining from alcohol and tobacco as well as meat. These extraneous but unavoidable factors are called, aptly, confounders.
One last example: People who take supplements are healthier than the population at large, yet their health probably has nothing whatsoever to do with the supplements they take-most of which recent studies have suggested are worthless. Supplement takers tend to be better educated, more affluent people who, almost by definition, take a greater than usual interest in personal health, confounders that probably account for their superior health.
But if confounding factors of lifestyle bedevil epidemiological comparisons of different populations, the supposedly more rigorous studies of large American populations suffer from their own arguably even more disabling flaws. In ascending order of supposed reliability, nutrition researchers have three main methods for studying the impact of diet on health: the case-control study, the cohort study, and the intervention trial. All three are seriously flawed in different ways.
In the case-control study, researchers attempt to determine the diet of a subject who has been diagnosed with a chronic disease in order to uncover its cause. One problem is that when people get sick they may change the way they eat, so the diet they report may not be the diet responsible for their illness. Another problem is that these patients will typically report eating large amounts of whatever the evil nutrient of the moment is. These people read the newspaper too; it’s only natural to search for the causes of one’s misfortune and, perhaps, to link one’s illness to one’s behavior. One of the more pernicious aspects of nutritionism is that it encourages us to blame our health problems on lifestyle choices, implying that the individual bears ultimate responsibility for whatever illnesses befall him. It’s worth keeping in mind that a far more powerful predictor of heart disease than either diet or exercise is social class.
Long-term observational studies of cohort groups such as the Nurses’ Health Study represent a big step up in reliability from the case-control study. For one thing, the studies are prospective rather than retrospective: They begin tracking subjects before they become ill. The Nurses’ Study, which has collected data on the eating habits and health outcomes of more than one hundred thousand women over several decades (at a cost of more than one hundred million dollars), is considered the best study of its kind, yet it too has limitations. One is its reliance on food-frequency questionnaires (about which more in a moment). Another is the population of nurses it has chosen to study. Critics (notably Colin Campbell) point out that the sample is relatively uniform and is even more carnivorous than the U.S. population as a whole. Pretty much everyone in the group eats a Western diet. This means that when researchers divide the subject population into groups (typically fifths) to study the impact of, say, a low-fat diet, the quintile eating the lowest-fat diet is not eating a diet all that low in fat, nor one dramatically different from that of the quintile consuming the highest-fat diet. “Virtually this entire cohort of nurses is consuming a high-risk diet,” according to Campbell. That might explain why the Nurses’ Study has failed to detect significant benefits for many of the dietary interventions it’s looked at. In a subject population that is eating a fairly standard Western diet, as this one is, you’re never going to capture the effects, good or bad, of more radically different ways of eating. (In his book, Campbell reports Walter Willett’s personal response to this criticism: “You may be right, Colin, but people don’t want to go there.”)
The so-called gold standard in nutrition research is the large-scale intervention study. In these studies, of which the Women’s Health Initiative is the biggest and best known, a large population is divided into two groups. The intervention group changes its diet in some prescribed way while the control group (one hopes) does not. The two groups are then tracked over many years to learn whether the intervention affects relative rates of chronic disease. In the case of the Women’s Health Initiative study of dietary fat, a $415 million undertaking sponsored by the National Institutes of Health, the eating habits and health outcomes of nearly forty-nine thousand women (aged fifty to seventy-nine) were tracked for eight years to assess the impact of a low-fat diet on a woman’s risk of breast and colorectal cancer and cardiovascular disease. Forty percent of the women were told to reduce their consumption of fat to 20 percent of total calories. When the results were announced in 2006, they made front-page news (The New York Times headline said LOW-FAT DIET DOES NOT CUT HEALTH RISKS, STUDY FINDS) and the cloud of nutritional confusion beneath which Americans endeavor to eat darkened further.