Tuesday, October 28, 2008

Book Review: The Jungle Effect, by Daphne Miller, M.D.

Our local library recently began advertising a talk by Dr. Daphne Miller, author of The Jungle Effect. The essential concept of the book is to examine epidemiological "cold spots" for various modern diseases, such as diabetes, heart disease, depression, and cancer. These cold spots are areas notably low in the incidence of said diseases. Dr. Miller visited each area and studied the local cuisines, with the idea that food is a driving force behind development of these diseases which often show the highest incidence amongst those eating a modern Western diet.

I was quite excited when I first read about The Jungle Effect. One reviewer went so far as to dub Dr. Miller the modern equivalent of Weston Price. I'm a big fan of Price's work as a shining example of the application of the scientific method and what can be discovered with limited resources and a determined rational mind. I also believe cultural wisdom that has stood the test of time deserves to be weighed along with more "scientific" evidence, and of course am in favor of whole, nutrient-dense foods (who isn't?). As I read The Jungle Effect, however, my enthusiasm waned and frustration set in. While I do believe that the foods put forth by Dr. Miller would constitute a much healthier diet than that eaten by most in modernized society, the scientific rationale falls well short, and I believe it leads to confusion and complication that are both unnecessary and unjustified.

Let's start with the good. First and foremost, Dr. Miller is an excellent writer, and clearly passionate about helping people improve their health. The Jungle Effect covers five different traditional cuisines associated with disease cold spots:
  • Copper Canyon, Mexico (the Tarahumara): Diabetes
  • Crete: Heart Disease
  • Iceland: Depression
  • Cameroon: Bowel Trouble
  • Okinawa: Breast and Prostate Cancer
Dr. Miller's descriptions of these cultures and their cuisines are vivid and fascinating. Many of the recipes sound very tasty, and despite their relatively high carbohydrate content (often from honey, sugar, or maple syrup) are probably considerably healthier than the average Western diet. Indeed, the patients for whom she "prescribed" these diets apparently saw positive results, e.g. in reducing blood sugar, blood pressure, weight, depression symptoms, etc. So there is clearly some upside here, at least compared with the diet and health of the average American.

But in the end, The Jungle Effect suffers from some fundamental flaws. My criticisms here are meant to be constructive. As noted in the last post, sharing information is important if we hope to understand and sort out differences in our beliefs. Dr. Miller's approach is "traditional" not only in searching out elements of indigenous cuisines, but also in adhering to the nutritional orthodoxy. She obviously received the usual medical training, and consulted a nutritionist while writing the book. But many of you reading this realize that widely-held beliefs about the health effects of various foods are constructed on evidence that is weak at best, with contradictory evidence often being ignored. This results in some clear cognitive dissonance. For instance, Dr. Miller briefly discusses the Inuit (Eskimos) and the high level of health maintained on their traditional diet consisting almost entirely of high-fat meat. Yet much of the rest of the book warns against the dangers of red meat and too much fat, particularly saturated fat. I suspect there's also some confirmation bias at work in her selection of which cultures to study. One wonders why she didn't visit the Inuit, Masai, or for that matter the Namgis tribe featured in the excellent documentary My Big Fat Diet. The latter case is particularly interesting, as the Namgis who returned to a traditional high-fat diet experienced rapid and major health improvements, considerably more dramatic than those described for Dr. Miller's patients. Evidence modifies beliefs, but our beliefs should not cause us to filter the evidence.

Dr. Miller does briefly attempt to explain away the apparent Inuit "paradox" by noting that the wild animals traditionally eaten themselves eat nutrient-dense food and pass that nutrition along to their predators. But then the best one could say is that it isn't meat per se that is unhealthy, just meat from animals fed an unnatural diet (Dr. Miller generally recommends very low meat intake, particularly red meat, usually without distinguishing the source). Give her credit for singing the praises of nutrient-dense organ meats. She also gets some points for not invoking genetics as the root of the apparent paradox. If you ever feel the urge to do this, then you should undertake the following procedure:
  1. Find a friend with some heavy boots.
  2. Have them kick you in the backside.
  3. After each kick, say the following: "I will not blame the observed failure of my hypothesis on unobserved genetic factors."
  4. Repeat steps 2-3 until rationality sets in.
The Jungle Effect would have benefited from deeper critical thinking, rather than absolute reliance on consensus (see Galileo quote at the top of the page). One aspect of this would be a broader sample of indigenous cuisines, including those high in animal products. It would also have been nice to see some integrative analysis of the different diets. Weston Price studied a wide variety of indigenous cultures, and was able to distill some common nutritional factors resulting in robust health, even going so far as to discover a new fat-soluble vitamin (which he called Activator X, now thought to be Vitamin K2). I would like to know, for example, how the health of the various cultures compared across the spectrum of diseases. Cameroon may have a low incidence of colon cancer, but how do its rates of diabetes, heart disease, etc. compare to those of the other cultures described in the book? For that matter, one would like to compare the traditional Inuit on that basis as well, to include the full spectrum of dietary macronutrient compositions.

Indeed, Dr. Miller notes that the Tarahumara, while notably free of diabetes, are not particularly healthy otherwise. Following Weston Price's cue, I looked for information on Tarahumara dental health, the idea being that dental health is a reflection of overall health, certainly of the status of vitamins involved in immune support and mineral metabolism. I'd be interested in getting Dr. Miller's view, since she was on the spot and got to observe the Tarahumara. I did find this article, which implies that the traditional Tarahumara suffer from significant dental disease, though it's pretty thin on details. We might also compare them visually with a hunter-gatherer. Here's a photo of an indigenous Tarahumaran, who according to Dr. Miller subsists largely on corn, beans, squash, and relatively little meat. Compare with a Kalapalo tribesman, who eats a lot of fish (they don't hunt animals) and jungle fruits and vegetables. Draw your own conclusions. Personally, I wish I had some Kalapalo biceps.

The nutritional context could also have been expanded in time. Modern indigenous diets do not necessarily represent the evolutionary diet of humans. Those sampled by Dr. Miller all relied heavily on agriculture, yet the foods available through agriculture have been part of the human diet for a very short time, evolutionarily speaking. The paleo-anthropological evidence certainly indicates that the pre-agricultural diet often relied heavily on meat from large mammals, and there are some clear markers of health decline at the agricultural boundary (decreased skeletal stature and skull size, evidence of mineral deficiencies, tooth decay). These modern indigenous diets were likely shaped by influences other than evolution: extinction of large prey animals, geopolitical forces, a changing climate, etc. Over time, these cultures may have identified the healthiest combinations of whatever foods were available, but that doesn't mean the available foods are the healthiest as defined by human evolutionary heritage. So again, it's necessary to consider all available evidence, not just that which agrees with our preconceptions.

Let's discuss some of the key nutritional beliefs that underpin Dr. Miller's arguments. Despite being widely held, many of these beliefs rest on actual scientific evidence that is thin at best, and often contradictory. Gary Taubes' Good Calories, Bad Calories (GCBC) gives a broad critical examination of the evidence, so there's no need to go into depth here (I suspect Dr. Miller has not read GCBC - if I can catch up with her at the talk this Sunday, I'll offer her a copy). Start with saturated fat. It is, of course, generally thought that saturated fat is a causal factor in many diseases, from diabetes to cancer to heart disease. Dr. Miller cites Ancel Keys' work as evidence for this, but the problems with Keys' studies are well documented in GCBC and in many other places. The biggest one, of course, is that epidemiological research such as the Seven Countries Study can only show statistical associations, not causation, and is susceptible to confounding from the large number of uncontrolled variables. Other epidemiological studies (like Framingham) have shown the opposite of Keys' conclusions, and to my knowledge there are few (if any) controlled studies that illustrate any causal connection between saturated fat and disease. Stephan is starting a new series at Whole Health Blog on this topic, which I recommend.

Apart from experimental evidence, one would also like to have a plausible mechanism by which saturated fat causes disease. As I discussed here, despite a half-century of research, nobody has any idea how it is that saturated fat leads to heart disease. "Experts" continue to pound on the health evils of saturated fat, despite evidence that is weak at best and generally contradictory. Dr. Miller calls out saturated fat as a pro-oxidant food "known to cause oxidative stress". Again, we see uncritical acceptance of the consensus. Saturated fats are LESS susceptible to oxidation than unsaturated fats, particularly polyunsaturated fats from vegetable oils, flax, etc. From simple chemistry, one expects polyunsaturates to induce substantially more oxidative stress (credit Dr. Miller with recommending limited vegetable oil intake, though I don't agree with her recommendations for taking flax oil). And of course the human body preferentially manufactures saturated fat from excess sugars, likely an evolutionary response. Were saturated fat to actually increase oxidative stress, one would have to hypothesize some mechanism by which the body preferentially oxidizes it, or by which saturated fat induces some other biological response that leads to oxidative stress. How or why an organism would evolve such responses escapes me, and I'm not aware that any have been experimentally or even theoretically identified.

Let's compare the utter lack of hypotheses tying saturated fat to heart disease with an alternative: atherosclerosis is at least partially caused by excess sugar and polyunsaturated fat. Here's the rationale (discussed in several places on the web):
  • Fats are transported in the blood in lipoproteins.
  • Lipoproteins tend to embed themselves at places where arteries sustain damage, such as branches (veins, under considerably lower pressure, do not exhibit atherosclerosis).
  • Macrophages have specific receptors for LDL that has been damaged by oxidation or glycation, but no receptor for undamaged LDL. Oxidized/glycated LDL is consumed by the macrophages, which can lead to accumulation of "foam cells" forming the atherosclerotic plaque.
  • Consumption of polyunsaturated fat promotes oxidation of LDL. Lipoproteins consist of a large protein coat, interspersed with phospholipids (two fatty acids and a phosphate head group on a glycerol backbone). If the phospholipid contains a polyunsaturated fatty acid, it is more susceptible to oxidation.
  • Increased blood sugar, either via consumption of large amounts of refined carbohydrates or due to metabolic dysfunction (e.g. insulin resistance), increases the potential for glycation damage of LDL, where the sugar binds to the protein and alters its structure in a manner similar to oxidation.
  • So, PUFA + sugar = damaged LDL = inflammatory response = atherosclerosis.
Now, I'm not saying this is the answer, but it at least makes logical sense, follows currently accepted beliefs about chemistry and human biology, and I would guess explains the epidemiological evidence at least as well as the saturated fat hypothesis (which never was particularly clear to start with). It's certainly better than "we have no idea", as put forth in textbooks and by "experts". More research is needed to really nail things down, but I think we can see that the PUFA/sugar hypothesis, which has some basis in accepted science, should receive a greater weight than the saturated fat hypothesis, which has no basis whatsoever.

Another issue is dietary fiber. Dr. Miller is a big fan for the usual reasons: fiber makes you feel full, scrapes the inside of your colon, etc. The satiety argument requires a very narrow view of appetite regulation, as I discuss here and here. It is true that the mechanical distension of the stomach contributes to satiety, probably both via nervous system signals and suppression of ghrelin secretion. But those are only two of the many nervous and hormonal signals indicating the macronutrient and energy content of food, energy availability in the body, etc. Fiber has no effect on these other aspects. That's to be expected - otherwise we'd be able to eat only highly fibrous food with little energy content, feel full, and wind up starving to death. Not a very good evolutionary strategy.

Dr. Miller discusses the origins of the fiber hypothesis, from Denis Burkitt's work in Africa. Taubes gives a more detailed history in GCBC. It is interesting to note that Burkitt's hypothesis originated from Peter Cleave's "saccharine-disease hypothesis", namely that refined carbohydrates were at the root of a host of modern diseases. Burkitt was initially impressed with Cleave's work, noting that Cleave possessed "perceptive genius, persuasive argument and irrefutable logic." But over time he modified the argument to accent the absence of fiber rather than the presence of refined carbohydrates. Now, there's nothing wrong with this hypothesis per se, but one must be aware that testing refined vs. unrefined carbohydrate in the diet does not distinguish between these hypotheses: unrefined carbohydrate has more fiber. And there is other evidence that absence of fiber is not the health issue it's made out to be. Says Taubes:

Burkitt and Trowell called their fiber hypothesis a "major modification" of Cleave's ideas, but they never actually addressed the reasons why Cleave had identified refined carbohydrates as the problem to begin with: How to explain the absence of these chronic diseases in cultures whose traditional diets contained predominantly fat and protein and little or no plant foods and thus little or no fiber - the Masai and the Samburu, the Native Americans of the Great Plains, the Inuit? And why did chronic diseases begin appearing in these populations only with the availability of Western diets, if they weren't eating copious fiber prior to this nutrition transition? Trowell did suggest, as Keys had, that the experience of these populations might be irrelevant to the rest of the world. "Special ethnic groups like the Eskimos," he wrote, "adapted many millennia ago to special diets, which in other groups, not adapted to these diets, might induce disease." Trowell spent three decades in Kenya and Uganda administering to the Masai and other nomadic tribes, Burkitt had spent two decades there, and yet that was the extent of the discussion.


Sounds like Keys, Burkitt, and Trowell could all use the boot treatment I described above. Taubes' discussion highlights another related hypothesis, namely that red meat is bad, for which the argument at least partially stems from the fiber hypothesis. We can't distinguish between the unrefined carbohydrate and fiber hypotheses by exchanging refined for unrefined carbohydrates, but we can distinguish between the red-meat and fiber hypotheses by exchanging red meat for whole fruits, vegetables, and grains. Taubes addresses this as well:

By the end of the 1990s, clinical trials and large-scale prospective studies had demonstrated that the dietary fat and fiber hypotheses of cancer were almost assuredly wrong, and similar investigations had repeatedly failed to confirm that red meat played any role* (*Those clinical trials that tested the dietary-fat-and-fiber hypotheses of cancer, as we discussed earlier, replaced red meat in the experimental diets with fruits, vegetables, and whole grains. When these trials failed to confirm that fat causes breast cancer, or that fiber prevents colon cancer, they also failed to confirm the hypothesis that red-meat consumption plays a role in either.) Meanwhile, cancer researchers had failed to identify any diet-related carcinogens or mutagens that could account for any of the major cancers. But cancer epidemiologists made little attempt to derive alternative explanations for those 10 to 70 percent of diet-induced cancers, other than to suggest that overnutrition, physical inactivity, and obesity perhaps played a role.

Long story short: when scientists looked specifically for a causal link between fiber and cancer prevention or red meat and cancer causation, they found diddly-squat. Since these hypotheses were originally generated by weak epidemiological evidence, the contradictory evidence from more controlled trials weakens the hypotheses further. The refined carbohydrate hypothesis, on the other hand, provides considerable explanatory power and consistency with known biological properties of cancer, such as the necessity for cancer growth to be driven by insulin and a ready supply of glucose. The refined carb hypothesis also explains all of Dr. Miller's epidemiological observations. Given the above, it certainly seems more likely that, for instance, the absence of colon cancer in Cameroon has less to do with the presence of fiber than the absence of refined carbohydrates.

Finally, Dr. Miller appears to be significantly misinformed as to what is considered a "low carbohydrate diet". She generally uses the term "high protein diet", which underscores the root of the misunderstanding. The term "high protein diet" presumably indicates that it is low in both carbohydrates AND fat, which is problematic for health. One of my favorite books (which The Jungle Effect has inspired me to re-read) is Marvin Harris' Good to Eat. Harris notes that indigenous cultures never get the bulk of their calories from lean protein. Energy is invariably provided by fat and/or carbohydrate, with the amount of protein being remarkably constant across cultures. Dr. Miller correctly notes the underlying reason for this: protein is "dirty" fuel. Unlike sugar or fat, protein contains nitrogen, so its conversion to energy essentially results in pollution from nitrogenous wastes and other substances, which our kidneys then need to filter. A high-protein diet can overload the body's ability to dispose of these toxins, leading to sickness and ultimately death, even though plenty of food is provided. Dr. Miller relates the experiences of some of her patients on high-protein diets: they basically felt unsatisfied and craved carbohydrates. This matches nicely with the phenomenon of rabbit starvation, where pioneers would feast on extremely lean rabbit meat, only to still be hungry. After some time they would eat 3 to 4 pounds of rabbit at a sitting, yet ultimately would waste away and essentially starve to death with full stomachs. It is no surprise that a high protein diet (as defined by restriction of both carbohydrate and fat) is doomed to failure.

But "low carbohydrate" does not necessarily imply "high protein". Indeed, had Dr. Miller read any of the large number of books on low carbohydrate diets, Googled the topic, or consulted with any number of experts, she would have found the recommendations are generally for high fat. This is at odds with the idea that fat is unhealthy, of course, so it may not be surprising that those adhering to current nutritional dogma would infer that any healthy diet must be low fat, so lowering carbohydrate leaves only raising protein. As discussed in the last post, our beliefs are always conditioned on other beliefs, and obviously placing undue weight on some supporting hypothesis leads to poor inferences. Even a moment's consideration of the Inuit diet, for example, would indicate the true nature of a healthy low carbohydrate diet.

Anyway, I could spend many more paragraphs discussing The Jungle Effect (I made a ton of notes while reading - something that the Kindle, for all its flaws, is good for). But I think you get the point. Hopefully Dr. Miller takes the time to read this review in the intended spirit, which is not to bash her work. I think some of what she espouses has value, and I love the idea of studying indigenous diets, provided it's done in an appropriately broad context. But the conclusions one draws from this study need to be consistent with all of the actual relevant scientific evidence, not just the arbitrary socially-driven beliefs that form "consensus". Otherwise you risk coming to unjustifiable and/or inconsistent conclusions and sub-optimal recommendations, forcing the addition of ad hoc hypotheses, artificial dietary rules, etc. As I've said, changing lifestyles is hard enough without a lot of extra rules to follow. A healthy diet should be easy. Making it hard, and justifying it with inconsistent rationales, just reduces the chances that people will actually make the change and improve their health.

Monday, October 6, 2008

Information, Knowledge, and Wisdom

There are three kinds of lies: lies, damned lies, and statistics.
Benjamin Disraeli, British politician (1804 - 1881)


Statistics: The only science that enables different experts using the same figures to draw different conclusions.
Evan Esar, Esar's Comic Dictionary, American Humorist (1899 - 1995)


Where is the knowledge that is lost in information? Where is the wisdom that is lost in knowledge?
T.S.Eliot (1888 - 1965)

As discussed in the last post, we are today faced with a dizzying array of contradictory "recommendations" when it comes to health and lifestyle, particularly what to eat. How is it possible that all of these experts come to such differing conclusions? The quotes above illustrate certain root aspects of the problem. First, there is a gross misunderstanding of what "statistics" is really meant to accomplish, how it is properly applied, and what the answer "means". Second, and at least partly because of the first issue, the vast quantities of hard information ("facts" or "data") we have available tend to get filtered and twisted to erroneous conclusions, often supporting goals (e.g. "sell more books") other than maximizing the health of the population. Finally, despite the apparent rigor, technology, and expertise applied to answering key scientific questions, the process of actually turning scientific results into useful decisions is generally an exercise in muddled thinking rather than rational inference.

The first two quotes embody general perceptions about "statistics". Those holding these views, by the way, include most professional scientists. When I worked as a research scientist in gamma-ray astrophysics, I can't tell you how many times colleagues would say things like "You can get any answer you want using statistics". Of course, they were quite happy to (supposedly) get the answers they wanted via application of statistical methods, but that irony isn't the point, because the incontrovertible truth is precisely the opposite: there's only one right answer. Esar's quote embodies this attitude, so let's pick it apart. First, statistics is not a "science". Science involves the broader exercise of observing, modeling using mathematics, and interpreting observations in terms of those models. Statistics is just math, and as in any well-posed mathematical problem, there's only one right answer. I've thought of only three ways in which two scientists could come up with different answers to the same statistical problem:
  1. Somebody made a math error (happens more often than you think, see Point 3).
  2. They used different input information (in which case it's not really the same problem in both cases).
  3. They used different approximate methods (often badly) to solve the problem.
You may remember learning statistics in school. Most people hate it, because by and large, it doesn't really make any sense. "Statistics" as it is usually thought of consists of a big cookbook, recipes for diddling around with "data" and turning them into some small set of numbers called "statistics", like mean and variance. But there are no core concepts tying the recipes together, just lots of rules like "if the data looks like this and you want to ask this question, then use such-and-such technique". What you may not remember (or even have been taught) is that "such-and-such technique" is usually not an exact mathematical result, but only a reasonable approximation when the "if" part of the rule is close to being true. This part is forgotten far too often by people who should know better. For instance, many statistical tests only become exact in the limit that the number of degrees-of-freedom (# of data points minus # of model parameters) goes to infinity. If such tests are to be good approximations, you need lots of degrees of freedom, but I've seen professional scientists in peer-reviewed publications blithely apply tests that are only valid in the infinite limit to a model with three parameters and data containing only five points. Last time I checked, "5-3=2", and "2" is not a particularly good approximation of "infinity".

But the problem runs even deeper, because even when scientists do the math right and apply the recipes under the appropriate conditions, more often than not they're still not answering the right question. Pure science attempts to answer questions like "does eating lollipops induce insulin resistance?" Applied science wants to actually make decisions, e.g. "should I eat this lollipop, given what I know about its effects on insulin resistance, the effects of insulin resistance on health, etc.?" Almost always, the answers given in scientific papers are of the form "We compute a 95% probability that we would have observed these data if lollipops cause chronic insulin resistance". That's a much different statement than "We compute a 95% chance that lollipops chronically increase insulin resistance given the data we observe and other prior information," and only this latter statement is of any use on the applied end of things, when deciding whether or not to eat lollipops.

A very simple example might help to illustrate the issues. Suppose somebody wants you to gamble on a coin flip: you bet $1, and if the coin comes up heads, you win your dollar plus another $2. If the coin comes up tails, you lose your $1. How do you decide whether or not to play this game? Right from the start, we can see standard statistics is going to have trouble, because you have no data. Intuitively you might guess that there is a 50% chance of heads, and indeed if you had no other information at all, you would be right. With two possibilities, and no information to distinguish which would be more likely, you would assign equal probabilities to both outcomes. Our decision to play or not comes down to how much money we'd expect to have in each case. If we don't play, we have a guaranteed $1 in our pocket. If we do play, then with no other information there's a 50% chance we'll have $3, and a 50% chance we'll have zero, so $3*0.5 + $0*0.5 = $1.50. On average, we have $1.50 if we play and $1 if we don't, so we should play, given that we lack any other information about the game.
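Here's that arithmetic as a minimal Python sketch (the break-even point at p = 1/3 is my own addition, not part of the story above):

```python
# Expected winnings for the coin-flip game described above: stake $1, walk away
# with $3 on heads (the stake back plus $2), or with $0 on tails.
def expected_if_play(p_heads):
    """Average money in pocket if we play, given our probability for heads."""
    return p_heads * 3.0 + (1.0 - p_heads) * 0.0

print(expected_if_play(0.5))   # 1.5 -- versus the guaranteed 1.0 for not playing
# The decision flips once p(heads) drops below 1/3 (where 3 * p = 1), which is why
# the tip about the scam artist in the next paragraph can rationally change our choice.
print(expected_if_play(0.25))  # 0.75 -- now keeping the dollar is the better option
```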

But what if our information were different? For example, suppose somebody we trust tells us that she knows the coin-flip guy is a scam artist, having been arrested several times. Now what do you do? Most people would now intuitively keep their money, and indeed a mathematical analysis would likely indicate that this is the proper course of action, as on average you would now expect to lose more often than not. But notice that the only thing that has changed is our knowledge about the game: the coin being flipped is the same, as is the person doing the flipping, and presumably the laws of physics governing coin flips. Purposely ignoring this new information would be fantastically stupid, particularly given that it came from a trusted source. Note that we still have no "data" in the traditional sense.

What if we had some data? Suppose the person we're playing against offers to let us flip the coin one hundred times before betting, and 99 out of 100 come up heads. Now you have some more information about the coin. Assuming that you're not doing something to introduce a bias, you have some additional confidence that the coin itself is biased towards heads, which is even more inducement to play the game, because the likelihood that you would have observed 99 heads out of 100 flips would be low for a fair coin. But that data and the associated likelihood are only part of the picture: do you now ignore the input of your trusted friend? Does your data trump the information she provided? Obviously it cannot. The aforementioned likelihood of seeing 99 of 100 flips come up heads was calculated with the a priori assumption that the game was fair. But your friend's input tells you that fairness is unlikely, and given your other information about scam artists, the likelihood that you saw 99 heads when you were flipping the coin and no money was at stake should be considerably higher. And the likelihood of 99 heads answers the wrong question anyway. Our decision to play or not must be based on the probability that the coin, when flipped by our opponent, will come up heads, not on the probability that you would have flipped 99 heads in 100 tries assuming the coin was fair.
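For the curious, here's a quick binomial check of that likelihood claim; the 99%-heads coin is just one stand-in I've picked for "heavily biased":

```python
from math import comb

def binomial_prob(k, n, p):
    """Probability of exactly k heads in n flips, if each flip lands heads with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binomial_prob(99, 100, 0.50))  # ~7.9e-29: 99 heads is essentially impossible for a fair coin
print(binomial_prob(99, 100, 0.99))  # ~0.37: perfectly ordinary for a coin that lands heads 99% of the time
```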

The key point here is that information is information is information. The data gathered in a particular experiment is just more information which can be used to update our beliefs in different hypotheses. But most experiments don't start from zero, where the gathered data is the only available information. Usually others have conducted experiments that gathered other data. There's generally other relevant information as well. In the lollipop/insulin resistance example, we know that the glucose in the lollipop raises insulin, and that the fructose may at least temporarily contribute to insulin resistance. Any reasonable analysis must include this additional information when evaluating our belief in the hypothesis under test ("lollipops induce chronic insulin resistance"). Ignoring this information is no different than arbitrarily excluding data from our analysis (after all, data is just a kind of information); yet this is precisely how most scientific results are presented and interpreted.

Have a headache yet? I know, I know, this is some tough material. The whole area of reasoning under uncertainty is mathematically and philosophically deep. This is about the fourth time I've tried writing a reasonably accessible post, and have concluded that it's fundamentally hard to talk about. But if you're going to make good decisions about your health, it's good to have at least some idea where the flaws are in most scientific analyses, and also how to think about the issues in the "right way" so as not to be misled. So let's take a moment to review the key lessons you should take away at this point:
  1. Though I didn't explicitly say it above, the notion of a probability really reflects the degree of belief in some statement (e.g. "the next coin flip will be heads"). Probabilities are real numbers between 0 and 1 (or 0% and 100%), where 0 represents absolute belief that the statement is false, and 1 absolute belief that it is true.
  2. We don't necessarily need "data" to assess the probability that a statement is true; any type of information will do. There's nothing special about data, they're just more information to be used in updating degrees of belief (probabilities).
  3. To properly assess the probability of a hypothesis, we must include not just the data, but also any other relevant background information.
  4. If the outcome of a decision you're making depends on a hypothesis being true, then you need to know the probability of that hypothesis being true given all of the relevant available information. The probability that some particular data would have been observed assuming the hypothesis to be true necessarily ignores relevant information, precisely because it assumes the truth of the hypothesis without accounting for the possibility that the hypothesis is false. This is impossibly circular: you can't assess the degree of belief in a hypothesis if your analysis uniformly assumes it to be true.
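To make Lesson 4 concrete, here's a small numerical sketch with made-up numbers (the update rule it uses is derived later in the post):

```python
# Invented numbers for illustration: a hypothesis we initially consider unlikely (1% prior),
# data the hypothesis explains well (95%), but which could also arise 10% of the time anyway.
prior_H    = 0.01
p_D_if_H   = 0.95   # the kind of number papers usually report (probability of data given H)
p_D_if_not = 0.10   # probability of the same data if H is false

p_D = p_D_if_H * prior_H + p_D_if_not * (1 - prior_H)   # overall probability of the data
p_H_given_D = p_D_if_H * prior_H / p_D                  # what the decision actually needs

print(p_H_given_D)  # ~0.088 -- "95% probability of the data given H" coexists with <9% belief in H
```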
Hopefully you can start to see why nutritional science and the associated recommendations are all over the place. In practice, scientists very often
  • Selectively ignore prior information;
  • Misapply statistical approximations to calculate the wrong number (probability of data assuming hypothesis is true);
  • Interpret their results as indicating absolute truth or falsehood;
  • Perform this interpretation via vague mental gymnastics rather than rigorous mathematics.
This situation is all the more vexing in the Information Age, where everyone basically has access to the same set of information. Thus all scientists can quickly call up published papers, experimental databases, etc. The input information is effectively the same for everybody, yet the output conclusions are nearly as numerous as scientists themselves. But looking at the four bullets above, we see ample opportunity to create divergent conclusions and recommendations with no hope of reconciling them. The last two are particularly troubling in the context of nutrition, because there's no hope of making decisions about what to eat when conflicting hypotheses are presented in terms of absolute truth/falsehood, with little visibility as to the actual mental manipulations that go into making that assessment. Lesson 4 above tells us that we need a probability of truth in order to make a decision, because we have to evaluate the expected outcome accounting for the possibility that the relevant hypothesis(es) may be false. Go back to the coin-flip game. Suppose we were forced to assert with absolute certainty whether the next flip would be heads. We get a different decision for "true" than for "false"; but we don't actually know what the outcome of the flip will be, and there can only be one right decision in terms of maximizing our expected winnings given our information about the uncertain outcome of the coin flip.

Now it may sound as if the picture is bleak for science, but rather amazingly, science seems to eventually bumble around to the correct conclusions. It's just a highly inefficient process because of the issues above. Scientists tend to hold on to certain "widely believed" hypotheses like grim death regardless of the actual evidential support; but eventually there comes a point when evidence for an alternative hypothesis becomes so overwhelming it becomes impossible to ignore (if you're paying attention, you can watch this process at work right now for low-carb diets). Science would benefit greatly, of course, by adopting a more rigorous analytical approach addressing the issues above. Such an approach exists, generally denoted "Bayesian Statistics". I don't like this term, since the methodology doesn't really focus on "statistics" per se (rather on probabilities), and its namesake, the Rev. Thomas Bayes, really made only a tangential contribution to the whole business. "Probability Theory" is a more apt term, reflecting the idea that it extends logic to the case where we're not 100% sure of the truth or falsehood of statements.

At the end of the post, I'll briefly discuss Probability Theory further and give a few references for those who are interested in the technical details. But for those just trying to puzzle through the maze of information presented by the media, doctors, etc. we can borrow some of the ideas from Probability Theory, putting together a way of thinking about evidence (information), hypotheses (knowledge), and decisions (wisdom). The T. S. Eliot quote at the top describes the situation we wish to avoid, one that many people experience now, struggling to make wise choices when faced with an avalanche of information and knowledge from different sources.

So let's see how we can apply the four lessons above in everyday thinking.
  1. Probabilities are just numbers representing degrees of belief. I'm not suggesting you carry a bunch of numbers around in your head to track your beliefs, but do recognize that most ideas are neither absolutely true nor absolutely false. We intuitively recognize that such absolutism is pathological, as seen by the often bizarre irrationality exhibited by dogmatists, who refuse to move from a position regardless of the weight of the evidence against that position. Probability Theory encapsulates that behavior mathematically. When new evidence is introduced, Probability Theory gives a formula for updating your beliefs (see math below), basically multiplying your current probability by the weight of that new evidence. But zero times anything is zero, i.e. if you were absolutely sure your current idea was right and all others were wrong, no amount of evidence would ever change your probability. So make sure you are always flexible in reassessing your beliefs. Mental discipline is required. The brain's natural tendency is to seek absolutes, as exhibited by the phenomenon of cognitive dissonance. Learn to be comfortable with uncertainty. Decisions can still be made in the absence of certainty; as Herodotus said, "A decision was wise, even though it led to disastrous consequences, if the evidence at hand indicated it was the best one to make; and a decision was foolish, even though it led to the happiest possible consequences, if it was unreasonable to expect those consequences."
  2. Just because you're not a scientist (or even if you are) and don't have detailed access to scientific data, it does not mean you can't weigh evidence and update your beliefs. Information is information is information, whether it's numbers or a brief newspaper story. The trick is in getting the weight in the right ballpark. A good rule of thumb: individual reports or results generally should not sway your belief very much. Strong belief is usually built on multiple independent results from different sources.
  3. Be sure to include all of the information you have available. Another manifestation of cognitive dissonance: when presented with evidence contradicting a strong belief, we give it zero weight. That's a mistake. Contradictory evidence should lessen your belief at least a little, like it or not. Do include evidence from all sources, including anecdotal and personal experience. Just be careful not to overweight that evidence. Be aware that truth is usually conditional. Take the following hypothesis: "You can't become obese on a zero-carb diet." The truth of that hypothesis is conditional on other hypotheses, e.g. "Insulin is the hormone governing fat storage" and "Insulin is primarily driven by carbohydrate consumption". Changes in the belief of these supporting hypotheses necessarily change the belief in the main hypothesis; for example, knowledge of the ASP pathway for fat storage changes our belief that insulin runs the show, and hence modifies our belief that zero-carb diets make you immune to obesity.
  4. We saw at the beginning that there are only three ways that scientists could disagree when assessing hypotheses. Adapting that to mental inference, disagreement implies that one or both people are irrational and/or have different information. Don't waste your time arguing with irrational people. Anybody who says things like "we'll have to agree to disagree" is irrational, because they have no information supporting their position and/or are unwilling to accept information that may modify their beliefs. But if you find yourself in disagreement with someone who seems rational, then engage in discussion to share the differing information that is at the root of your disagreement. You may not come to agreement - it's difficult to extract all relevant information and knowledge from somebody's head - but you at least will likely learn something new.
  5. Decisions require not only the quantification of information as probabilities (or at least some qualitative mental equivalent), but also a clearly defined goal. The goal in our coin-flip game was straightforward: on average, maximize the amount of money in your pocket. It's not so easy to quantify the goal of maximizing health. People try, which is why doctors love to measure things like cholesterol and blood sugar, but such metrics can only provide a narrow view of one particular aspect of overall health (and even if they didn't, treatment decisions are generally not properly analyzed anyway). Treatment decisions often involve modification of one or a small set of such numbers, which is incredibly myopic as it ignores overall health (hence the spectacular failure of "intensive therapy" for Type II diabetics, which lowers blood sugar by pumping them full of insulin). Remember also to include the potential long-term effects of your decisions, e.g. cranking up the insulin of those Type II diabetics lowers blood sugar in the short term, but increases the probability of early death, which presumably outweighs the short-term benefits.
The above is sort of a loose mental approximation to Probability Theory and Decision Theory. Those doing actual scientific research should be doing inference within the rigorous mathematical framework. I want to briefly discuss this, and I'll provide a few references as well. A full discussion of the subject would (and does) fill one or more large books; yet the conceptual basis is fairly simple, so I'll focus on that. The important thing to remember here is this: it's just math. There are a small number of concepts that we must accept axiomatically, and everything else follows mathematically. Arguments against the use of Probability Theory for scientific inference must necessarily focus on the fundamental concepts, because everything else is a mathematically rigorous result following from those concepts, listed below:
  1. Degrees of belief (probabilities) are represented by real numbers.
  2. Qualitative correspondence with common sense, e.g. if your belief in some background information increases (e.g. "Coin-flip guy is cheating") then so should your belief in a hypothesis conditioned on that information ("I will lose the coin flip when coin-flip guy does the flipping").
  3. The procedure for assessing degrees of belief (probabilities) must be consistent, where consistency can be described in three ways:
    1. If a conclusion can be reasoned out in more than one way, then all ways must lead to the same answer.
    2. Conclusions must be reached using all of the available evidence.
    3. Equivalent states of knowledge lead to the same probabilities based on that knowledge.
That's it. The whole of Probability Theory follows from these ideas, which seem to form a sensible and complete set of principles for scientific inference.

To make use of Probability Theory, we need some mathematical rules for manipulating the probabilities of different propositions. A little notation first: let A|C mean "A is true given that C is true". AB|C means "A and B are true given C", while A+B|C means "A or B is true given C". Let ~A|C mean "A is false given C". If p(A|C) denotes the probability that A is true given C, then we have the following product and sum rules:
  • p(AB|C) = p(A|C) p(B|AC) = p(B|C) p(A|BC)
  • p(A + B|C) = p(A|C) + p(B|C) - p(AB|C)
From the product rule, it follows that absolute certainty of truth must be represented by the value 1, since 1 times 1 equals 1. Further, since A and ~A cannot be simultaneously true, we wind up with 0 representing absolute certainty of falsehood, e.g. if p(A|C) = 1, then p(~A|C) = 0. It can be proven that these rules are uniquely determined, assuming probabilities are represented by real numbers (Concept 1) and structural consistency (Concept 3.1).
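If it helps to see the rules in action, here's a toy numerical check in Python. The joint distribution is made up, and the conditional probabilities are read off using the product rule itself, so this only illustrates how the pieces fit together rather than proving anything:

```python
# Probabilities for two binary propositions A and B, everything conditioned on the
# same background information C. The four entries are arbitrary but sum to 1.
joint = {  # p(A=a, B=b | C)
    (True, True): 0.30, (True, False): 0.20,
    (False, True): 0.10, (False, False): 0.40,
}

p_A  = sum(p for (a, b), p in joint.items() if a)    # p(A|C)  = 0.50
p_B  = sum(p for (a, b), p in joint.items() if b)    # p(B|C)  = 0.40
p_AB = joint[(True, True)]                           # p(AB|C) = 0.30
p_B_given_A = p_AB / p_A                             # p(B|AC) = 0.60
p_A_given_B = p_AB / p_B                             # p(A|BC) = 0.75

# Product rule: p(AB|C) = p(A|C) p(B|AC) = p(B|C) p(A|BC)
assert abs(p_AB - p_A * p_B_given_A) < 1e-12
assert abs(p_AB - p_B * p_A_given_B) < 1e-12

# Sum rule: p(A+B|C) = p(A|C) + p(B|C) - p(AB|C)
p_A_or_B = sum(p for (a, b), p in joint.items() if a or b)   # 0.60
assert abs(p_A_or_B - (p_A + p_B - p_AB)) < 1e-12
```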

That's most of Probability Theory, IMHO far more conceptually elegant and mathematically simple than the mess of statistics most of us were taught. That's not to say that actually solving problems is necessarily easy, but with a sound conceptual basis and simple rules, it's a lot easier to solve them consistently, get numbers that actually make sense, and combine different scientific results to understand their impact on various hypotheses.

This last point is important. We discussed earlier how new information ("data") must be used to update our beliefs. We shouldn't look at two different scientific results and try to pick between them. Rather our belief in a hypothesis derived from the first result must be adjusted when we get the second result. Try figuring out how to do this using standard statistics. The Probability Theory recipe for this follows trivially from the product rule. Let's rewrite the second equality in the product rule as follows:
  • p(H|DI) p(D|I) = p(D|HI) p(H|I)
where "H" represents a hypothesis being tested, "D" some observed data, and "I" our prior information (which might include results of other experiments, knowledge of chemistry, etc.) Each term represents a different proposition:
  • p(H|DI) : The probability that our hypothesis is true, given the observed data AND background information. This is called the posterior, and is the key quantity for scientific inference and decision-making.
  • p(D|I) : The probability that we would have observed the data given the background information independent of the hypothesis, called the evidence.
  • p(D|HI) : The probability that we would have observed the data given both the hypothesis AND the background information, called the likelihood.
  • p(H|I) : The probability that the hypothesis is true given only the background information, denoted the prior.
With a single algebraic step we can solve for the posterior, which is the quantity of interest:
  • p(H|DI) = p(D|HI) p(H|I) / p(D|I)
So we now have a very simple recipe for updating our beliefs given new data. I find it to be intuitively nice and tidy: to obtain your new belief from the old, multiply the old by the ratio of the likelihood (the probability of measuring the data given the hypothesis and other information) to the evidence (the probability that you would have seen the data in any case). If your hypothesis increases the probability of obtaining that dataset, then your belief in that hypothesis increases accordingly, and vice versa. If your hypothesis tells you nothing about the data, then the ratio is 1, and your belief does not change.
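Here's that recipe applied to the coin-flip data from earlier, as a minimal sketch. The 10% prior and the two-hypothesis menu (a fair coin versus a 99%-heads coin) are assumptions I've chosen purely for illustration:

```python
from math import comb

def binom(k, n, p):
    """Probability of k heads in n flips for a coin landing heads with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

prior_H      = 0.10                   # p(H|I): modest initial belief that the coin is rigged toward heads
likelihood_H = binom(99, 100, 0.99)   # p(D|HI): 99 heads in 100 flips if the coin is 99% heads
likelihood_F = binom(99, 100, 0.50)   # p(D|~HI): the same data from a fair coin

# Evidence p(D|I): the probability of the data averaged over both hypotheses.
evidence = likelihood_H * prior_H + likelihood_F * (1 - prior_H)

posterior_H = likelihood_H * prior_H / evidence   # Bayes' Theorem
print(posterior_H)                                # ~1.0: the data overwhelm the modest prior
```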

This formula goes by the name of Bayes' Theorem, so named for the Rev. Thomas Bayes, who originally derived a form of it published in a posthumous paper in 1763. The version shown above was actually published by Laplace in 1774, so we see these ideas have been around for a while. The power of Bayes' Theorem is hopefully clear: given some prior probability, i.e. our degree of belief in a hypothesis, we know how to update the probability when new data is observed, independent of how we arrived at our prior probability. So no matter what experiments I did (or even if no experiments have been done) to arrive at p(H|I), I can simply update that belief given my new data. Note that the term usually reported in scientific results is the likelihood, which is only part of the story.

If you've ever looked at a "meta-analysis", where somebody tries to combine results from many different experiments, you may have noted that it involves a lot of statistical pain, and often includes cutting out some results (e.g. favoring clinical over epidemiological studies), which violates the whole idea of using all available information. This sort of combination would be straightforward using Probability Theory, presuming all of the original results to be combined were also derived with Probability Theory. No reason to leave out some of the results due to "lack of control". A proper Probability Theory treatment would, for example, quantitatively account for the large number of "uncontrolled variables" (which really implies a lack of information connecting cause and effect) in a population study and adjust the probabilities accordingly.
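As a small sketch of what that combination looks like, here the posterior from one toy "study" simply becomes the prior for the next; the model (a 60%-heads coin versus a fair one) and the flip counts are invented for illustration:

```python
from math import comb

def binom(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def update(prior_H, k, n):
    """One Bayes update: likelihoods under H (60% heads) and under the fair-coin alternative."""
    like_H, like_F = binom(k, n, 0.60), binom(k, n, 0.50)
    evidence = like_H * prior_H + like_F * (1 - prior_H)
    return like_H * prior_H / evidence

p = 0.10                 # prior belief in H before any data
p = update(p, 13, 20)    # "study 1": 13 heads in 20 flips
p = update(p, 30, 50)    # "study 2": 30 heads in 50 flips
print(p)                 # ~0.41

# Updating once on the pooled data gives the same posterior (up to rounding), because the
# flips are independent given the coin's bias -- no study needs to be thrown away.
print(update(0.10, 43, 70))
```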

Now, the few of you who have actually made it this far may be wondering why, if Probability Theory is so much better than standard statistics, it is not widely applied. As with many such situations in science, the answer is complicated, and at least partly tied up with human psychology and sociology. You can read more about it in the references, but I'll hit a few high points. It is interesting to note that Probability Theory was accepted and used prior to the mid-19th century or so. Laplace, for example, used it to estimate the mass of Saturn with considerable accuracy, so much so that an additional 150 years of data improved the result by only 0.63%. Despite this, there were some technical problems. One is that the mathematical equations arising from application of Probability Theory can be difficult or impossible to solve via pencil and paper. This is largely alleviated by using computers to do the calculations, but 19th century scientists did not have that option.

There were also philosophical issues. Nineteenth-century thinkers were pushing toward the idea that there existed some sort of objective scientific truth independent of human thought. The idea that probabilities represented degrees of belief was apparently too squishy and subjective, so they adopted the idea that one should "let the data speak for themselves", and that probabilities reflect the relative frequencies of different measured outcomes in the limit of an infinite number of observations. So if you flip a fair coin infinitely many times, exactly half of the outcomes (50%) would be heads. At the core of the philosophical disagreement lay a couple of technical difficulties. First, there was no known reason to accept the sum and product rules as "right" within the context of Probability Theory (one could propose other rules), yet they arose naturally from the frequency interpretation (Cox later showed the rules could be uniquely determined assuming the basic concepts of Probability Theory). Second, Bayes' Theorem tells us how to update our beliefs given data. But if you "peel the onion" so to speak, going back to the point before any data had been collected, how do you assign the prior probability p(H|I)?

This proved to be a sticky problem. Special cases could be solved, e.g. it's clear that for the coin-flip problem with no other information you should assign 50%/50%. But for more complicated problems where one had partial information, no general method existed for calculating a unique prior. It wasn't until the 1950's that physicist Edwin Jaynes successfully addressed this issue, borrowing ideas from information theory and statistical physics. Jaynes introduced the idea of Maximum Entropy, which basically told you to assign probabilities such that they were consistent with the information you had, while adding no new information (information theory tells you how to measure information in terms of probabilities; entropy is just a measure of your lack of information). The underlying arguments are deep, stemming from the idea of Concept 3.3 that equivalent states of knowledge represent a symmetry, and that your probability assignments must reflect that symmetry. To do otherwise would be adding information without justification. But the horse was out of the barn at that point. The frequency approach had been used in practice for decades, and even in the 50's the computing technology required for practical widespread use of Probability Theory did not exist.
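For the curious, here's a small numerical version of the kind of problem Maximum Entropy handles, loosely patterned on Jaynes' well-known dice example (numpy and scipy are assumed purely for convenience): all we know about a six-sided die is that its long-run average roll is 4.5 rather than the fair 3.5, and we want the least-informative probability assignment consistent with that single fact.

```python
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

# The maximum-entropy distribution under a mean constraint has the exponential form
# p_i proportional to exp(lam * i); all that remains is to pick lam to hit the mean.
def mean_for(lam):
    w = np.exp(lam * faces)
    return (w / w.sum()) @ faces

lam = brentq(lambda l: mean_for(l) - 4.5, -5.0, 5.0)   # solve mean(lam) = 4.5
p = np.exp(lam * faces)
p /= p.sum()

print(np.round(p, 3))   # probabilities tilt smoothly toward the high faces
print(p @ faces)        # ~4.5: consistent with what we know, and nothing more
# With no constraint beyond "the probabilities sum to 1", the same principle
# returns the uniform assignment of 1/6 per face.
```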

Today, of course, computers are cheap and ubiquitous, and indeed the use of Probability Theory is beginning to increase. But the progress is slow, and as is often the case, widespread change will require the next generation of scientists to really grab the idea and run with it while the current generation fades away.

Whew, that was quite the marathon post. I've hardly done the topic justice, but hopefully you at least got some ideas about what's wrong with how scientific inference is presently done, how you can avoid being confused by apparently conflicting results, and where the solution lies. Below are the promised references.
  • Probability Theory: The Logic of Science, E. T. Jaynes: The "bible" of Probability Theory. Jaynes was perhaps the central figure in the 20th century to advance Probability Theory as the mathematical framework reflecting the scientific method. This book is jam-packed with "well, duh" moments, followed by the realization that almost everyone in science reasons in ways which range from unduly complex and opaque to mathematically inconsistent. Not an easy book to read, full of some difficult math, but also plenty of conceptual exposition and very clear thinking about difficult topics. Jaynes does tend to rant a bit at times, but usually against determined stupidity. Required reading for all scientists, and anybody who needs to make critical decisions in the face of incomplete information.
  • Articles about probability theory: an online collection, including the works of Jaynes. You can download the first three chapters of "Probability Theory" in case you want a taste before plunking down 70 bucks. I particularly like this article detailing the historical development.
  • Data Analysis: A Bayesian Tutorial, D. Sivia and J. Skilling: A more pithy presentation aimed at practitioners. Clearly written without being too math-heavy, "Data Analysis" hits the high points and illustrates some key concepts with real-world applications. A good place to get your feet wet before tackling the intellectual Mt. Everest of Jaynes' book.