This is a draft of a subsection that I have written for an upcoming review paper on the ketogenic diet. It is a comprehensive look at what we know about the usefulness of the ketogenic diet for the treatment of overweight and obesity. The review of which this subsection is a part will be published later this year.

Without further ado…

Ketogenic diets suppress appetite

The most well-known application of ketogenic dietary therapy (KDT) is for weight loss. This form of KDT popularly takes the form of a diet high in meat, eggs, dairy, high-fat nuts, and non-starchy vegetables, and restricted in carbohydrate-containing foods (such as most fruits, grains, and legumes) (Volek, Phinney, Kossoff, Eberstein, & Moore, 2011). Less commonly, plant-based versions of this diet rich in seeds, nuts, and oils—so-called Eco-Atkins—are also possible (Jenkins et al., 2014). Acute dosing studies using exogenous ketones suggest that ketones may suppress appetite by lowering peripheral ghrelin secretion (possibly via activation of GPR41 on enteroendocrine cells), as well as through direct effects on the brain (Stubbs et al., 2018). One study investigating the changes caused by the classical KD for refractory epilepsy in children has found an even more robust chronic reduction in ghrelin (Marchiò et al., 2019). Still another study showed that, after 13% weight loss over 8 weeks in 39 individuals, reintroduction of a carbohydrate-containing diet over two weeks increased ghrelin and hunger to above baseline, whereas prior to reintroduction, ghrelin and hunger had remained suppressed at baseline levels, defying the typical changes in ghrelin during weight loss (Sumithran et al., 2013). Correspondingly, systematic review and meta-analysis suggests a modest reduction in self-reported hunger and an increase in self-reported fullness and satiety during adherence to the ketogenic diet compared with baseline, pre-diet levels (Gibson et al., 2015).

Ketogenic diets may or may not provide a metabolic advantage

Findings from a recent Mendelian randomization study further suggest that the reduction in post-prandial plasma insulin achievable by KD (Hall et al., 2016) might be capable of reducing population-level obesity prevalence by between 1 and 10% (Astley et al., 2018). Consistent with this, Ebbeling et al. (2018, 2012) reported a “metabolic advantage” of isocaloric carbohydrate restriction that may substantially increase energy expenditure. However, in a widely circulated critical re-analysis available in pre-print, Kevin Hall and Juen Guo have contested the latest of these findings on a number of technical grounds (Hall & Guo, 2019). Moreover, as a recent meta-analysis shows, the findings of these studies (Ebbeling et al., 2018, 2012) are themselves extreme outliers among more than 30 similar controlled feeding studies, which on average show a slight metabolic advantage for low-fat diets (Hall & Guo, 2017).

Twelve-month weight loss: low-carb vs. low-fat diets

More importantly, the highest-quality reviews of trials comparing 12-month outcomes of low-carbohydrate versus low-fat diets under free-living conditions show a negligible weight loss difference—less than a kilogram—which might itself be fully accounted for by the glycogen- (and thus water-) depleting effect of the diet (Churuangsuk, Kherouf, Combet, & Lean, 2018). That the above mechanistic advantages do not seem to translate into a clinically substantial weight loss advantage for the KD may indicate that such advantages are short-lived (Stubbs et al., 2018), or that other factors—such as socioeconomics, social support, and other life circumstances (Hall, 2018), in the context of chronic exposure to hyperpalatable food cues (Lutter & Nestler, 2009)—are more biologically important. Indeed, among the high-quality systematic reviews with meta-analyses on low-carbohydrate diets (Churuangsuk et al., 2018), only one used an inclusion criterion of carbohydrate intake sufficiently low to produce ketonemia (Bueno, de Melo, de Oliveira, & da Rocha Ataide, 2013), and in only one of the 13 included RCTs were low-carbohydrate dieters still in the ketogenic range of carbohydrate intake by study end, with a 59% completion rate (Brinkworth, Noakes, Buckley, Keogh, & Clifton, 2009). The DIETFITS trial, which had 609 participants and a 79% completion rate, had subjects in the low-carbohydrate group start well into the ketogenic range (<20 g/day) for 8 weeks, then add carbohydrates back to their diets in increments of 5–15 g/day each week “until they reached the lowest level of intake they believed could be maintained indefinitely”. By the twelfth week, just four weeks later, dieters were on average consuming nearly twice the carbohydrate generally considered compatible with maintaining nutritional ketosis, and by twelve months, nearly triple (Gardner et al., 2018).

Adherence is the central issue for all diets, including the ketogenic diet

Indeed, as with other weight loss regimens, studies on low-carbohydrate diets show, on average, the start of progressive weight regain between 6 and 12 months (Athinarayanan et al., 2019; Hall, 2018), with subjects exiting the ketogenic range of carbohydrate intake even earlier (Hallberg et al., 2018). Accordingly, while the ketogenic diet shows a weight loss advantage at 6 months, at 12 months its results are similar to those of other well-formulated diets that attempt to exclude hypercaloric, high-reward, low-satiety foods, and the best-designed studies and highest-quality reviews detect little or no significant difference in 12-month weight loss (Churuangsuk et al., 2018; Gardner et al., 2018). The postulated mechanisms of appetite suppression and metabolic advantage may therefore not be clinically relevant, or, if they are, they may be relevant only in marginal cases not yet captured by the RCT literature. It is possible that, in the context of an adequately characterized and implemented behavioral modification intervention, these mechanisms could prove to make ketogenic diets superior to other approaches. What is most striking about the literature, however, given current crude approaches to behavioral modification, is that macronutrient composition—beyond a focus on whole foods—is a relatively insignificant factor in determining the degree of long-term weight loss.

Conclusion: leveraging ketogenesis for weight loss will require addressing the adherence problem

Astley, C. M., Todd, J. N., Salem, R. M., Vedantam, S., Ebbeling, C. B., Huang, P. L., … Florez, J. C. (2018). Genetic evidence that carbohydrate-stimulated insulin secretion leads to obesity. Clinical Chemistry, 64(1), 192–200.

Athinarayanan, S. J., Adams, R. N., Hallberg, S. J., McKenzie, A. L., Bhanpuri, N. H., Campbell, W. W., … McCarter, J. P. (2019). Long-Term Effects of a Novel Continuous Remote Care Intervention Including Nutritional Ketosis for the Management of Type 2 Diabetes: A 2-Year Non-randomized Clinical Trial. Frontiers in Endocrinology, 10, 348.

Brinkworth, G. D., Noakes, M., Buckley, J. D., Keogh, J. B., & Clifton, P. M. (2009). Long-term effects of a very-low-carbohydrate weight loss diet compared with an isocaloric low-fat diet after 12 mo. The American Journal of Clinical Nutrition, 90(1), 23–32.

Bueno, N. B., de Melo, I. S. V., de Oliveira, S. L., & da Rocha Ataide, T. (2013). Very-low-carbohydrate ketogenic diet v. low-fat diet for long-term weight loss: a meta-analysis of randomised controlled trials. British Journal of Nutrition, 110(7), 1178–1187.

Churuangsuk, C., Kherouf, M., Combet, E., & Lean, M. (2018). Low-carbohydrate diets for overweight and obesity: a systematic review of the systematic reviews. Obesity Reviews, 19(12), 1700–1718.

Ebbeling, C. B., Feldman, H. A., Klein, G. L., Wong, J. M. W., Bielak, L., Steltz, S. K., … Ludwig, D. S. (2018). Effects of a low carbohydrate diet on energy expenditure during weight loss maintenance: randomized trial. BMJ (Clinical Research Ed.), 363, k4583.

Ebbeling, C. B., Swain, J. F., Feldman, H. A., Wong, W. W., Hachey, D. L., Garcia-Lago, E., & Ludwig, D. S. (2012). Effects of Dietary Composition on Energy Expenditure During Weight-Loss Maintenance. JAMA, 307(24), 2627–2634.

Gardner, C. D., Trepanowski, J. F., Del Gobbo, L. C., Hauser, M. E., Rigdon, J., Ioannidis, J. P. A., … King, A. C. (2018). Effect of Low-Fat vs Low-Carbohydrate Diet on 12-Month Weight Loss in Overweight Adults and the Association With Genotype Pattern or Insulin Secretion. JAMA, 319(7), 667.

Gibson, A. A., Seimon, R. V., Lee, C. M. Y., Ayre, J., Franklin, J., Markovic, T. P., … Sainsbury, A. (2015). Do ketogenic diets really suppress appetite? A systematic review and meta-analysis. Obesity Reviews, 16(1), 64–76.

Hall, K. D. (2018). Maintenance of Lost Weight and Long-Term Management of Obesity. Medical Clinics of North America, 102(1), 183–197.

Hall, K. D., Chen, K. Y., Guo, J., Lam, Y. Y., Leibel, R. L., Mayer, L. E., … Ravussin, E. (2016). Energy expenditure and body composition changes after an isocaloric ketogenic diet in overweight and obese men. The American Journal of Clinical Nutrition, 104(2), 324–333.

Hall, K. D., & Guo, J. (2017). Obesity Energetics: Body Weight Regulation and the Effects of Diet Composition. Gastroenterology, 152(7), 1718-1727.e3.

Hall, K. D., & Guo, J. (2019). Carbs versus fat: does it really matter for maintaining lost weight? BioRxiv, 476655.

Hallberg, S. J., McKenzie, A. L., Williams, P. T., Bhanpuri, N. H., Peters, A. L., Campbell, W. W., … Volek, J. S. (2018). Effectiveness and Safety of a Novel Care Model for the Management of Type 2 Diabetes at 1 Year: An Open-Label, Non-Randomized, Controlled Study. Diabetes Therapy, 9(2), 583–612.

Jenkins, D. J. A., Wong, J. M. W., Kendall, C. W. C., Esfahani, A., Ng, V. W. Y., Leong, T. C. K., … Singer, W. (2014). Effect of a 6-month vegan low-carbohydrate (‘Eco-Atkins’) diet on cardiovascular risk factors and body weight in hyperlipidaemic adults: a randomised controlled trial. BMJ Open, 4(2), e003505.

Lutter, M., & Nestler, E. J. (2009). Homeostatic and Hedonic Signals Interact in the Regulation of Food Intake. The Journal of Nutrition, 139(3), 629–632.

Marchiò, M., Roli, L., Lucchi, C., Costa, A. M., Borghi, M., Iughetti, L., … Biagini, G. (2019). Ghrelin Plasma Levels After 1 Year of Ketogenic Diet in Children With Refractory Epilepsy. Frontiers in Nutrition, 6, 112.

Stubbs, B. J., Cox, P. J., Evans, R. D., Cyranka, M., Clarke, K., & de Wet, H. (2018). A Ketone Ester Drink Lowers Human Ghrelin and Appetite. Obesity, 26(2), 269–273.

Sumithran, P., Prendergast, L. A., Delbridge, E., Purcell, K., Shulkes, A., Kriketos, A., & Proietto, J. (2013). Ketosis and appetite-mediating nutrients and hormones after weight loss. European Journal of Clinical Nutrition, 67(7), 759–764.

Volek, J., Phinney, S. D., Kossoff, E., Eberstein, J. A., & Moore, J. (2011). The Art and Science of Low Carbohydrate Living: An Expert Guide to Making the Life-Saving Benefits of Carbohydrate Restriction Sustainable and Enjoyable. Beyond Obesity LLC.

1. If a quack is promoting an underused but scientifically legitimate therapy and helping people, then critics have the moral responsibility to promote this intervention themselves, or else they forfeit the right to criticize what the quack says about it. This applies even if the quack’s explanation of the science is wrong.

2. Nobody has the moral responsibility or right to withhold scientific information from the public just because it can be abused by quacks.

3. However, scientists and other public figures have the moral responsibility to denounce the misuse by prominent quacks of the science they have produced.

4. Scientists and other public figures who associate with and support quacks share moral responsibility in the misinformation and harm these quacks cause.

* A note on quacks: A quack is someone who makes a livelihood from exaggerated or false claims about what the science says about health. Quacks can include academics, activists, popular media figures, university press release office workers, etc.; the category is not limited to the traditional figure of the “snake oil salesman”.

A dream

I wake up at 7AM, alarm clock blaring. As I get to my feet, I look around frantically. Just a moment earlier, I had been hunched in a bunker, preparing for rocket launch as the bomb sirens blared.

I had been dreaming.

Blinking, I realize I am in my bedroom. And the rocket sirens? My alarm clock.

My tensed shoulders relax and I exhale.

After going to the bathroom and grabbing coffee, I sit down at my computer, beginning my morning ritual of checking Twitter–and my Oura ring’s sleep tracking data.

Throughout the entire sleep cycle, the Oura ring had been tracking every heartbeat and every hand movement. Because the heart’s activity is modulated by the vagus nerve (so the theory goes), the ring can infer brain activity from heart activity, and thereby, according to company claims, track sleep stages.

A nightmare

As I look down at my Oura ring’s data about my sleep stages on my smart phone, I see a depressingly familiar sight:

  • A sleep-scape pockmarked by white spikes, indicating night-time waking events–I had been awake for nearly 2 hours while I thought I was sleeping
  • 16 minutes of total REM sleep out of more than eight hours in bed “asleep”
  • An impressive, nearly 3 hours of deep sleep. Well that’s nice at least

Great. I have a serious sleep disorder.

Then I look at my overnight heart rate:

Not bad, except for the frequent gigantic spikes. (My wife claims that I often engage in somnolent, heated invective against dream opponents.)

Was that me thrashing about? Awake or asleep? Was I punching the air in the face? I muse.

Then I remember my dream and see that while I was dreaming, the ring recorded me as awake.

Hmm, I think, my brow furrowed.

This calls for PubMed

So I start trawling through PubMed. Here is what I find.

The Oura ring has been compared to polysomnography–the gold standard in sleep staging. While the company boasts that the ring is “scientifically validated” for sleep staging, we should use that term rather loosely. Scientifically validated just means scientifically studied. In fact, it pretty much sucks at sleep staging.

Here is a graph from a “validation” paper (link):

On the X-axis is the gold standard of polysomnography, and on the Y-axis is the deviation from it. N3 is deep sleep, and REM is, well, REM. What we see for N3 deep sleep is a nearly 200-minute range of deviation around the actual gold standard value. And these blue dots are not clustering around 0 on the Y-axis with just a few outliers. No, most of the blue dots are significant outliers.

The same goes for REM sleep. In fact, two of the subjects were scored as getting fully 3 hours less REM sleep than they actually got. If one of those subjects were me, and I received such values on a consistent basis, then my sleep architecture might in reality be dysfunctional for getting too much REM rather than too little.

In other words, without knowing which blue dot I am, I have no idea how good or bad my sleep actually is.
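To make the “which blue dot am I” problem concrete, here is a minimal sketch of the Bland-Altman-style summary that such validation papers report, using made-up deviation values rather than the paper’s actual data: the group gets a mean bias and limits of agreement, but no individual learns where they personally fall within them.

```python
# Bland-Altman-style summary of device-vs-PSG deviations.
# The deviations below are made-up illustration values (minutes of N3 sleep),
# NOT the figures from the validation paper.
import statistics

deviations = [-95, -60, -30, -10, 5, 20, 45, 70, 90, 110]  # device minus PSG, per subject

bias = statistics.mean(deviations)                  # group-level mean bias
sd = statistics.stdev(deviations)                   # between-subject spread
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd   # 95% limits of agreement

print(f"mean bias: {bias:.1f} min")
print(f"95% limits of agreement: {lower:.1f} to {upper:.1f} min")
# The group bias can look modest while individual errors span more than
# 250 minutes -- the group summary cannot tell you which blue dot you are.
```

The point of the sketch: a device can be roughly unbiased on average while still being off by hours for any given wearer.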

From the abstract of the above study, we see rather meager figures:

“From EBE analysis, ŌURA ring had a 96% sensitivity to detect sleep, and agreement of 65%, 51%, and 61%, in detecting “light sleep” (N1), “deep sleep” (N2 + N3), and REM sleep, respectively. Specificity in detecting wake was 48%.”

Specificity in detecting wake was 48%! If this were a medical test, it would never be approved by the FDA.

A specificity of 48% means that, of the epochs in which someone is actually awake, the device correctly scores only 48% as wake–the other 52% of wake time gets scored as sleep.

That is horrible.
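Note that specificity runs in one direction only: it is the chance the ring says “wake” given that you are awake. The chance that you were actually awake given that the ring scored you asleep also depends on how much of the night you truly spent awake. A minimal sketch via Bayes’ rule, where the function name and the awake fractions are my own illustration (only the 96% sensitivity and 48% specificity come from the paper):

```python
# Why 48% wake specificity is not "a 48% chance you were awake when the
# ring says asleep": that latter probability also depends on prevalence,
# i.e. the fraction of the night actually spent awake.

def wake_given_scored_sleep(sensitivity_sleep, specificity_wake, frac_awake):
    """P(truly awake | epoch scored as sleep), via Bayes' rule."""
    frac_asleep = 1.0 - frac_awake
    # Epochs scored "sleep" = true sleep caught + wake epochs missed
    scored_sleep = sensitivity_sleep * frac_asleep + (1.0 - specificity_wake) * frac_awake
    return (1.0 - specificity_wake) * frac_awake / scored_sleep

# Using the paper's figures (96% sleep sensitivity, 48% wake specificity):
p10 = wake_given_scored_sleep(0.96, 0.48, 0.10)  # if 10% of the night was spent awake
p30 = wake_given_scored_sleep(0.96, 0.48, 0.30)  # if 30% of the night was spent awake
print(p10, p30)
```

Either way, more than half of true wake time slips through as “sleep”, which is the real scandal in that 48% figure.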

But is it reliably bad? We don’t even know that.

In a recent interview with Matthew Walker, podcaster Peter Attia asked whether, once establishing a baseline, the Oura ring was reliable at least for predicting changes in sleep. A user of the ring, Peter presumably wanted to be reassured that the ring data had some utility. Without elaborating–and I suspect to assuage Peter’s fears–Dr. Walker responded coolly in the affirmative.

But even this is not known. From my searches, nobody has actually studied how reliable the ring is from night to night versus polysomnography. That is to say, nobody knows whether the biases the ring shows on one night for one user are replicated the following night. Nobody knows whether what it estimates as sleep is any more than a very rough estimate that changes substantially from night to night based on factors irrelevant to sleep.

The bitter truth: all sleep trackers suck

What about compared to my other sleep tracking device: the Garmin Fenix 5S?

2 hours and 57 minutes of REM sleep! Or 11-fold more REM sleep than my Oura ring.

19 minutes deep sleep! Or 8-fold less deep sleep.

8 minutes awake! Or 14-fold fewer minutes awake.
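The fold differences above are just ratios of the two devices’ minute counts. A quick sanity check, treating “nearly 3 hours” of Oura deep sleep and “nearly 2 hours” awake as roughly 160 and 112 minutes (my approximations of my own readings, not precise figures):

```python
# Cross-device disagreement for one night, expressed as fold differences.
# Oura deep-sleep and awake minutes are approximations of "nearly 3 hours"
# and "nearly 2 hours"; the rest are the exact readings quoted above.
oura   = {"rem": 16,  "deep": 160, "awake": 112}
garmin = {"rem": 177, "deep": 19,  "awake": 8}   # 2 h 57 min REM, 19 min deep, 8 min awake

for stage in ("rem", "deep", "awake"):
    hi, lo = max(oura[stage], garmin[stage]), min(oura[stage], garmin[stage])
    print(f"{stage}: {hi / lo:.0f}-fold disagreement")
```

Two consumer devices, one night, one wearer, and order-of-magnitude disagreement on every sleep stage.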

Which one is right? The answer: they both suck. It turns out that many wrist sleep trackers have been validated as well–and they all suck. According to one study, the Fitbit Charge 2 is actually better than the Oura ring. Here are its data:

It still really sucks.

Again, the question isn’t even what the average agreement between these sleep trackers and polysomnography is. The question is WHICH BLUE DOT ARE WE?

Even if these trackers are, say, 60% accurate on average, that doesn’t mean they are going to be accurate 60% of the time for us. They could be much worse for us than average. Or better. How would we know?

We cannot trust a sleep tracker’s data independent of data from a sleep lab. We cannot even trust it to be biased in a consistent manner, because those data do not exist either.

The science is clear: if you want to track your sleep, go to a sleep lab

Sleep trackers have the veneer of science, so we assume the results they report are meaningful. But just because someone has studied a given sleep tracker does not mean that the sleep tracker is reliable. It might instead be (and in the case of the Oura ring is) shown to be terribly unreliable.

The science on HR-based tracking of sleep stages is not weak. In fact, at the current stage of technology, the science shows that these trackers are demonstrably unreliable.

So unless you have access to a sleep lab that you can use for several nights over a period of time, you have zero idea how accurate your sleep tracker is for you. It might be accurate or it might be terribly inaccurate.

Nocebo is a health risk for using the Oura ring

Companies like Oura that offer sleep tracking should be very clear about the serious, if not disqualifying, limitations of their technology. And I now believe that devices with sleep tracking should give users the option to disable the sleep tracking feature.

Given the evidence of demonstrated nocebo from biomarker tracking in multiple scientific studies, the option to disable sleep tracking on these devices would be prudent indeed.

According to one such study, sleep trackers like the Oura ring can exert nocebo effects on our cognition and potentially our health; in that study, the nocebo effect impaired cognition.

According to other studies, simply receiving genetic data causes our body’s physiology to change in the direction of what that genetic data would predict.

But because nocebo can also affect immunity and a diverse range of other physiological processes (for example, see Jo Marchant’s book Cure, or look at the research of Harvard professor Ted Kaptchuk), nocebo from sleep tracking technology has the potential to cause chronic harm to health via stress or immunity.

When we wake up feeling good, look at our sleep tracker data, and see that we have had a terrible night of sleep, we might suddenly stop feeling so good–and the ring data itself might be inducing these effects.

We should demand that Oura give the option to disable sleep tracking

Therefore, given these potential broad-scale negative effects on users confronted with negative (and substantially false) sleep data, Oura should include the option to disable the display of sleep tracking data altogether.

Why do I keep using my Oura ring? Because the nighttime HRV, resting heart rate, and temperature tracking are awesome. The sleep staging? Not so much.

Besides, even if we find out that our sleep is in fact terrible, despite keeping a consistent schedule, etc.–what can we actually do about it? It is questionable to what degree the data–even if they were not fatally flawed–are actionable at all.

Share this post and demand that Oura make sleep staging a feature that can be disabled. For many of us, it should be.

Enjoy this post? Help me smash the wellness industry by supporting me here:

Eternal graduate student,
Someday physician,
Always yours,

I want to make a case for the way science should be done in the health sciences–a way that is totally different from physics–using some of the evidence available on the link between animal protein and cardiovascular disease. I will use this as a particular example of scientific inference as practiced in medicine: an argument that many might find persuasive, but which would be dismissed in the so-called hard sciences. I want to explain why this kind of inference is necessary for medicine–owing to its practical orientation–in a way it is not for those other sciences.

Rabbits’ LDL levels increase with animal protein. Rabbits develop atherosclerosis with animal protein via LDL. Humans’ LDL increases with animal protein (here and here). Humans develop atherosclerosis via LDL.

We might infer therefore that animal protein causes atherosclerosis in humans.

To be clear, I’m not suggesting that animal protein definitely causes atherosclerosis in humans. But if it doesn’t, and if one accepts the lipid hypothesis, one would need to postulate some protective factor from animal protein that could counter its LDL-producing effects.

It follows that the more parsimonious (simpler) explanation is that animal protein is atherogenic in humans, and that is why an association between animal protein and cardiovascular disease is found in many (but not all) epidemiological studies.

This interpretation, as far as I can tell, is the simplest explanation comporting with the evidence. Again, I’m not saying it is true or that the evidence is strong. Clearly, the evidence is weak; direct human evidence from RCTs would be strong. But it is the evidence that we have.

And here’s where the philosophy of science comes in. The health sciences (which include nutrition science) are not like cosmology. In cosmology, conclusions don’t much matter in practice, because nobody has to make decisions based on them. One can remain agnostic and reserve one’s judgment on many issues.

However, in the health sciences, one must make a decision: do I take action X or not-X? What about Y? And Z? In the case of nutrition science, one must eat; thus, while one can be scientifically agnostic, one must come to some practical conclusion, because one cannot choose not to eat.

In such muddy sciences as the health sciences, it is not that we should be scientific idiots and run with any weak evidence to form loaded and unjustified conclusions–though some popular writers look to portray us in exactly this way.

It is that as practical people who live in the real world, we must make decisions.

In medicine, we must make a decision based on incomplete information.

If the question of animal protein were a cosmological question with no practical relevance, I would be coming to a conclusion based on evidence insufficient to justify it. My conclusion would in fact be partly speculative: I would be filling in the gaps in evidence (specifically, an RCT demonstrating an effect of animal protein on cardiovascular disease risk) with logic. In a formal sense, that isn’t science.

However, because this is about life and death and a decision I must make one way or the other, what constitutes good or bad reasoning in this particular domain is entirely different than what constitutes good or bad reasoning in cosmology.

Let us use an example to illustrate the case, and to more clearly illuminate what health science is, compared to a science like cosmology.

What if all politicians based their policy decisions only on RCTs with perfect design, generalizability, power, and so on? No decisions would ever be made.

This is exactly the situation in many areas of health science, medicine, nutrition science, etc.

That is why comparing medicine to physics and decrying the former for not measuring up to the latter is asinine. It totally misses the point of what medicine is about: making practical decisions.

When the point is making practical decisions, evidentiary standards radically shift: they go from a) austere scientific principles to b) making use of whatever is at hand to accomplish the task in the most competent way available.

I’m not the first person to say these things. They are obvious.

As a final note, this does not mean that we should abandon careful scientific principles. On the contrary, the difficulty of coming to good conclusions because strong evidence is so often absent requires us to double down on rigor and try to produce more of it–to ground the decisions we must make in increasingly strong evidence. It is precisely because we often have such little good evidence that we should take good science so seriously.

But given the flaws in the science, how do we deal with it scientifically *now*? That is a philosophical question, and a lot more comes into play than I have addressed here. But I wanted to make a bare-bones case for one point of view.

Also: I’m not saying that the effect of animal protein in rabbits is the basis of my views. Rather, I’m saying that indirect evidence, such as that from animal models, has a greater role to play in forming conclusions in the health sciences than in a science like cosmology or physics. In the latter, such evidence would be frustrating. In health science, it is sometimes necessary!

It’s nothing mind-blowing, but these things sometimes need to be said–or rather, these assumptions about the way we approach science should be articulated, because that is the first step to understanding–of each other and of ourselves–and to discussion at a deeper level.


Help me communicate good science–and how to think about it–by supporting me at

— Kevin

The ENCORE trial, a 4-month lifestyle intervention trial published in 2010, showed reductions in blood pressure similar to those achievable with drugs.

Subjects were assigned to three groups: usual diet (UC), DASH alone (DASH-A), and DASH plus weight management (DASH-WM).

The reductions in blood pressure were 3.4/3.8 mm Hg (systolic/diastolic) for UC, 11.2/7.5 mm Hg for DASH-A, and 16.1/9.9 mm Hg for DASH-WM.

16.1/9.9 mm Hg reduction in blood pressure! That’s a lot.

For reference, the average BP reduction of drugs is 12.5/9.5 mm Hg.

By drug class, that’s:

Alpha1-blockers, 15.5/11.7 mm Hg;
Beta1-blockers, 14.8/12.2 mm Hg;
Calcium channel blockers, 15.3/10.5 mm Hg;
Thiazide diuretics, 15.3/9.8 mm Hg; and
Loop diuretics, 15.8/8.2 mm Hg.
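As a rough sanity check, one can also net out the usual-care change as a crude control adjustment; this is my own back-of-envelope arithmetic, not an analysis reported by the trial:

```python
# Back-of-envelope: ENCORE BP reductions net of the usual-care (UC) arm,
# as a crude control adjustment. Tuples are (systolic, diastolic) in mm Hg.
uc      = (3.4, 3.8)
dash_a  = (11.2, 7.5)
dash_wm = (16.1, 9.9)
drugs   = (12.5, 9.5)   # average drug reduction quoted above, for comparison

adj_a  = (dash_a[0] - uc[0], dash_a[1] - uc[1])    # DASH alone, net of UC
adj_wm = (dash_wm[0] - uc[0], dash_wm[1] - uc[1])  # DASH + weight management, net of UC

print(f"DASH-A net of UC:  {adj_a[0]:.1f}/{adj_a[1]:.1f} mm Hg")
print(f"DASH-WM net of UC: {adj_wm[0]:.1f}/{adj_wm[1]:.1f} mm Hg")
```

Even net of usual care, the systolic reduction from DASH-WM lands in the same range as the average raw drug effect, though the diastolic comparison is less flattering.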


So the impact of this trial’s strongest intervention is approximately the same as that of commonly prescribed blood pressure medications. And because it is a lifestyle intervention rather than a drug, its benefits are likely to extend beyond blood pressure alone.

And the disease prevention potential is likely to be substantial. As the authors write: “Similar BP reductions have been achieved in placebo-controlled treatment trials and have resulted in a lowering of stroke risk by approximately 40% and a reduction in ischemic heart disease events by about 25%.”

But so what? This trial was conducted in a manner quite alien to how most medical practices might try to implement such lifestyle changes:

  1. Subjects underwent 2 weeks of controlled feeding according to their assigned intervention;
  2. During this, they met with a nutritionist twice per week;
  3. They were weighed every other day during this period;
  4. They were given a precise estimate of calorie intake based on sophisticated research models;
  5. Following these first 2 weeks, they were weighed and met in a small group with a nutritionist for 30 to 45 minutes every week for the next 14 weeks, making adjustments in intake to meet study targets;
  6. DASH-WM subjects received an additional weekly cognitive-behavioral weight loss intervention and attended supervised exercise sessions 3 times per week.


That is a lot of behavioral intervention.

Correspondingly, reported adherence was high. In the discussion, the authors write:

“The BP reductions achieved in our DASH-A and DASH-WM interventions were greater than those described in the PREMIER study and in other trials of lifestyle modification. The reasons for the greater benefit from the current ENCORE intervention could be attributed to the greater weight loss and excellent adherence to the DASH diet and exercise sessions.”

In other words, the behavioral intervention accounted for a substantial portion of the effect.

Just how useful, therefore, is this study? Sure, weight loss, physical activity, etc., reduce hypertension. But if a clinical trial’s intervention lies so far outside the scope and practicality of clinical practice, because it relies on such an intensive behavioral intervention, what use are the study’s findings?

What, for instance, is the long-term adherence to this diet in the real world?

Likewise, are there any ways to ensure such long-term adherence in an economical way in the real world?

Many studies, so intent on showing an effect, invest an inordinate amount of resources–far more than could realistically be deployed in regular clinical practice–in ensuring adherence, only to undermine the applicability of the study to real life.

Yes, DASH-WM produces clinically significant reductions in blood pressure. But does this mean that recommending a patient consume DASH and lose weight will produce the same impact?

The answer, according to the overwhelming sum of the existing literature, is a resounding no. We know from the authors themselves that the intensive behavioral support likely caused a substantial proportion of the effect seen. Likewise, many other lifestyle intervention trials show the same thing: behavioral management is the overriding determinant of adherence to, and therefore the success of, lifestyle interventions.

In other words, this trial, while laudable in its health impact, is clinically discordant: it cannot reasonably be applied in a clinical environment without serious protocol modification.

As valuable as this trial is in demonstrating an important, if mechanistically unclear, physiological effect of lifestyle on blood pressure, it would be nice if lifestyle intervention trials were designed with the real-world clinical circumstances of practicing healthcare professionals in mind. Until that happens, such trials are of dubious clinical application and of questionable relevance to guidelines.


Help me communicate good science (and smash the wellness industry) by donating.


I had the idea to do an Ask Kevin series. Whenever I get a question I want to spend some time answering, I will write about it here.

I was asked:

A lot of my psychological examinations of quacks and their followers are based on my own experience as a follower of many quacks. My first foray into the nutrition space was Paleo, and that lasted 10 years. There is a lot of quackery in Paleo circles. I have been up and down conspiracy lane many times, personally, and to great extremes.

I have disbelieved that LDL caused cardiovascular disease. I have hated Monsanto and glyphosate. I have believed the low-fat guidelines caused the obesity epidemic. I have thought we should all return to a pastoral way of life and jettison all industrial agriculture. I have thought that medicine is a profiteering scam, a way for elites to exploit and oppress everyone else.

So when I write about the psychology of quackery, I am often writing about my old self.

Another source of mine is Hannah Arendt’s The Origins of Totalitarianism, in which Arendt examines the rise of Hitler. Parts of Arendt’s analysis apply to any fake news phenomenon or any quackery. The analysis does not apply just to the Nazis. She addresses the popular appeal of lying in perhaps unparalleled depth, and I draw from that in making sense of my own experience and what I continue to see.

That said, I think there is some confusion latent in the question. So I will try to flesh out the psychology of lying among quacks in greater detail here:

When a lie is particularly subversive of something you hate, that can make it attractive and funny. Hearing such a lie is a relief. It’s like a weapon thrust into a perceived enemy or oppressor. This relief, this sense that an oppressive norm has been violated, this is what makes it funny.

Lying in a particularly subversive way can be very funny. This is the sort of motivation behind the “humor” of 4chan or of alt-right troll armies. Or the 2016 Trump campaign. Part of the reason its lies were appealing and sometimes very funny was because they were so subversive. 

These lies do not register as lies to people locked into this way of seeing the world. To them, this is not a contest of facts. This is a contest of power. In a contest of power, the point is to violate the enemy, and the truth is secondary. Or rather, the truth is subsumed to the contest of power. Truth is what wins. Because the enemy is wrong. Whatever can beat this enemy is true.

Resentment or hatred drives this way of seeing the world.

Sometimes the best way to win the power game is to lie. In fact, a brazen lie is exactly what is most likely to win the game: it opens a new hole, a new front. It also distracts. A hundred lies are like a hundred missiles, and they eventually become overwhelming. A multiplication of lies can be appealing to those engaged in The Fight. They might not do it themselves, but they will welcome and support those brazen enough to do it. This is in part how the constantly more extreme forms of health ideologies are getting their footing: they are building off of less extreme versions of themselves, iteratively.

For instance, carnivore from keto from Atkins from Paleo. Robb Wolf was long a supporter of Shawn Baker.

The brazenness of a lie is also what makes it funny. Did he just say THAT? Of course, because it may contain a bit of truth, indirectly, the lie is justified. It isn’t completely false, and it is effective.

Put another way, the enemy is evil; therefore, anything that harms him is right, good, and therefore true. Something that brazenly harms him, an outrageous lie, is all the more relieving and therefore funny (and good). If the enemy is sufficiently evil, then the more lies heaped against him, the better.

Somebody trapped in this mode of thinking sometimes does and sometimes does not have the capacity to detect lies. If they are unintelligent or lack self-awareness, they do not have the capacity. If they are intelligent and sufficiently cynical, they can detect the lies, yet lie anyway, earnest in their lies. They think they are doing the right thing. This is not because they think that lies are good, but because they think that lies against the enemy are good.

So to answer the question, whether the person really believes that their guru is lying is a case-by-case situation. The intelligent person trapped in this way of thinking does it with some self-awareness (and is thus creepy); the unintelligent and unreflective one believes the lies and is mostly or entirely unaware that they are lies.

Most people are not at the extreme ends of this way of thinking, but it does seem to be becoming more common. Such people do not have the capacity to think in terms of “truth”. That is, they cannot criticize their own closely held ideas. Because they cannot self-criticize, their thinking will always take the form of conflict: they will always be trying to undermine their enemies, and in sufficiently extreme cases, even by lying themselves or by endorsing their leaders’ lies.

This way of thinking is of an entirely different character than that of the scientist and is mendacious at its very core. But it is not experienced as bad by the person thinking this way. It is experienced as good. And in many cases, the lies will not be readily detected by the person endorsing them. Truth, as something arrived at through a careful and impartial examination of the facts, is not even a concern. In a way, to such people, such a notion is absurd. They are engaged in a battle of good versus evil. There is no room for such a notion of truth; indeed, such a notion of truth itself seems dishonest.

To understand this way of thinking, one must understand that truth for such people exists along an axis of ideology, not of impartiality. The ideology reigns supreme; it is the unquestioned truth; and those who follow it are its foot-soldiers and servants. Those who ask for critical thinking are the real liars, trying to conceal a “truth” that is easy for everyone who is not blind (or lying) to see.

Two exciting studies with relevance to the currently popular high-meat, low-carbohydrate diets have been published in the past month.

One, conducted by Kevin Hall and Juen Guo at the NIH and co-authored by researchers at Columbia, Pennington, and Florida, looked at the changes in glucose and lipid markers important for cardiometabolic disease risk after switching to a ketogenic diet.

The other, led by Ronald Krauss’s group in Oakland, looked at the effects of red vs. white vs. non-meat sources of protein on important lipid markers of cardiovascular disease risk.

For short, we will call the first study “the keto study” and the second by the name preferred by the investigators: “Animal and Plant Protein and Cardiovascular Health”, or APPROACH.

What did these studies find? The keto study, predictably, found an increase in LDL cholesterol levels:

The effect of saturated fat on LDL-C levels is well-established, both by controlled feeding trials (see this meta-analysis of 84 trials) and long-term clinical trials. For the latter, there are at least three relevant trials, whose relevant findings will be very briefly summarized below.

Here are the results from the Los Angeles Veterans Trial when saturated fat was replaced by linoleic acid (a polyunsaturated fatty acid):

Here are the results from a large 1968 trial published in The Lancet, this time replacing saturated fat with soybean oil:

And this one from the Oslo Diet Heart study with a similar design:

All said, it is important to note that many factors affect the response of each person to saturated fat, as noted in a 2010 review by Krauss and colleagues:

But what many people may not have expected were the effects of the ketogenic diet on CRP, an important inflammatory mediator that closely correlates with the progression of cardiovascular disease. Here is an illustration of this point from a review on cardiovascular risk factors:

And here are the unexpected results from the keto study (second-to-last row):

If we average weeks 3 and 4, we have 1.27 for the baseline diet (“BD” in the above table) and 1.63 for the ketogenic diet, a nearly 30% increase in just 3-4 weeks.
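As a quick check of that arithmetic (a minimal sketch; the two values are simply the week 3 and week 4 averages quoted above):

```python
# Averaged week-3/week-4 CRP values (mg/L), as quoted above from the study table
crp_baseline = 1.27  # baseline diet ("BD")
crp_keto = 1.63      # ketogenic diet

pct_increase = (crp_keto - crp_baseline) / crp_baseline * 100
print(f"CRP increase on the ketogenic diet: {pct_increase:.0f}%")  # ~28%, i.e. nearly 30%
```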

To make sense of these results, it is worth noting some details about the study design. Seventeen overweight or obese men were assigned to consume a weight-maintaining baseline diet (15% protein, 50% carbohydrate, 35% fat) on a metabolic ward (which has the subjects living for the entire study duration at the study site, receiving all food from staff and having weight and other biomarkers checked periodically). Once the subjects were shown to be weight-stable (within 2-3 weeks), they spent 4 weeks consuming this baseline diet, followed by 4 weeks consuming the ketogenic diet.

Since the study was published, it has been suggested that the changes in CRP on the ketogenic diet may have resulted from the order of the diets, since the study was not randomized and each participant consumed the ketogenic diet only after consuming the baseline diet. However, this study’s findings with respect to CRP are consistent with other studies, including the 2012 low-carbohydrate diet study by low-carbohydrate diet proponent and Harvard professor David Ludwig (“CRP tended to be higher with the very low-carbohydrate diet (median [95% CI], 0.78 [0.38-1.92] mg/L for low-fat diet; 0.76 [0.50-2.20] mg/L for low–glycemic index diet; and 0.87 [0.57-2.69] mg/L for very low-carbohydrate diet; P for trend by glycemic load = .05)”) and a 2007 study by Rankin and Turpyn (“Although LC lost more weight (3.8 +/- 1.2 kg LC vs. 2.6 +/- 1.7 HC, p=0.04), CRP increased 25%; this factor was reduced 43% in HC (p=0.02)”).

It is reasonable therefore to regard the effect of the ketogenic diet on CRP as potentially real. This is important because, although professional cardiovascular disease researchers believe both lipids and inflammation (among other factors) are important for cardiovascular disease, inflammation is frequently proposed as the sole cause of cardiovascular disease by many non-scientist advocates of low-carbohydrate diets. Yet, if both harmful lipids and major inflammatory markers like CRP are increased on a low-carbohydrate diet, then this poses a problem for the claim that low-carbohydrate dieting is innocuous for cardiovascular disease risk.

To be clear, for many people, most markers of cardiovascular disease risk (including LDL cholesterol and CRP) will decrease on a low-carbohydrate diet if weight loss is robust enough (see, e.g., here and here). However, what the studies above suggest is that if weight can be maintained at these lower levels after switching to a higher-carbohydrate diet, these lipid and inflammatory markers will, on average, improve still further. A key unresolved question, for those who prefer long-term weight maintenance on a low-carbohydrate diet, is whether the ability to maintain such weight loss sufficiently compensates for the increases in these cardiovascular disease markers. I suspect that it does, and thus, for those who can only maintain their weight loss on a low-carbohydrate diet, the tradeoff may on balance be worthwhile. But this remains to be seen. (Also, this may depend on the extent of the weight loss: more long-term weight loss would more likely justify the tradeoff.)

A similar calculation might be at play for the weight-stable management of type 2 diabetes with a low-carbohydrate diet. Might the gains from better glycemic regulation be sufficient to offset the losses to lipids and CRP? The study authors, fortunately, put this concern to rest, noting that for subjects with type 2 diabetes, no such increase in CRP has been observed:

That inflammatory biomarker increases are also not seen among diabetics might suggest that the reduction in blood glucose is sufficiently anti-inflammatory so as to obviate any pro-inflammatory downside of the ketogenic diet.

This brings us to APPROACH, the second of the two studies that is the subject of this article. The findings of APPROACH:

  1. Meat versus non-meat protein increased LDL cholesterol;
  2. There was no difference in LDL response between white and red meat;
  3. Saturated fat increased LDL cholesterol;
  4. These effects were all independent of each other: meat increased LDL cholesterol independent of the saturated fat content, and vice versa.

The design of the study:

“Generally healthy men and women, 21–65 y, body mass index 20–35 kg/m2, were randomly assigned to 1 of 2 parallel arms (high or low SFA) and within each, allocated to red meat, white meat, and nonmeat protein diets consumed for 4 wk each in random order. The primary outcomes were LDL cholesterol, apolipoproteinB (apoB), small+medium LDL particles, and total/high-density lipoprotein cholesterol.”

The authors visualized the design as follows:

The diets were as follows:

Here is a sample menu:

The study was an outpatient study, and participants picked up the food weekly and were weighed at that time:

And here are the study’s main findings:

And here:

Just eyeballing the LDL findings from Table 3 (the first of the two tables immediately above), we see:

High-SFA vs. low-SFA red meat: 2.64mM vs. 2.35mM—a 12% increase
High-SFA vs. low-SFA white meat: 2.61mM vs. 2.38mM—a 10% increase
High-SFA vs. low-SFA nonmeat: 2.46mM vs. 2.22mM—an 11% increase

Meat (averaged) vs. nonmeat, high-SFA: 2.63mM vs. 2.46mM—a 7% increase
Meat (averaged) vs. nonmeat, low-SFA: 2.37mM vs. 2.22mM—a 7% increase

High-SFA meat (averaged) vs. low-SFA nonmeat: 2.63mM vs. 2.22mM—an 18% increase
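The comparisons above can be reproduced with a short sketch. The mean LDL-C values are those read off Table 3; the dictionary layout and helper function are mine:

```python
# Mean LDL-C (mmol/L) by protein source and saturated fat level, read from Table 3
ldl = {
    ("red meat", "high-SFA"): 2.64,   ("red meat", "low-SFA"): 2.35,
    ("white meat", "high-SFA"): 2.61, ("white meat", "low-SFA"): 2.38,
    ("nonmeat", "high-SFA"): 2.46,    ("nonmeat", "low-SFA"): 2.22,
}

def pct_increase(a, b):
    """Percent by which a exceeds b."""
    return (a / b - 1) * 100

for protein in ("red meat", "white meat", "nonmeat"):
    diff = pct_increase(ldl[(protein, "high-SFA")], ldl[(protein, "low-SFA")])
    print(f"High- vs. low-SFA {protein}: {diff:.0f}% increase")

# Averaged meat vs. the low-SFA nonmeat diet (the extreme comparison)
meat_high = (ldl[("red meat", "high-SFA")] + ldl[("white meat", "high-SFA")]) / 2
print(f"High-SFA meat vs. low-SFA nonmeat: "
      f"{pct_increase(meat_high, ldl[('nonmeat', 'low-SFA')]):.0f}% increase")
```

Running this yields the same 12%, 10%, 11%, and 18% figures quoted above.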

In other words, according to these data and assuming that they are generalizable to the population as a whole…

An average high-saturated fat, high-meat dieter would be expected to have an 18% higher LDL than a low-saturated fat, low-meat eater.

To put this in perspective, loss-of-function PCSK9 mutations (which reduce the ability of PCSK9 to degrade the LDL receptor, thereby increasing uptake of LDL) are associated with a 28% reduction in lifetime LDL cholesterol and an 88% reduction in the risk of cardiovascular disease.

Can we estimate the precise reduction in coronary heart disease risk resulting from a lifetime of such LDL reduction? After all, lifetime exposure to LDL, cumulatively and in an area-under-the-curve fashion, is what causes cardiovascular disease. This is beautifully demonstrated in the following figure, which shows how lifetime, genetic reduction in LDL produces a steeper decline in risk than a reduction observed in cohort studies with a median follow-up of 12 years, which in turn produces a steeper decline in risk than that observed in randomized clinical trials with a median follow-up of 5 years:

We can estimate such lifetime risk by looking more closely at a study similar to the one that provided the blue line in the figure immediately above.

In short, the reduction in risk from lifelong LDL reduction (from birth) has been estimated by looking at genetic mutations that produce a lower LDL-C level, and then looking at the risk of coronary heart disease for people with each mutation. These were then plotted to produce a graph from which the effect of lifelong LDL reduction on risk of death from CHD can be estimated.

Since the figures given in the APPROACH paper are in millimoles per liter (mmol/L), we need to convert to mg/dL to use the above graph to make estimates. That is done by multiplying by the conversion factor 38.67. We can therefore recalculate the comparisons between diets in mg/dL, with risk reduction estimates as follows:

High-SFA vs. low-SFA red meat: 102.1mg/dL vs. 90.9mg/dL—a 20% decrease in CHD risk
High-SFA vs. low-SFA white meat: 100.9mg/dL vs. 92.0mg/dL—a 15% decrease in CHD risk
High-SFA vs. low-SFA nonmeat: 95.1mg/dL vs. 85.8mg/dL—a 17% decrease in CHD risk

Meat (averaged) vs. nonmeat, high-SFA: 101.7mg/dL vs. 95.1mg/dL—an 11% decrease in CHD risk
Meat (averaged) vs. nonmeat, low-SFA: 91.6mg/dL vs. 85.8mg/dL—an 11% decrease in CHD risk

High-SFA meat (averaged) vs. low-SFA nonmeat: 101.7mg/dL vs. 85.8mg/dL—a 29% decrease in CHD risk

(Note: I do not have access to the formula used to generate this graph, so I eyeballed the graph to provide the above estimates. This eyeballing is within a few percent of a formula-calculated estimate, and therefore adequate for illustration purposes.)
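The unit conversion itself is straightforward to verify; a minimal sketch, using the six Table 3 means from above:

```python
MMOL_TO_MGDL = 38.67  # conversion factor for cholesterol: mmol/L -> mg/dL

def to_mgdl(mmol_per_liter):
    """Convert a cholesterol concentration from mmol/L to mg/dL."""
    return mmol_per_liter * MMOL_TO_MGDL

# The six Table 3 mean LDL-C values used in the comparisons above
for mmol in (2.64, 2.61, 2.46, 2.35, 2.38, 2.22):
    print(f"{mmol:.2f} mmol/L = {to_mgdl(mmol):.1f} mg/dL")
```

(The risk reduction percentages themselves still come from eyeballing the graph, as noted; only the conversions are computed here.)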

This is a huge reduction in risk. Given that 360,000 Americans died of coronary heart disease in 2016, a 29% decrease in risk would save the lives of 104,400 people each year, or more than 1 million Americans per decade. That is 34 September 11s per year. Such a reduction in LDL-C would reduce the rate of physical disability by a similar magnitude. The magnitude of this benefit, if applied universally across the population, would therefore be several-fold greater than universally prescribed statin therapy for all patients with dyslipidemia.
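The back-of-envelope arithmetic behind these figures can be sketched as follows (the death count and risk reduction are the ones quoted above):

```python
chd_deaths_per_year = 360_000  # US coronary heart disease deaths, 2016
risk_reduction = 0.29          # estimated lifetime CHD risk reduction from above

lives_saved_per_year = chd_deaths_per_year * risk_reduction
print(f"Lives saved per year:   {lives_saved_per_year:,.0f}")     # 104,400
print(f"Lives saved per decade: {lives_saved_per_year * 10:,.0f}")  # 1,044,000 (> 1 million)
```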

This does not take into account the impact of elevated LDL on other conditions, such as stroke and other cerebrovascular disease, which would likely increase the magnitude of these rough estimates considerably.

One objection to this kind of thinking is that such a dietary intervention may have pleiotropic effects: it may impact health in other ways than coronary heart disease. I am eager for readers to share well-designed studies that demonstrate such effects.

Another objection to this analysis is that the American diet is not likely to be precisely the same as that consumed in the high-SFA, high-meat group, thereby making the benefits less substantial than those I have presented. While this is true, even if the benefits are only half of what I have mentioned, they would be a major achievement of public health.

Another objection is that the benefits of a lifetime dietary approach achieving such LDL reductions overstate what would be achieved clinically, since as mentioned, lifetime exposure to LDL has cumulative effects, and dietary approaches adopted in, say, middle age are likely therefore to produce less benefit. This is true, but my goal was to demonstrate what a lifelong, public health oriented transformation could achieve. If we wanted to calculate what could be achieved clinically, we should look at the first of the two graphs, included here a second time for convenience:

In this case, the magnitude of effect on LDL of the difference between high-SFA, high-meat and low-SFA, low-meat is only about 0.4mmol/L (remember: 2.63mM vs. 2.22mM), which along the red line translates into a modest ~9% reduction. This is perhaps one of the main reasons why clinical trials of LDL lowering via diet have shown such modest and uncertain effects: changing the diet later in life simply will not produce benefits as substantial as optimizing the diet from birth.

The authors note that the selective increase in large LDL, and not in medium or small LDL, implies that the changes may be less atherogenic than might be expected from the total LDL cholesterol changes. They write:

This claim is controversial, as the authors themselves acknowledge. Lipid experts indeed seem to be getting increasingly heated on the subject online:

Others leap into the fray:

An earlier review in the Journal of the American College of Cardiology agrees, writing that “Cholesterol, largely transported through the body as LDL-C, has clearly been established as a causal agent in atherosclerosis over many decades of extensive research. Regardless of size, LDL particles are atherogenic.” Kjellmo’s paper, linked in the tweet thread above, concurs.

Probably hedging because of the current lack of clarity in the literature, Krauss and colleagues conclude their paper:

It is important to note, however, that medium LDL certainly increased on the meat diet, and that there was a nonsignificant increase in small LDL. Would a higher number of subjects have produced more clarity on this question?

In any case, what is the cause of the higher LDL levels on the meat vs. nonmeat diets? One might suspect, because large LDL was the fraction raised the most and because dietary cholesterol predominantly raises the large LDL fraction, that the dietary cholesterol found in the meat was the culprit. However, Krauss et al. are quick to point out that this probably isn’t quite right:

One guess is that the dietary fiber in the matrix of the plant proteins (such as peas) might have contributed to the effects.

It is worth noting that fiber was nearly equal across all diets, suggesting that adding fiber to the diet in the form of higher fruit and vegetable intake may not be sufficient to counteract the lipid-raising effects of animal protein.

According to these studies, low-carbohydrate diets high in meat and saturated fat are likely, on average, to raise both inflammatory and lipid biomarkers in healthy people. If avoiding atherosclerosis is the goal, then minimizing animal-sourced protein while maximizing other high-nutrient foods is likely to be best practice according to the totality of current evidence. There may be plausible arguments in favor of low-carbohydrate diets if they help to maintain a degree of weight loss sufficient to mitigate these inflammatory or lipid effects. Likewise, among persons with type 1 or type 2 diabetes, the euglycemic effects of low-carbohydrate diets may outweigh any negatives. However, all else equal, it may be appropriate to exercise caution about committing to a low-carbohydrate diet if weight and glucose can be optimally maintained with other dietary strategies.

If you have enjoyed this post or my other work, please consider donating or becoming a patron. Your support enables me to continue spending my time doing this.


The first two parts of the macronutrient trend series, which explores the association between changes in carbohydrates and fat in the diet and the obesity epidemic in America, are here (Part 1) and here (Part 2).

In the first post of this series, I reposted a response of mine, involving a wealth of data from the USDA food balance dataset, to Dr. Ludwig’s claim that the low-fat diet caused the obesity epidemic. In the second post, I reposted Dr. Ludwig’s response. This third post in the series is the final reply to Dr. Ludwig. Still deeper analyses will follow these first three posts.

Without further ado.

(Originally posted January 14, 2018 on Medium: Copied in full, with added figures in the text.)

Thank you for the thoughtful response Dr. Ludwig.

I agree that food availability data are problematic, for some of the reasons that I discussed and you pointed out as well. So I also included self-reported intake data, which I think told a similar story. Still, I agree this data, too, is flawed. My hope was that by using both data-sets, despite their limitations, at the least we can say what the data say. Obviously, reality could be different from the data. And if it is, we all want to know that and understand why that should be.

This means that the picture I have tried to draw could be qualified or even refuted if there was another data-set or interpretation and a compelling justification for why that data-set or interpretation is better. However, I am not persuaded that the data-set that has been offered as an alternative is better.


  1. Stephen and Wald use self-reported data; thus, in principle, their data-set has the same flaws as the NHANES data. Why should we use it over NHANES?
  2. Stephen and Wald’s data-set extends from 1920 to 1984, covering only 4 years after the Guidelines were published.
  3. S&W’s data-set includes studies using heterogeneous assessment methods. This could be a weakness if each method had particular biases yet different methods were used more or less frequently over time.
  4. More importantly, I am not sure that the S&W data-set support the “low-fat” argument. First, the decline in fat intake seemed to have begun before the dietary guidelines were released. And second, fat intake was projected at around 31% in 1920, yet the 1920 obesity rate was dramatically lower than in 2000, when the fat intake was higher, around 33%. See:

  5. The CDC data *are* the NHANES data. They show that % fat fell from ~37% to ~33% from 1980 to 2000. As we both must agree, this is around a 4% change. Again, as we also both agree, this decrease in % corresponded to ZERO change in absolute intake: the % decrease happened because carb intake increased, not because fat intake fell. Total grams of fat intake stayed constant from 1980 to 2000 according to the NHANES data. [Editorial: this point will be addressed in much greater depth in part 4. – KB]

  6. Finally, I do not think that using USDA survey data for the first data point in 1960 and combining it with the NHANES data for 2000 represents best statistical practice. Comparisons between years should be made within data-sets, using the same methodology. If this is done, I believe the best estimate, barring a new analysis from a different data-set covering the years in question (1980 to 2010), is that fat intake changed by about 4%. Again, this is consistent between USDA and NHANES, using within-data-set comparisons.

Moving on, we agree that in, e.g., the 1980 DGA, the brochure recommended increasing carbs. But ONLY if one limited fats. Moreover, it advised eating complex carbohydrates, not refined ones, and advised against sugars. See screencap:

And original source:

A diet rich in complex, unrefined carbohydrates is compatible with good health, as e.g. some of the macrobiotic diet trials for T2DM have recently shown (see my recent Twitter feed). But Americans took the message “carbohydrates,” and ignored the “complex” part.

The question should also be asked: Is it possible that Americans ate more refined carbohydrates for reasons quite apart from their (mis)understanding of the guidelines? I cannot answer this question, but I think it is a legitimate one.

Last, there is no disagreement that, in the context of the standard American diet, overconsuming refined carbohydrates is harmful to health, and that this is what Americans are doing.

We also agree that present low-fat recommendations should continue to be a subject of debate. Clearly we need to debate every aspect of the dietary guidelines. Again, my claim is only an historical one: that it is difficult to interpret the dietary trend data as showing a decrease in fat intake, and thus that it is difficult to blame the guidelines’ previous low-fat recommendations for the obesity epidemic.

I remain open (and eager) to revise my positions. If we want to get policy right, clearly the facts need to be right too. I believe that doing this and making a positive impact on others’ lives is what both of us are primarily interested in achieving.

I thank you for the opportunity to have discussed this issue with a researcher of such a high level of scientific achievement and important cultural legacy. It is an honor.


As an MD/PhD student, my passion is for communicating the cutting edge of medical science and fighting misinformation. If this post is of use to you, please consider donating to my Patreon account. Your contribution will make a significant positive impact, and I will be greatly personally appreciative.

You can sign up as a patron at my page, here.

You can also find me on Twitter at @kevinnbass.


Other parts in this series:

Part 1

Part 2

There is a narrative online that Americans have become obese because they have too closely followed the Dietary Guidelines. I looked at this suggestion in depth in my post here and found it to be lacking. This post will have a narrower focus and look specifically at refined grains and sugar.

So did the Dietary Guidelines cause Americans to eat more refined grains and sugar? Let’s look at the facts.

Americans currently eat about a quarter of the whole grains recommended by the current Dietary Guidelines for Americans, and about double the recommended refined grains.


If Americans eat too many refined grains, can it be the Guidelines’ fault?

Indeed, a 2010 study showed that only 1% of Americans eat the quantity of grains recommended by the dietary guidelines. One percent! Meanwhile, as we can see above, the average American far exceeds recommended intake of refined grains.

Here is a screenshot of the 2015-2020 Dietary Guidelines for Americans showing that it recommends at least half of grains consumed to be unrefined:


Some claim that only the current Guidelines include the recommendation to consume unrefined grains, and that the early Guidelines didn’t. This is false. The Dietary Guidelines for Americans from 1980 repeatedly emphasized eating unrefined carbohydrates:


Indeed, the entire 4th guideline was dedicated to recommending unrefined carbohydrate foods. Since there were only 7 guidelines, that means 14% of the 1980 DGA was dedicated to telling consumers to eat unrefined carbohydrates.

Here is the second and final page of the 4th guideline again:


Its summary statement says: “Select foods which are good sources of fiber and starch, such as whole grain breads and cereals, fruits and vegetables, beans, peas, and nuts.”

Does it say to eat refined grains and sugar? Does it say to eat donuts and pizza? Does it say to eat ice cream and cupcakes? No. It says to eat foods that Americans still don’t eat very much of. Many people do not eat any.

On the topic of sugars, do the Dietary Guidelines tell people to eat a bunch of sugar to “replace fat”–as is often claimed? No. The Guidelines tell people to limit sugar. In fact, #5 of the 7 guidelines is dedicated to this! Here is its summary statement:


According to USDA food availability data, Americans today eat more refined grains AND sugar than they did in 1970. Americans have not followed the dietary guidelines. Americans have largely ignored them.


There is a large and thriving low-carbohydrate diet industry worth hundreds of millions of dollars. One of its major figureheads is @DietDoctor1, who spreads this kind of misinformation on his website, in a way strikingly similar to many anti-vax websites:

This is not a small website! It claims to have over half a million subscribers and runs a lucrative membership-only part of the website.

On this website, many of the above myths are propagated widely. For example, here is @bigfatsurprise‘s presentation on the topic:

In this video, @bigfatsurprise propagates many of the myths that I have debunked in this thread, in large part by slickly presenting data that superficially supports her point of view and excluding data that does not. Because there is so much data, it is very easy to do this.

Spreading these myths is highly lucrative to these authors and doctors. The video I posted is only a preview. The full version is only available to those who have purchased memberships.

I am not sure what to do about this, but I am working on it with others. It is important to be vocal and assertive. These people are exploiting and generating confusion about nutrition to make millions. People deserve better than this.

(For more information about the carbohydrate and fat trends and their relationship to the dietary guidelines, please refer to the first half of my post The data overwhelmingly indicate that Americans do not follow the Dietary Guidelines.)

