A dream

I wake up at 7AM, alarm clock blaring. As I get to my feet, I look around frantically. Just a moment earlier, I had been hunched in a bunker, preparing for a rocket launch as the bomb sirens blared.

I had been dreaming.

Blinking, I realize I am in my bedroom. And the rocket sirens? My alarm clock.

My tensed shoulders relax and I exhale.

After going to the bathroom and grabbing coffee, I sit down at my computer, beginning my morning ritual of checking Twitter–and my Oura ring’s sleep tracking data.

Throughout the entire sleep cycle, the Oura ring had been tracking every heartbeat and every hand movement. Because the heart’s activity is modulated by the vagus nerve, so the theory goes, the ring can infer brain activity from heart activity–and thereby, according to the company’s claims, sleep stages.

A nightmare

As I look down at my Oura ring’s data about my sleep stages on my smart phone, I see a depressingly familiar sight:

  • A sleep-scape pockmarked by white spikes indicating night-time waking events: nearly 2 hours awake during what I had thought was sleep
  • 16 minutes of total REM sleep out of more than eight hours in bed “asleep”
  • An impressive nearly 3 hours of deep sleep. Well, that’s nice at least

Great. I have a serious sleep disorder.

Then I look at my overnight heart rate:

Not bad, except for the frequent gigantic spikes. (My wife claims that I often engage in somnolent, heated invective against dream opponents.)

Was that me thrashing about? Awake or asleep? Was I punching the air in the face? I muse.

Then I remember my dream and see that while I was dreaming, the ring recorded me as awake.

Hmm, I think, my brow furrowed.

This calls for PubMed

So I start trawling through PubMed. Here is what I find.

The Oura ring has been compared to polysomnography–the gold standard in sleep staging. While the company boasts that the ring is “scientifically validated” for sleep staging, that term should be taken rather loosely: scientifically validated just means scientifically studied. The ring actually pretty much sucks at sleep staging.

Here is a graph from a “validation” paper (link):

On the X-axis is the gold standard of polysomnography, and on the Y-axis is the deviation from it. N3 is deep sleep, and REM is, well, REM. For N3 deep sleep, we see a nearly 200-minute range of deviation around the actual gold-standard value. And these blue dots are not clustering around zero on the Y-axis with just a few outliers. No, most of the blue dots are significant outliers.

The same goes for REM sleep. In fact, the ring credited two of the subjects with literally 3 hours less REM sleep than they actually got. If one of those subjects was me, and I was receiving such values on a consistent basis, then my sleep architecture might in reality be dysfunctional for getting too much REM rather than too little.

In other words, without knowing which blue dot I am, I have no idea how good or bad my sleep actually is.

From the abstract of the above study, we see rather meager figures:

“From EBE analysis, ŌURA ring had a 96% sensitivity to detect sleep, and agreement of 65%, 51%, and 61%, in detecting “light sleep” (N1), “deep sleep” (N2 + N3), and REM sleep, respectively. Specificity in detecting wake was 48%.”

Specificity in detecting wake was 48%! If this were a medical test, it would never be approved by the FDA.

A specificity of 48% for wake detection means that when someone is actually awake, the device correctly registers wake only 48% of the time; the other 52% of truly awake epochs get labeled as sleep.
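To make these epoch-by-epoch (EBE) numbers concrete, here is a minimal sketch of how sensitivity and specificity are computed, using invented epoch labels (not real Oura data):

```python
# Epoch-by-epoch sleep/wake scoring, illustration only.
# 1 = asleep, 0 = awake. The labels below are hypothetical.
truth  = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]  # polysomnography (gold standard)
device = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]  # wearable's guesses

# Sensitivity to detect sleep: of the epochs the person was truly asleep,
# what fraction did the device call sleep?
true_sleep = [d for t, d in zip(truth, device) if t == 1]
sensitivity = sum(true_sleep) / len(true_sleep)

# Specificity (wake detection): of the epochs the person was truly awake,
# what fraction did the device call wake?
true_wake = [d for t, d in zip(truth, device) if t == 0]
specificity = true_wake.count(0) / len(true_wake)

print(sensitivity)  # 1.0: every sleep epoch detected
print(specificity)  # 0.5: half the true wake epochs mislabeled as sleep
```

A device can thus look impressive on sleep sensitivity (easy, since most of the night is sleep) while missing a large share of the wake it is supposed to catch.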

That is horrible.

But is it reliably bad? We don’t even know that.

In a recent interview with Matthew Walker, podcaster Peter Attia asked whether, once a baseline is established, the Oura ring is reliable at least for predicting changes in sleep. A user of the ring himself, Peter presumably wanted to be reassured that the ring data had some utility. Without elaborating–and, I suspect, to assuage Peter’s fears–Dr. Walker responded coolly in the affirmative.

But even this is not known. From my searches, nobody has ever actually scientifically studied how reliable the ring is from night to night versus polysomnography. That is to say, nobody knows whether the biases the ring shows on one night for one user are replicated the following night. Nobody knows whether what it is estimating as sleep is any more than a very rough estimate that changes substantially from night to night based on factors irrelevant to sleep.

The bitter truth: all sleep trackers suck

What about compared to my other sleep tracking device: the Garmin Fenix 5S?

2 hours and 57 minutes of REM sleep! Or 11-fold more REM sleep than my Oura ring.

19 minutes deep sleep! Or 8-fold less deep sleep.

8 minutes awake! Or 14-fold fewer minutes awake.

Which one is right? The answer: they both suck. Because it turns out that many wrist sleep trackers have been validated as well. And they all suck. According to one study, the Fitbit Charge 2 is actually better than the Oura ring. Here are its data:

It still really sucks.

Again, the question isn’t even what the average agreement between these sleep trackers and polysomnography is. The question is WHICH BLUE DOT ARE WE?

Even if these trackers are, say, 60% accurate on average, that doesn’t mean they are going to be accurate 60% of the time for us. They could be much worse than average for us. Or better. How would we know?
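To illustrate the point, a toy simulation with invented numbers (not from any validation study): a tracker whose average agreement is about 60% can still vary widely from user to user, and the average alone tells you nothing about where you fall.

```python
import random

random.seed(0)

# Hypothetical population: each user has their own device-agreement rate,
# drawn around a 60% mean with substantial person-to-person spread.
user_accuracies = [random.gauss(0.60, 0.15) for _ in range(1000)]
user_accuracies = [min(max(a, 0.0), 1.0) for a in user_accuracies]  # clamp to [0, 1]

mean_acc = sum(user_accuracies) / len(user_accuracies)
worst = min(user_accuracies)
best = max(user_accuracies)

# The headline average hides the individual extremes.
print(f"mean agreement across users: {mean_acc:.2f}")
print(f"spread across users: {worst:.2f} to {best:.2f}")
```

The published validation figures are population averages; without per-user, night-to-night data against polysomnography, your own dot on that spread is unknowable.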

We cannot trust the sleep tracker’s data independent of data from a sleep lab. We cannot even trust it to be biased in a consistent manner. Because those data do not exist either.

The science is clear: if you want to track your sleep, go to a sleep lab

Sleep trackers have the veneer of science. Thus we think the results they report are meaningful. But just because someone has studied a given sleep tracker does not mean that the sleep tracker is reliable. It might be (and in the case of the Oura ring is) shown to be terribly unreliable.

The science of HR-based tracking of sleep phases is not merely weak. At the current stage of technology, the science shows that these trackers are demonstrably unreliable.

So unless you have access to a sleep lab that you can use for several nights over a period of time, you have zero idea how accurate your sleep tracker is for you. It might be accurate or it might be terribly inaccurate.

Nocebo is a health risk of using the Oura ring

Companies like Oura that offer sleep tracking should be very clear about the serious, if not disqualifying, limitations of their technology. And I now believe that devices with sleep tracking should give users the option to disable the sleep tracking feature.

Given the evidence of demonstrated nocebo from biomarker tracking in multiple scientific studies, the option to disable sleep tracking on these devices would be prudent indeed.

According to the above study, sleep trackers like the Oura ring can exert nocebo effects on our cognition and potentially our health; in that study’s case, the nocebo affected cognition.

According to other studies, simply receiving genetic data causes our body’s physiology to change in the direction of what that genetic data would predict.

But because nocebo can also affect immunity and a diverse range of other physiological processes (for example, see Jo Marchant’s book Cure or take a look at the research of Harvard professor Ted Kaptchuk), nocebo from sleep tracking technology has the potential to cause chronic harm to stress physiology or immunity.

When we wake up feeling good, look at our sleep tracker data and see we have had a terrible night of sleep, we might suddenly start not feeling so good–and the ring data itself might be inducing these effects.

We should demand that Oura give the option to disable sleep tracking

Therefore, given the potential for broad-scale negative effects among users exposed to negative (and substantially false) sleep data, Oura should include the option to disable the display of sleep tracking data altogether.

Why do I keep using my Oura ring? Because nighttime HRV, resting heart rate, and temperature are awesome. But the sleep staging? Not so much.

Besides, even if we find out our sleep is in fact actually terrible, despite keeping a consistent schedule, etc.–what can we actually do about it? It is questionable to what degree the data–even if they were not fatally flawed–are even actionable.

Share this post and demand that Oura make sleep staging a feature that can be disabled. For many of us, it should be.

Enjoy this post? Help me smash the wellness industry by supporting me here: https://nutritionalrevolution.org/support-me/

Eternal graduate student,
Someday physician,
Always yours,

I want to make a case for the way science should be done in the health sciences–a way totally different from physics–using some of the evidence available on the link between animal protein and cardiovascular disease. This is a particular example of the kind of scientific inference used in medicine: one that many might find persuasive, but which would be dismissed in other, so-called hard sciences. I want to explain why this kind of inference is necessary for medicine–owing to its practical orientation–as compared to such other sciences.

Rabbits’ LDL levels increase with animal protein. Rabbits fed animal protein get atherosclerosis via LDL. Humans’ LDL increases with animal protein (here and here). Humans get atherosclerosis via LDL.

We might infer therefore that animal protein causes atherosclerosis in humans.

To be clear, I’m not suggesting that animal protein definitely causes atherosclerosis in humans. But if it doesn’t, and if one accepts the lipid hypothesis, one would need to postulate some protective factor from animal protein that could counter its LDL-producing effects.

It follows that the more parsimonious (simpler) explanation is that animal protein is atherogenic in humans, and that is why an association between animal protein and cardiovascular disease is found in many (but not all) epidemiological studies.

This interpretation, as far as I can tell, is the simplest explanation comporting with the evidence. Again, I’m not saying it is true or that the evidence is strong. Clearly, the evidence is weak; direct human evidence from an RCT would be strong. But it is the evidence that we have.

And here’s where the philosophy of science comes in. The health sciences (which includes nutrition science) are not like cosmology. In cosmology, conclusions don’t much matter, because nobody has to make decisions based on these conclusions. One can be agnostic and reserve one’s judgment on many issues.

However, in the health sciences, one must make a decision: do I take X action or not-X? What about Y? And Z? In the case of nutrition science, one must eat and thus while one can be scientifically agnostic, one must come to some practical conclusion. Because one cannot choose not to eat.

In such muddy sciences as the health sciences, it is not that we should be scientific idiots and run with any weak evidence to form loaded and unjustified conclusions–though some popular writers look to portray us in exactly this way.

It is that as practical people who live in the real world, we must make decisions.

In medicine, we must make a decision based on incomplete information.

If the question of animal protein were a cosmological question with no practical relevance, I would be coming to a conclusion based on insufficient evidence to justify it. My conclusion would in fact be partly speculative: I am filling in the gaps in evidence (specifically, an RCT demonstrating an effect of animal protein on cardiovascular disease risk) with logic. In a formal sense, that isn’t science.

However, because this is about life and death and a decision I must make one way or the other, what constitutes good or bad reasoning in this particular domain is entirely different than what constitutes good or bad reasoning in cosmology.

Let us use an example to illustrate the case, and to more clearly illuminate what health science is, compared to a science like cosmology.

What if all politicians based their policy decisions on RCTs, with perfect design, generalizability, power, etc.? No decisions would ever be made.

This is exactly the situation in many areas of health science, medicine, nutrition science, etc.

That is why comparing medicine to physics and decrying the former for not measuring up to the latter is asinine. It totally misses the point of what medicine is about: making practical decisions.

When the point is making practical decisions, evidentiary standards radically shift: they go from a) austere scientific principles to b) making use of whatever is at hand to accomplish the task in the most competent way available.

I’m not the first person to say these things. They are obvious.

As a final note, this does not mean that we should abandon careful scientific principles. On the contrary, the difficulty of coming to good conclusions because strong evidence is so often absent requires us to double down on rigor and try to produce more of it–to ground the decisions we must make in increasingly strong evidence. It is precisely because we often have such little good evidence that we should take good science so seriously.

But given the flaws in the science, how do we deal with it scientifically *now*? This is a philosophical question and a lot more comes into play than I have just addressed. But I wanted to make a bare-bones case explaining one point of view.

Also: I’m not saying that the effect of animal protein in rabbits is the basis of my views. Rather, I’m saying indirect evidence such as that suggested by animal models has a greater role to play in forming conclusions in the health sciences than one like cosmology or physics. In the latter, such evidence would be frustrating. In health science, such evidence is sometimes necessary!

It’s nothing mind-blowing, but I guess these things sometimes need to be said–or rather, these assumptions about the way we approach science should be articulated, because that is the first step to understanding–both of each other and of ourselves–and to discussion at a deeper level.


Help me communicate good science–and how to think about it–by supporting me at https://nutritionalrevolution.org/support-me/.

— Kevin

The ENCORE trial, a 4-month lifestyle intervention published in 2010, showed reductions in blood pressure (BP) similar to those achievable with drugs. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3633078/

Subjects were assigned to three groups: usual diet (UC), DASH alone (DASH-A), and DASH plus weight management (DASH-WM).

The reductions in blood pressure were 3.4/3.8 mm Hg (systolic/diastolic) for UC, 11.2/7.5 mm Hg for DASH-A, and 16.1/9.9 mm Hg for DASH-WM.

16.1/9.9 mm Hg reduction in blood pressure! That’s a lot.

For reference, the average BP reduction of drugs is 12.5/9.5 mm Hg.

By drug class, that’s:

Alpha1-blockers: 15.5/11.7 mm Hg;
Beta1-blockers: 14.8/12.2 mm Hg;
Calcium channel blockers: 15.3/10.5 mm Hg;
Thiazide diuretics: 15.3/9.8 mm Hg; and
Loop diuretics: 15.8/8.2 mm Hg.

Reference: https://www.ncbi.nlm.nih.gov/pubmed/16053990
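Putting the quoted figures side by side, a quick arithmetic sketch (the values are the ones cited above):

```python
# Systolic/diastolic blood pressure reductions (mm Hg), as quoted in this post.
reductions = {
    "usual diet (UC)":     (3.4, 3.8),
    "DASH alone (DASH-A)": (11.2, 7.5),
    "DASH-WM":             (16.1, 9.9),
    "average drug":        (12.5, 9.5),
}

dash_wm = reductions["DASH-WM"]
drug = reductions["average drug"]

# DASH plus weight management edges out the average drug on both numbers.
systolic_advantage = round(dash_wm[0] - drug[0], 1)   # 3.6 mm Hg
diastolic_advantage = round(dash_wm[1] - drug[1], 1)  # 0.4 mm Hg
print(systolic_advantage, diastolic_advantage)
```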

So this trial’s strongest intervention had approximately the same impact as commonly prescribed blood pressure medications. And because it is a lifestyle intervention instead of a drug, its benefits are likely to extend beyond blood pressure alone.

And the disease prevention potential is likely to be substantial. As the authors write: “Similar BP reductions have been achieved in placebo-controlled treatment trials and have resulted in a lowering of stroke risk by approximately 40% and a reduction in ischemic heart disease events by about 25%.”

But so what? This trial was conducted in a manner quite alien to how most medical practices might try to implement such lifestyle changes:

  1. Subjects underwent 2 weeks of controlled feeding according to their assigned intervention;
  2. During this, they met with a nutritionist twice per week;
  3. They were weighed every other day during this period;
  4. They were given a precise estimate of calorie intake based on sophisticated research models;
  5. Following these first 2 weeks, they were weighed and met for 30 to 45 minutes in small groups with a nutritionist every week for the next 14 weeks, making adjustments in intake to meet study targets;
  6. DASH-WM subjects received an additional weekly cognitive-behavioral weight loss intervention and attended supervised exercise sessions 3 times per week.

Here are the details on #6:

That is a lot of behavioral intervention.

Correspondingly, reported adherence was high. In the discussion, the authors write:

“The BP reductions achieved in our DASH-A and DASH-WM interventions were greater than those described in the PREMIER study and in other trials of lifestyle modification. The reasons for the greater benefit from the current ENCORE intervention could be attributed to the greater weight loss and excellent adherence to the DASH diet and exercise sessions.”

In other words, the behavioral intervention accounted for a substantial portion of the effect.

Just how useful, therefore, is this study? Sure, weight loss, physical activity, etc. reduce hypertension. But if a clinical trial’s intervention is so far outside of the scope or practicality of clinical practice, because it relies on such an intensive behavioral intervention, what use are the study’s findings?

What, for instance, is the long-term adherence to this diet in the real world?

Likewise, are there any ways to ensure such long-term adherence in an economical way in the real world?

Many studies, so intent on showing an effect, invest an inordinate amount of resources–far more than could realistically be deployed in regular clinical practice–in ensuring adherence, only to undermine the applicability of the study to real life.

Yes, DASH-WM produces clinically significant reductions in blood pressure. But does this mean that recommending a patient consume DASH and lose weight will produce the same impact?

The answer to that, according to the overwhelming sum of existing literature, is a resounding no. We know from the authors themselves that the more intensive behavioral support likely caused a substantial proportion of the effect seen. Likewise, many other lifestyle intervention trials show the same thing: behavioral management is the overriding determinant of the adherence to, and therefore success of, lifestyle interventions.

In other words, this trial, while laudable in its health impact, is clinically discordant: it cannot reasonably be applied in a clinical environment without serious protocol modification.

As valuable as this trial is in demonstrating an important if unclear physiological effect of lifestyle on blood pressure, it would be nice if lifestyle intervention clinical trials were designed bearing in mind the real-world clinical circumstances of practicing healthcare professionals. Until that happens, such trials are of dubious clinical application and of questionable relevance to guidelines.


Help me communicate good science (and smash the wellness industry) by donating.


I had the idea to do an Ask Kevin series. Whenever I get a question I want to spend some time answering, I will write about it here.

I was asked:

A lot of my psychological examinations of quacks and their followers are based on my own experiences of being a follower of many quacks. My first foray into the nutrition space was Paleo, and that lasted 10 years. There is a lot of quackery in Paleo circles. I have been up and down conspiracy lane so many times, personally, and to many great extremes.

I have disbelieved that LDL caused cardiovascular disease. I have hated Monsanto and glyphosate. I have believed the low-fat guidelines caused the obesity epidemic. I have thought we should all return to a pastoral way of life and jettison all industrial agriculture. I have thought that medicine is a profiteering scam, a way for elites to exploit and oppress everyone else.

So when I write about the psychology of quackery, I am often writing about my old self.

Another source of mine is Hannah Arendt’s The Origins of Totalitarianism, in which Arendt examines the rise of Hitler. Parts of Arendt’s analysis apply to any fake-news phenomenon or any quackery, not just to the Nazis. She addresses the popular appeal of lying in perhaps unparalleled depth, and I draw on that in making sense of my own experience and what I continue to see.

That said, I think there is some confusion latent in the question. So I will try to flesh out the psychology of lying among quacks in greater detail here:

When a lie is particularly subversive of something you hate, that can make it attractive and funny. Hearing such a lie is a relief. It’s like a weapon thrust into a perceived enemy or oppressor. This relief, this sense that an oppressive norm has been violated, this is what makes it funny.

Lying in a particularly subversive way can be very funny. This is the sort of motivation behind the “humor” of 4chan or of alt-right troll armies. Or the 2016 Trump campaign. Part of the reason its lies were appealing and sometimes very funny was because they were so subversive. 

That these were lies does not register with people locked into this way of seeing the world. To them, this is not a contest of facts. This is a contest of power. In a contest of power, the point is to violate the enemy, and the truth is secondary. Or rather, the truth is subsumed to the contest of power. Truth is what wins. Because the enemy is wrong, whatever can beat this enemy is true.

Resentment or hatred drives this way of seeing the world.

Sometimes the best way to win the power game is to lie. In fact, a brazen lie is exactly what is most likely to win the game: it opens a new hole, a new front. It also distracts. A hundred lies are like a hundred missiles, and they eventually become overwhelming. A multiplication of lies can be appealing to those engaged in The Fight. They might not do it themselves, but they will welcome and support those brazen enough to do it. This is in part how ever more extreme forms of health ideologies are getting their footing: they are building off of less extreme versions of themselves, iteratively.

For instance, carnivore from keto from Atkins from Paleo. Robb Wolf was long a supporter of Shawn Baker.

Brazenness is also what makes a lie funny. Did he just say THAT? Of course, because the lie may contain a bit of truth, indirectly, it is justified. It isn’t completely false, and it is effective.

Put another way, the enemy is evil; therefore, anything that harms him is right, good, and therefore true. Something that brazenly harms him, an outrageous lie, is all the more relieving and therefore funny (and good). If the enemy is sufficiently evil, then the more lies heaped against him, the better.

Somebody trapped in this mode of thinking sometimes does and sometimes does not have the capacity to detect lies. If they are unintelligent or lack self-awareness, they do not have the capacity. If they are intelligent and sufficiently cynical, they can lie because they are earnest in their lies. They think they are doing the right thing. This is not because they think that lies are good, but because they think that lies against the enemy are good.

So to answer the question: whether the person really believes that their guru is lying is a case-by-case matter. The intelligent person trapped in this way of thinking does it with some self-awareness (and is thus creepy); the unintelligent and unreflective one believes the lies and is mostly or entirely unaware that they are lies.

Most people are not at the extreme ends of this way of thinking, but it does seem to be becoming more common. Such people do not have the capacity to think in terms of “truth”. That is, they cannot criticize their own closely held ideas. Because they cannot self-criticize, their thinking will always take the form of conflict: they will always be trying to undermine their enemies, in sufficiently extreme cases even by lying or by endorsing their leaders’ lies.

This way of thinking is entirely of a different character than that of the scientist and is mendacious in its very core. But it is not experienced as bad by the person thinking this way. It is experienced as good. And in many cases, the lies will not be readily detected by the person endorsing them. Truth, as something arrived at through a careful and impartial examination of the facts, is not even a concern. In a way, to such people, such a notion is absurd. They are engaged in a battle of good versus evil. There is no room for such a notion of truth; indeed, such a notion of truth itself seems dishonest.

To understand this way of thinking, one must understand that truth for such people exists along an axis of ideology, not of impartiality. The ideology reigns supreme; it is the unquestioned truth; and those who follow it are its foot-soldiers and servants. Those who ask for critical thinking are the real liars, trying to conceal a “truth” that is easy for everyone who is not blind (or lying) to see.

Two exciting studies with relevance to the currently popular high-meat, low-carbohydrate diets have been published in the past month.

One, conducted by Kevin Hall and Juen Guo at the NIH and co-authored by researchers at Columbia, Pennington, and Florida, looked at the changes in glucose and lipid markers important for cardiometabolic disease risk after switching to a ketogenic diet.


The other, led by Ronald Krauss’s group in Oakland, looked at the effects of red vs. white vs. non-meat sources of protein on important lipid markers of cardiovascular disease risk.


For short, we will call the first study “the keto study” and the second by the name preferred by the investigators: “Animal and Plant Protein and Cardiovascular Health”, or APPROACH.

What did these studies find? The keto study, predictably, found an increase in LDL cholesterol levels:

The effect of saturated fat on LDL-C levels is well-established, both by controlled feeding trials (see this meta-analysis of 84 trials) and long-term clinical trials. For the latter, there are at least three relevant trials, whose relevant findings will be very briefly summarized below.

Here are the results from the Los Angeles Veterans Trial when saturated fat was replaced by linoleic acid (a polyunsaturated fatty acid):


The results from a large 1968 trial published in the Lancet, this time replacing saturated fat with soybean oil:


And this one from the Oslo Diet Heart study with a similar design:


All said, it is important to note that many factors affect the response of each person to saturated fat, as noted in a 2010 review by Krauss and colleagues:


But what many people may not have expected were the effects of the ketogenic diet on CRP, an important inflammatory mediator that closely correlates with the progression of cardiovascular disease. Here is an illustration of this point from a review on cardiovascular risk factors:

And here are the unexpected results from the keto study (second-to-last row):


If we average weeks 3 and 4, we have 1.27 for the baseline diet (“BD” in the above table) and 1.63 for the ketogenic diet, a nearly 30% increase in just 3-4 weeks.
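That arithmetic, spelled out (the two averages are the ones computed above from the study’s table):

```python
# C-reactive protein (mg/L), averaged over weeks 3-4 of each diet phase,
# as read off the keto study's table in this post.
baseline_crp = 1.27  # baseline diet ("BD")
keto_crp = 1.63      # ketogenic diet

pct_increase = (keto_crp - baseline_crp) / baseline_crp * 100
print(round(pct_increase))  # 28, i.e. "nearly 30%"
```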

To make sense of these results, it is worth noting some details about the study design. 17 overweight or obese men were assigned to consume a weight-maintaining baseline diet (15% protein, 50% carbohydrate, 35% fat) on a metabolic ward (meaning the subjects lived at the study site for the entire study duration, receiving all food from staff and having weight and other biomarkers checked periodically). Once shown to be weight-stable (within 2-3 weeks), the subjects spent 4 weeks consuming this baseline diet, followed by 4 weeks consuming the ketogenic diet.

Since the study was published, it has been suggested that the changes in CRP on the ketogenic diet may have resulted from the baseline diet itself, since the study was not randomized and each participant consumed a ketogenic diet after consuming a baseline diet. However, this study’s findings with respect to CRP are consistent with other studies, including low-carbohydrate diet proponent and Harvard professor David Ludwig’s 2012 low-carbohydrate diet study (“CRP tended to be higher with the very low-carbohydrate diet (median [95% CI], 0.78 [0.38-1.92] mg/L for low-fat diet; 0.76 [0.50-2.20] mg/L for low–glycemic index diet; and 0.87 [0.57-2.69] mg/L for very low-carbohydrate diet; P for trend by glycemic load = .05”) and another study by Rankin and Turpyn from 2007 (“Although LC lost more weight (3.8 +/- 1.2 kg LC vs. 2.6 +/- 1.7 HC, p=0.04), CRP increased 25%; this factor was reduced 43% in HC (p=0.02)”).

It is reasonable, therefore, to regard the effect of the ketogenic diet on CRP as potentially real. This is important because, although professional cardiovascular disease researchers believe both lipids and inflammation (among other factors) are important for cardiovascular disease, inflammation is frequently proposed as the sole cause of cardiovascular disease by many non-scientist advocates of low-carbohydrate diets. Yet if both harmful lipids and major inflammatory markers like CRP are increased on a low-carbohydrate diet, then this poses a problem for the claim that low-carbohydrate dieting is innocuous for cardiovascular disease risk.

To be clear, for many people, most markers of cardiovascular disease risk (including LDL cholesterol and CRP) will decrease on a low-carbohydrate diet if weight loss is robust enough (see, e.g., here and here). However, what the studies above suggest is that if weight is maintainable at these lower levels after switching to a higher-carbohydrate diet, on average these lipid and inflammatory markers will improve still further. A key unresolved question is whether the ability to maintain such weight loss on a low-carbohydrate diet sufficiently compensates for the increases in these cardiovascular disease markers, if long-term weight maintenance on a low-carbohydrate diet is preferred. I suspect that it is–and thus, for those who can only maintain their weight loss on a low-carbohydrate diet, the tradeoff may on balance be worthwhile. But this remains to be seen. (Also, this may depend on the extent of the weight loss. More long-term weight loss would more likely justify the tradeoff.)

A similar calculation might be at play for the weight-stable management of type 2 diabetes with a low-carbohydrate diet. Might the gains from better glycemic regulation be sufficient to offset the losses on lipids and CRP? The study authors, fortunately, put this concern to rest, noting that for subjects with type 2 diabetes, no such increase in CRP has been observed:


That inflammatory biomarker increases are also not seen among diabetics might suggest that the reduction in blood glucose is sufficiently anti-inflammatory so as to obviate any pro-inflammatory downside of the ketogenic diet.

This brings us to APPROACH, the second of the two studies that is the subject of this article. The findings of APPROACH:

  1. Meat versus non-meat protein increased LDL cholesterol;
  2. There was no difference in LDL response between white and red meat;
  3. Saturated fat increased LDL cholesterol;
  4. These effects were all independent of each other: meat increased LDL cholesterol independent of the saturated fat content, and vice versa.

The design of the study:

“Generally healthy men and women, 21–65 y, body mass index 20–35 kg/m2, were randomly assigned to 1 of 2 parallel arms (high or low SFA) and within each, allocated to red meat, white meat, and nonmeat protein diets consumed for 4 wk each in random order. The primary outcomes were LDL cholesterol, apolipoproteinB (apoB), small+medium LDL particles, and total/high-density lipoprotein cholesterol.”

The authors visualized the design as follows:

The diets were as follows:

Here is a sample menu:

The study was an outpatient study, and participants picked up the food weekly and were weighed at that time:

And here are the study’s main findings:

And here:

Just eyeballing the LDL findings from Table 3 (the first of the two tables immediately above), we see:

High-SFA vs. low-SFA red meat: 2.64mM vs. 2.35mM—a 12% increase
High-SFA vs. low-SFA white meat: 2.61mM vs. 2.38mM—a 10% increase
High-SFA vs. low-SFA nonmeat: 2.46mM vs. 2.22mM—an 11% increase

Meat (averaged) vs. nonmeat, high-SFA: 2.63mM vs. 2.46mM—a 7% increase
Meat (averaged) vs. nonmeat, low-SFA: 2.37mM vs. 2.22mM—a 7% increase

High-SFA meat (averaged) vs. low-SFA nonmeat: 2.63mM vs. 2.22mM—an 18% increase
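For readers who want to check the rounding, the comparisons above reduce to a few lines of arithmetic (values in mmol/L, taken from Table 3 as quoted in the text):

```python
# LDL-C by diet (mmol/L), as reported in Table 3 of APPROACH and quoted above.
ldl = {
    ("red meat", "high"): 2.64, ("red meat", "low"): 2.35,
    ("white meat", "high"): 2.61, ("white meat", "low"): 2.38,
    ("nonmeat", "high"): 2.46, ("nonmeat", "low"): 2.22,
}

def pct_increase(higher, lower):
    """Percent increase of `higher` relative to `lower`, rounded as in the post."""
    return round(100 * (higher - lower) / lower)

# High- vs. low-SFA within each protein source
for protein in ("red meat", "white meat", "nonmeat"):
    hi, lo = ldl[(protein, "high")], ldl[(protein, "low")]
    print(f"High- vs. low-SFA {protein}: {pct_increase(hi, lo)}% increase")

# Meat (red + white averaged) vs. nonmeat within each SFA arm
for arm in ("high", "low"):
    meat_avg = (ldl[("red meat", arm)] + ldl[("white meat", arm)]) / 2
    print(f"Meat vs. nonmeat, {arm}-SFA: "
          f"{pct_increase(meat_avg, ldl[('nonmeat', arm)])}% increase")

# The full contrast: high-SFA meat (averaged) vs. low-SFA nonmeat
meat_high = (ldl[("red meat", "high")] + ldl[("white meat", "high")]) / 2
print(f"High-SFA meat vs. low-SFA nonmeat: "
      f"{pct_increase(meat_high, ldl[('nonmeat', 'low')])}% increase")
```

The rounded results match the percentages quoted in the post.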

In other words, according to these data and assuming that they are generalizable to the population as a whole…

An average high-saturated fat, high-meat dieter would be expected to have an 18% higher LDL than a low-saturated fat, low-meat eater.

To put this in perspective, PCSK9 mutations that reduce the protein's ability to degrade the LDL receptor (and thus allow greater uptake of LDL from the blood) are associated with a 28% reduction in lifetime LDL cholesterol and an 88% reduction in the risk of cardiovascular disease.


Can we estimate the precise reduction in coronary heart disease risk resulting from a lifetime of such LDL reduction? After all, it is lifetime exposure to LDL, accumulating in an area-under-the-curve fashion, that causes cardiovascular disease. This is beautifully demonstrated in the following figure, which shows that a lifetime, genetic reduction in LDL produces a steeper decline in risk than the reduction observed in cohort studies with a median follow-up of 12 years, which in turn produces a steeper decline than that observed in randomized clinical trials with a median follow-up of 5 years:

We can estimate such lifetime risk by looking more closely at a study similar to the one that provided the blue line in the figure immediately above.

In short, the reduction in risk from lifelong (from birth) LDL lowering has been estimated by identifying genetic mutations that produce lower LDL-C levels, and then measuring the risk of coronary heart disease among people carrying each mutation. These data were then plotted to produce a graph from which the effect of lifelong LDL reduction on the risk of death from CHD can be estimated.


Since the figures given in the APPROACH paper are in millimoles per liter, we need to convert to mg/dL to use the above graph for estimates. That is done using the conversion factor 38.67 (1 mmol/L of LDL-C = 38.67 mg/dL). We can therefore recalculate the comparisons between diets in mg/dL, with risk reduction estimates, as follows:

High-SFA vs. low-SFA red meat: 102.1mg/dL vs. 90.9mg/dL—a 20% decrease in CHD risk
High-SFA vs. low-SFA white meat: 100.9mg/dL vs. 92.0mg/dL—a 15% decrease in CHD risk
High-SFA vs. low-SFA nonmeat: 95.1mg/dL vs. 85.8mg/dL—a 17% decrease in CHD risk

Meat (averaged) vs. nonmeat, high-SFA: 101.7mg/dL vs. 95.1mg/dL—an 11% decrease in CHD risk
Meat (averaged) vs. nonmeat, low-SFA: 91.6mg/dL vs. 85.8mg/dL—an 11% decrease in CHD risk

High-SFA meat (averaged) vs. low-SFA nonmeat: 101.7mg/dL vs. 85.8mg/dL—a 29% decrease in CHD risk

(Note: I do not have access to the formula used to generate this graph, so I eye-balled the graph to provide the above estimates. This “eye-balling” is within a few % of a formula-calculated estimate, and therefore adequate for illustration purposes.)
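The unit conversions themselves are easy to verify (the CHD risk percentages, by contrast, are read off the published graph and are not recomputed here):

```python
# Convert the APPROACH LDL-C figures from mmol/L to mg/dL using the
# standard conversion factor for cholesterol (1 mmol/L = 38.67 mg/dL).
MMOL_TO_MGDL = 38.67

comparisons = [
    ("High-SFA red meat", 2.64), ("Low-SFA red meat", 2.35),
    ("High-SFA white meat", 2.61), ("Low-SFA white meat", 2.38),
    ("High-SFA nonmeat", 2.46), ("Low-SFA nonmeat", 2.22),
]

for label, mmol in comparisons:
    print(f"{label}: {mmol * MMOL_TO_MGDL:.1f} mg/dL")
```

Rounded to one decimal place, these reproduce the mg/dL figures listed above.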

This is a huge reduction in risk. Given that 360,000 Americans died of coronary heart disease in 2016, a 29% decrease in risk would save the lives of 104,400 people each year, or more than 1 million Americans per decade. That is 34 September 11s per year. Such a reduction in LDL-C would reduce the rate of physical disability by a similar magnitude. The magnitude of this benefit, if applied universally across the population, would therefore be several-fold greater than universally prescribed statin therapy for all patients with dyslipidemia.
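The population arithmetic in the paragraph above is simple to reproduce, using the 2016 death count and the 29% risk-reduction estimate from the text:

```python
# Back-of-the-envelope estimate: lives saved if a 29% CHD risk reduction
# applied across the whole US population.
chd_deaths_per_year = 360_000   # US coronary heart disease deaths, 2016 (as cited)
risk_reduction = 0.29           # estimated CHD risk decrease for the full dietary contrast

lives_saved_per_year = chd_deaths_per_year * risk_reduction
print(f"{lives_saved_per_year:,.0f} lives per year")   # 104,400
print(f"{lives_saved_per_year * 10:,.0f} per decade")  # over 1 million
```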

This does not take into account the impact of elevated LDL on other diseases, such as stroke and other cerebrovascular disease, which is likely to substantially increase the magnitude of these rough estimates.

One objection to this kind of thinking is that such a dietary intervention may have pleiotropic effects: it may impact health in other ways than coronary heart disease. I am eager for readers to share well-designed studies that demonstrate such effects.

Another objection to this analysis is that the American diet is not likely to be precisely the same as that consumed in the high-SFA, high-meat group, thereby making the benefits less substantial than those I have presented. While this is true, even if the benefits are only half of what I have mentioned, they would be a major achievement of public health.

Another objection is that the benefits of a lifetime dietary approach achieving such LDL reductions overstate what would be achieved clinically, since as mentioned, lifetime exposure to LDL has cumulative effects, and dietary approaches adopted in, say, middle age are likely therefore to produce less benefit. This is true, but my goal was to demonstrate what a lifelong, public health oriented transformation could achieve. If we wanted to calculate what could be achieved clinically, we should look at the first of the two graphs, included here a second time for convenience:

In this case, the magnitude of the effect on LDL of the difference between the high-SFA, high-meat and low-SFA, nonmeat diets is only about 0.4mmol/L (remember: 2.63mM vs. 2.22mM), which along the red line translates into a modest ~9% reduction. This is perhaps one of the main reasons why clinical trials of LDL lowering via diet have shown such modest and uncertain effects: changing the diet late in life simply will not produce benefits as substantial as optimizing the diet from birth.

The authors note that the selective increase in large LDL, and not in medium or small LDL, implies that the changes may be less atherogenic than might be expected from the change in total LDL cholesterol. They write:

This claim is controversial, as the authors themselves acknowledge. Lipid experts indeed seem to be getting increasingly heated on the subject online:

Others leap into the fray:

An earlier review in the Journal of the American College of Cardiology agrees, writing that “Cholesterol, largely transported through the body as LDL-C, has clearly been established as a causal agent in atherosclerosis over many decades of extensive research. Regardless of size, LDL particles are atherogenic.” Kjellmo’s paper, linked in the tweet thread above, concurs.

Hedging probably because of the current lack of clarity in the literature, Krauss and colleagues conclude their paper:

It is important to note, however, that medium LDL certainly increased on the meat diet, and that there was a nonsignificant increase in small LDL. Would a larger number of subjects have produced more clarity on this question?

In any case, what is the cause of the higher LDL levels on the meat vs. nonmeat diets? One might suspect, because large LDL was the fraction raised the most and because dietary cholesterol predominantly raises the large LDL fraction, that the dietary cholesterol found in the meat was the culprit. However, Krauss et al. are quick to point out that this probably isn’t quite right:

One guess is that the dietary fiber in the matrix of the plant proteins (such as peas) might have contributed to the effects.

It is worth noting that fiber was nearly equal across all diets, suggesting that adding additional fiber in the form of a higher fruit and vegetable intake may not be sufficient to counteract the lipid-raising effects of animal protein.

According to these studies, low-carbohydrate diets high in meat and saturated fat are likely, on average, to raise both inflammatory and lipid biomarkers in healthy people. If avoiding atherosclerosis is the goal, then minimizing animal-sourced protein while maximizing other high-nutrient foods is likely to be best practice according to the totality of current evidence. There may be plausible arguments in favor of low-carbohydrate diets if they help to maintain a sufficient degree of weight loss to offset their inflammatory or lipid effects. Likewise, among persons with type 1 or type 2 diabetes, the euglycemic effects of low-carbohydrate diets may outweigh any negatives. All else equal, however, it may be appropriate to exercise caution about committing to a low-carbohydrate diet if weight and glucose can be maintained with other dietary strategies.

If you have enjoyed this post or my other work, please consider donating or becoming a patron. Your support enables me to continue spending my time doing this.


The first two parts of the macronutrient trend series, which explores the association between changes in carbohydrates and fat in the diet and the obesity epidemic in America, are here (Part 1) and here (Part 2).

In the first post of this series, I reposted a response of mine, involving a wealth of data from the USDA food balance dataset, to Dr. Ludwig’s claim that the low-fat diet caused the obesity epidemic. In the second post, I have reposted Dr. Ludwig’s response. This third post in the series is the final reply to Dr. Ludwig. Still deeper analyses will follow these first three posts.

Without further ado.

(Originally posted January 14, 2018 on Medium: https://medium.com/@kevinnbass/thank-you-for-the-thoughtful-response-dr-ludwig-b0db3314d658. Copied in full, with added figures in the text.)

Thank you for the thoughtful response Dr. Ludwig.

I agree that food availability data are problematic, for some of the reasons that I discussed and that you pointed out as well. So I also included self-reported intake data, which I think told a similar story. Still, I agree that this data, too, is flawed. My hope was that by using both data-sets, despite their limitations, we could at least say what the data say. Obviously, reality could differ from the data. And if it does, we all want to know that and understand why.

This means that the picture I have tried to draw could be qualified or even refuted if there was another data-set or interpretation and a compelling justification for why that data-set or interpretation is better. However, I am not persuaded that the data-set that has been offered as an alternative is better.


  1. Stephen and Wald use self-reported data and thus, as self-reported data, this data-set in principle has the same flaws as the NHANES data. Why should we use it over NHANES?
  2. Stephen and Wald’s data-set extends from 1920–1984, covering only 4 years after the Guidelines were published.
  3. S&W’s data-set includes studies using heterogeneous assessment methods. This could be a weakness if each method had particular biases yet different methods were used more or less frequently over time.
  4. More importantly, I am not sure that the S&W data-set support the “low-fat” argument. First, the decline in fat intake seemed to have begun before the dietary guidelines were released. And second, fat intake was projected at around 31% in 1920, yet the 1920 obesity rate was dramatically lower than in 2000, when the fat intake was higher, around 33%. See: https://pbs.twimg.com/media/DTjWBwkV4AEle_T.jpg

5. The CDC data *are* the NHANES data. They show that % fat fell from ~37% to ~33% from 1980 to 2000. As we both must agree: this is around a 4% change. And as we also both agree, this decrease in percentage corresponded to ZERO change in fat intake in grams: the percentage fell only because carbohydrate intake increased. Total fat intake in grams stayed constant from 1980 to 2000 according to the NHANES data. [Editorial: this point will be addressed in much greater depth in part 4. – KB]
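To see why a falling fat *percentage* is compatible with constant fat intake, here is a toy calculation with hypothetical numbers (not the actual NHANES values):

```python
# Hypothetical illustration: fat grams held constant while total calories
# rise (because carbohydrate intake increases), so the share of calories
# from fat falls even though fat intake in grams does not change.
FAT_KCAL_PER_GRAM = 9

def pct_calories_from_fat(fat_grams, total_kcal):
    return 100 * fat_grams * FAT_KCAL_PER_GRAM / total_kcal

fat_grams = 85                       # constant across both years (hypothetical)
kcal_1980, kcal_2000 = 2050, 2320    # added carbs push total calories up (hypothetical)

print(f"1980: {pct_calories_from_fat(fat_grams, kcal_1980):.0f}% of calories from fat")
print(f"2000: {pct_calories_from_fat(fat_grams, kcal_2000):.0f}% of calories from fat")
```

With these made-up inputs the fat share falls from ~37% to ~33%, mirroring the direction of the NHANES trend, while fat in grams never changes.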

6. Finally, I do not think that using USDA survey data for the first data point in 1960 and combining it with the NHANES data for 2000 represents best statistical practice. Comparisons between years should be made within data-sets, using the same methodology. If this is done, I believe the best estimate, barring a new analysis from a different data-set that covers the years in question (1980 to 2010), is that fat intake changed by about 4%. Again, this is consistent between USDA and NHANES, using within-data-set comparisons.

Moving on, we agree that in, e.g. the 1980 DGA, the brochure recommended increasing carbs. But ONLY if one limited fats. Moreover, it advised eating complex carbohydrates, not refined, and advised against sugars. See screencap: https://pbs.twimg.com/media/DTjaOtCV4AAD0w1.jpg

And original source: https://health.gov/dietaryguidelines/1980thin.pdf

A diet rich in complex, unrefined carbohydrates is compatible with good health, as e.g. some of the macrobiotic diet trials for T2DM have recently shown (see my recent Twitter feed). But Americans took the message “carbohydrates,” and ignored the “complex” part.

The question should also be asked: Is it possible that Americans ate more refined carbohydrates for reasons quite apart from their (mis)understanding of the guidelines? I cannot answer this question, but I think it is a legitimate one.

Last, there is no disagreement that, in the context of the standard American diet, overconsuming refined carbohydrates is harmful to health, and that this is what Americans are doing.

We also agree that present low-fat recommendations should continue to be a subject of debate. Clearly we need to debate every aspect of the dietary guidelines. Again, my claim is only an historical one: that it is difficult to interpret the dietary trend data as one of decreasing fat, and thus, that it is difficult to blame the guidelines’ previous low fat recommendations for the obesity epidemic.

I remain open (and eager) to revise my positions. If we want to get policy right, clearly the facts need to be right too. I believe that doing this and making a positive impact on others’ lives is what both of us are primarily interested in achieving.

I thank you for the opportunity to have discussed this issue with a researcher of such a high level of scientific achievement and important cultural legacy. It is an honor.




Other parts in this series:

Part 1

Part 2

In response to my first post, Dr. Ludwig responded with his own. I have recopied it here in full, with his permission. Additionally, for illustration, I have added figures from several of the papers that he references in the appropriate places in the text.

(Originally posted January 10, 2018 on Medium: https://medium.com/@davidludwigmd/hi-kevin-appreciate-the-scholarly-debate-58a6b7df1bff. Copied in full, with added figures in the text.)

Dr. Ludwig:

Hi Kevin, appreciate the scholarly debate. Just a few brief comments. First, to be clear, I said your statistics were misleading, not that you’re wrong.

Ok, so what’s misleading? First, food availability data (see the bottom left caption in most of your figures) do not accurately reflect what is actually consumed. Nor do added oils, the focus of your comments elsewhere, reflect total fat intake. We know that the food supply contains about 3,900 kcal per person. Though actual calorie intake has gone way up, it’s nowhere near that level. Also, we need to be careful of self-report data, which has much selective bias.

Admittedly, data on dietary intakes can be problematic, but the long term trends are clear. Stephen and Wald summarize data in the US during the 20th century, estimating that in 1960 the proportion of calories as fat was about 40%.

The CDC has good data that proportion of dietary fat decreased to about 33% by 2000 (ie, near the government recommended level). The 2 figures in that report tell the story at a glance.

Yes, fat consumption in gram amount has remained about the same, but that’s just the point. We’re eating more of everything — arguably in part due to being hungrier from all the highly processed carbohydrates. As you say: “according to NHANES data, carbohydrates went up dramatically” from 1980 to 2000, consistent with the government advice to eat a high-carbohydrate, grain-based diet (those 6 to 11 servings a day!).

Of note, the “American Paradox” of decreasing proportion of fat as obesity rates rose was described by Heini and Weinsier 20 years ago — though it’s really no paradox at all considering the metabolic effects of processed carbohydrates.

Indeed, multiple recent meta-analyses show that high carbohydrate diets as actually consumed are demonstrably inferior to all higher fat diets for weight loss. Unfortunately, this high carbohydrate intake is also strongly linked to mortality among US cohorts. [Click here for the most relevant table from this paper. — KB]

So let’s certainly continue exploring this, but I don’t think that concerns about the low-fat diet recommendations are based on myths that need killing.


Enjoy this article? Proceed to Part 3 of the macronutrient series, here [to be added].



There is a narrative online that Americans have become obese because they have too closely followed the Dietary Guidelines. I looked at this suggestion in depth in my post here and found it to be lacking. This post will have a narrower focus and look specifically at refined grains and sugar.

So did the Dietary Guidelines cause Americans to eat more refined grains and sugar? Let’s look at the facts.

Americans currently eat about a quarter of the whole grains recommended by the current Dietary Guidelines for Americans, and about double the recommended refined grains.

Reference: https://health.gov/dietaryguidelines/2015/guidelines/chapter-2/a-closer-look-at-current-intakes-and-recommended-shifts/

If Americans eat too many refined grains, can it be the Guidelines’ fault?

Indeed, a 2010 study showed that only 1% of Americans eat the quantity of grains recommended by the dietary guidelines. One percent! Meanwhile, as we can see above, the average American far exceeds recommended intake of refined grains.

Here is a screenshot of the 2015-2020 Dietary Guidelines for Americans showing that it recommends at least half of grains consumed to be unrefined:

Reference: https://health.gov/dietaryguidelines/2015/resources/2015-2020_Dietary_Guidelines.pdf

Some claim that only the current Guidelines include the recommendation to consume unrefined grains, and that the early Guidelines didn’t. This is false. The Dietary Guidelines for Americans from 1980 repeatedly emphasized eating unrefined carbohydrates:

Reference: https://health.gov/dietaryguidelines/1980thin.pdf

Indeed, the entire 4th guideline was dedicated to recommending unrefined carbohydrate foods. Since there were only 7 guidelines, that means 14% of the 1980 DGA was devoted to telling consumers to eat unrefined carbohydrates.

Here is the second and final page of the 4th guideline again:

Reference: https://health.gov/dietaryguidelines/1980thin.pdf

Its summary statement says: “Select foods which are good sources of fiber and starch, such as whole grain breads and cereals, fruits and vegetables, beans, peas, and nuts.”

Does it say to eat refined grains and sugar? Does it say to eat donuts and pizza? Does it say to eat ice cream and cupcakes? No. It says to eat foods that Americans still don’t eat very much of. Many people do not eat any.

On the topic of sugars, do the Dietary Guidelines tell people to eat a bunch of sugar to “replace fat”–as is often claimed? No. The Guidelines tell people to limit sugar. In fact, #5 of the 7 guidelines is dedicated to this! Here is its summary statement:


According to USDA food availability data, Americans today eat more refined grains AND sugar than they did in 1970. Americans have not followed the dietary guidelines. Americans have largely ignored them.

Reference: https://www.ers.usda.gov/amber-waves/2017/july/us-diets-still-out-of-balance-with-dietary-recommendations/

There is a large and thriving low-carbohydrate diet industry worth hundreds of millions of dollars. One of its major figureheads is @DietDoctor1, who spreads this kind of misinformation on his website, in a way strikingly similar to many anti-vax websites:

This is not a small website! It claims to have over half a million subscribers and runs a lucrative membership-only part of the website.

On this website, many of the above myths are propagated widely. For example, here is @bigfatsurprise‘s presentation on the topic:


In this video, @bigfatsurprise propagates many of the myths that I have debunked in this thread, in large part by slickly presenting data that superficially supports her point of view and excluding data that does not. Because there is so much data, it is very easy to do this.

Spreading these myths is highly lucrative to these authors and doctors. The video I posted is only a preview. The full version is only available to those who have purchased memberships.

I am not sure what to do about this, but I am working on it with others. It is important to be vocal and assertive. These people are exploiting and generating confusion about nutrition to make millions. People deserve better than this.

(For more information about the carbohydrate and fat trends and their relationship to the dietary guidelines, please refer to the first half of my post The data overwhelmingly indicate that Americans do not follow the Dietary Guidelines.)



Stephen Phinney and Jeff Volek are among the most important minds of the low-carb movement. Between them they hold three advanced degrees (Volek is a PhD; Phinney has both an MD and a PhD), and they have published hundreds of scientific papers on low-carbohydrate diets going back decades. With almost 10 years of low-carbohydrate dieting under my own belt, and having read almost everything published on low-carbohydrate diets in other popular books, last year I was excited to sit down and read their popular primer, The Art and Science of Low-Carbohydrate Living, and really deepen my understanding.

Looking for a scientific treatment of the strengths, weaknesses, and gaps in the research record on low-carbohydrate diets, as I read I was first a little disappointed by the book. Then I was appalled. The first two chapters called into question everything that followed. This was a turning point for me in understanding low-carbohydrate diets and the people who advocate for them.

Phinney and Volek engage in rampant conspiracy mongering. I wanted to like the book. But I couldn’t. The content of this book and its worldview, in my opinion, so poisons the well of scientific discussion that it makes any serious adherent to the book immune to rational argument.

It is said that a great piece of literature raises more questions than it answers. I believe the same could be said of a great piece of science. It is also said that a work which only provides answers is not literature–it is propaganda. By this definition, Phinney and Volek’s book is propaganda. It needn’t be, because I know that there remain many open questions about low-carbohydrate diets.

Here is my thesis:

I believe this is a poisonous book. By believing that the other side is responsible for a conspiracy, one is justified in one’s own selective and misleading use of scientific research. I will now try to demonstrate what I mean by this.

These first two chapters are about indigenous groups that consume low-carbohydrate diets. Here is the key introductory paragraph to these chapters:

Before commenting, I should introduce my background. I have two bachelor’s degrees: a Bachelor of Science in biology and a Bachelor of Arts in anthropology. I decided to major in anthropology in part because after high school, I read Loren Cordain’s book The Paleo Diet, and it changed my life. I was Paleo. I was also an angry kid, and I thought I wanted to learn about hunter-gatherers, because modern society was a bum deal.

So, as an anthropology major, imagine my perpetual surprise whenever I hear that hunter-gatherers were adapted to consuming low-carbohydrate diets. Phinney and Volek’s (hereafter, PV) claim that this was the case is never referenced. The reason it is never referenced is that it isn’t true. Every dataset that looks at hunter-gatherer diets–and there are several, mainstays of anthropologists over the past century, employing varying methodologies–shows a range of carbohydrate intakes.

PV present these three groups as “examples” of low-carbohydrate hunter-gatherer groups, when in fact they are exceptions. Of the hundreds of hunter-gatherer groups observed, these are the only ones that might provide evidence for PV’s thesis.

The notion that Homo sapiens lived in barren or temperate regions–including during the Ice Ages–before very recently is also not supported by any evidence that I am aware of. When the Ice Ages occurred in Europe, Homo sapiens retreated to the Mediterranean:

They only recolonized Central and Northern Europe relatively recently. Scandinavia, for example, was only colonized by humans around 12,000 BCE–an eyeblink in human evolutionary history. Indeed, the lifeways of Scandinavia are only as old as agriculture itself.

The famous Inuit? Same deal. According to the book “A Paleohistory of the North: Human Settlement of the Higher Latitudes”, Alaska too was only colonized around 12,000 BCE, with the Inuit themselves arising on the scene only 2,000 years ago:

The Inuit arrived in Alaska after the fall of Athens at the hands of the Peloponnesians; after the deaths of Socrates, Plato, Aristotle, Alexander the Great, and Julius Caesar; and approximately during the reign of Augustus in Rome. There was indeed a reason that the great civilizations began in Southern Europe: everyone else was just getting started. The existence of humans in the Northern climates is, to be clear, neolithic–and in some cases almost modern–in its novelty.

In point of fact, modern genetic studies show that the Inuit (a population from northern Alaska) spread through the Arctic less than 700 years ago, genetically and culturally replacing the Paleo-Eskimos, who had been residents of the Arctic for about 4,000 years. What this means, in turn, is that the Inuit were just getting settled in as the Renaissance was getting underway in Europe. Yet PV want to propose that the Inuit are an ancestral population!!!

When PV were imagining our ancestors chasing around woolly mammoths, they were probably actually eating pasta with Francesco somewhere in the South. OK, minus the pasta. But you get the point. What PV have done in this passage is to pretend that low-carbohydrate living in these harsh climates was the normal human lifeway. The Ice Age was a recent event, and humans had a very ambivalent relationship to it, with permanent settlements largely only existing in the South.

What this means is that the Inuit, Masai, and Bison People were not necessarily representative of our human ancestors. In fact, they were chosen by PV precisely because they stand out as exceptions, even as PV propose they represent the rule. And they aren’t even exactly what they seem either. We have only seen this in the case of the Inuit, but we shall soon see it with the Masai as well.

Yet, through a bit of hand-waving, they try to make this work:

Again, no reference is provided to support that most of the world’s cultures survived on low-carbohydrate diets–in fact, this is at odds with the available evidence, which shows copious dependence of our ancestors on a wide variety of foods, including fruit, tubers, legumes, and grains.

PV’s discussion of Ireland is unreferenced and just wrong: archaeologists believe that cereals, including wheat and barley, arrived in Ireland almost 7,000 years ago and were extensively cultivated.

The discussion on Scandinavia and Russia is equally unreferenced. It is difficult to know what to take seriously. Even if true, the amount of gene flow because of repeated migrations, especially in Scandinavia from mainland Europe during the course of the neolithic (except perhaps for Finland) would substantially undermine the argument.

Yet behind these attempts at justification lingers a rather curious value judgment: “low carbohydrate cultures were suppressed by the agricultural imperative.” A whiff of Jared Diamond and Marshall Sahlins and an insinuation of romanticization of hunter-gatherers so often latent among Paleo diet writers makes itself felt here. There is much more of this in the book, but we shall have to pass it by for our purposes at this time.

PV, perhaps sensing that their evidence is paper thin, now appeal to a supposed gap in evidence:

In other words, PV argue, because writing didn’t exist, we cannot conclude from the thinness of the evidence that these supposed low-carbohydrate cultures didn’t exist. Here PV make an incredible claim: that >99% of ethnographic observers misreported their findings, and that the ones who told the truth are the ones that PV report on. What a convenient and ridiculous remark.

PV give no evidence that this happened, content instead to accuse most observers of faking data. Still, if this were only an empirically unfounded claim, we might chalk it up to (serious) disagreement about interpretation. But it isn’t only unfounded: it also simply doesn’t make any sense.

Here’s why. Ethnocentricity is a commonplace pitfall that most ethnographic observers are aware of. This is why it is so controversial when they report on infanticide or cannibalism: the observers might be biased, trying to paint HGs as “primitives” or “savages.” Indeed, there is a (dominant) school of thought within anthropology that asserts that there is no such thing as an objective cross-cultural observer, precisely because such observers will always distort their analysis with their own biases. Even perception is biased.

But report they do. Outside the West, we know from cross-cultural observers that infanticide is a cultural universal. And we know that in diverse cultures, Bob is frequently offered as a blood sacrifice, or even taken as the main course on festive occasions. (Sorry Bob.) And this kind of reporting goes back to Herodotus, perhaps the most ethnocentric and biased cross-cultural observer in the Western canon. Indeed, most scholars now believe that Herodotus way over-reported differences.

Why? In part because they’re interesting. That’s why cross-cultural observers are interested in other cultures. They want to find differences. Through these differences, in turn, it is frequently argued that cross-cultural observers want to understand themselves.

That is why I studied anthropology: because I wanted to know the range of possibilities of human existence. Because I wanted to learn about something different. That is one of the major motivations of cross-cultural observation, if not the major one.

Throughout Western history, “primitives” have always been a foreign Other, an Other that contrasts with modern civilization; “primitives” tell us by their negative example who we are. Thus, the tendency in anthropology has been to exaggerate differences, not to suppress them.

We need to ask, therefore, why it has been consistently OK to talk about eating Bob but not about restricting carbohydrates? About bizarre sexual rituals but not about eating only meat? About pagan idol worship, but not about foregoing grain?

Before proceeding, let us take a brief look at the paragraph that comes after PV accuse 99% of cross-cultural observers of fabricating data, apparently because finding other human societies that don’t eat wheat would be too much to bear existentially.

Here PV, thinking they are supporting their case, provide a perfect counterargument to their own argument. According to PV, when observers encounter HGs who do not live in houses, they report this accurately and call them derogatory things like “uncivilized” or “unsettled.” Yet when they encounter HGs who do not eat carbs, they fabricate data.

This simply doesn’t make sense. It is like PV are trying to refute themselves.

At this point I started to become incredulous. Is it really possible that PV are truly the excellent minds that many claim them to be? Or is this a case of “hugely overrated”?

PV’s argument that cross-cultural observers couldn’t stomach low-carbohydrate HGs demonstrates a lack of familiarity with the ethnographic literature and the anthropological tradition, and just plain bad reasoning. What is most striking, however, is the total lack of self-consciousness about this terrible way of arguing, set against the confidence of PV’s prose.

It’s questionable, moreover, whether PV’s examples serve their own argument. The Inuit are widely understood to carry genetic mutations that impair their ability to enter ketosis, the very state that PV claim is ideal and evolutionarily normal: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4225582/

Also, as Evelyn Carbsane has pointed out, Masai women demonstrably do not eat low carbohydrate diets. Image below.

Original ref: https://books.google.com/books?id=T7aaAAAAIAAJ&pg=PA11#v=onepage&q&f=false
Evelyn’s post: http://carbsanity.blogspot.com/2015/03/masai-women-ate-low-fat-diet.html

What are the implications? Hyuge. If we want to argue that the Masai are adapted to low-carbohydrate diets, we can claim this only of the men, not the women. Yet… that’s not how genetics works.

Probably knowing that simply discarding the evidence wouldn’t be sufficient, PV also accuse observers of not living with the people they report on. Excerpt:

This is false. There are sections of libraries filled with books on HG populations by people who lived with them. Yet again we see PV accusing their opponents not just of dishonesty but now of laziness. Worst of all, in order to do this, they have to make things up.

In order to dismiss the archaeological evidence as well, PV conclude that plant foods were probably not eaten, or were fed to dogs:

10/10 intellectual gymnastics. And no, there is not “some” data that dispute “these proportions”. Rather, virtually all available data do.

Let’s unpack this. First, they say “not all that is written of hunters and nomadic shepherds is incorrect.” No: they mean only to impugn everything that conflicts with their theory. Which brings us to the next point.

What do PV mean when they say “when assessed against a modern understanding of metabolism”? Just what it sounds like: they cherry-pick the evidence that is consistent with their theory (and even then, not really).

As for a “sparse but useful truth”: whether simple or complex, I prefer my truth unqualified, and not dependent, in order to function, on a vast intellectual conspiracy by thousands of investigators over hundreds of years to hide data.

If this post seems to drip venom, that’s because reading these chapters really did piss me off. And it makes me wonder: if the evidence from archaeology is so readily dismissed when it conflicts with their ideas, what would keep PV from doing the same when evidence from their own fields conflicts with their assumptions?

This is why I do not trust or value the work of PV. That is not to say that it cannot be valuable; it can be. But I question whether they are objective interpreters of their own science, and I now apply particular scrutiny when evaluating any work on which either of them is an author.


As an MD/PhD student, I am passionate about communicating the cutting edge of medical science and fighting misinformation. If this post is of use to you, please consider donating to my Patreon. Your contribution will make a significant positive impact, and I will be personally very grateful.

You can sign up as a patron at my page, here.

You can also find me on Twitter at @kevinnbass.