How Much Running Is Too Much? Critique of a New Research Study

This month, a new research study was published that examined the relationship between changes in running volume and injury rates.

For runners, this is a terribly important topic. A lot of discussion and advice, especially for newer runners, revolves around how much running is appropriate … and how much is too much. The general consensus is that too much running is bad, and staying underneath that threshold is important for longevity.

So when a study by Frandsen et al in the British Journal of Sports Medicine purported to make some novel, significant findings, it piqued my interest. I first saw this study shared in Brady Holmer’s Run Long, Run Healthy newsletter, and later I saw it shared on Reddit.

But when I took a closer look at the study – “How much running is too much? Identifying high-risk running sessions in a 5200-person cohort study” – I wasn’t so convinced that they had discovered anything useful. And I think some of their conclusions are questionable – especially if you take them at face value and use them to guide your own training.

Let’s run down what the study examined, what they concluded, and what questions you should be asking.

The Study Methodology in Short

This research study examined the relationship between changes in running volume and onset of injury.

Their dataset included 5,200 runners, recruited by Garmin, who agreed to share their data over the course of 18 months for research purposes.

The group was about 80% male and 20% female. Their years of experience ran the gamut, and the overwhelming majority started running between ages 15 and 45 (with 30-39 being the most common starting ages). The vast majority ran 2-4 times per week, and less than 7% ran 6+ times per week. About half had run a marathon or longer, while about a quarter had run between a half marathon and a marathon.

They identified three ways to track change in running volume. First, they looked at the difference between a run’s distance and the longest run in the past 30 days (single-session change). Second, they looked at the acute to chronic workload ratio, and third, the ratio of one week’s volume to the previous week’s. They bucketed changes into four groups – less than 1% increase, 1% to 10%, 10% to 30%, and 30% to 100%.
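
As a rough sketch of how these three metrics might be computed – note that the function names, the exact windows, and the bin edges are my reading of the paper’s description, not its actual code:

```python
# A sketch of the study's three volume-change metrics as I understand them.
# Function names, windows, and bin edges are my own reading of the paper's
# description -- not its actual code.

def single_session_change(distance, longest_in_past_30_days):
    """Percent change of this run vs. the longest run in the prior 30 days."""
    return (distance - longest_in_past_30_days) / longest_in_past_30_days * 100

def acute_chronic_ratio(last_week, prior_weeks):
    """Acute load (most recent week) over chronic load (average of prior weeks)."""
    return last_week / (sum(prior_weeks) / len(prior_weeks))

def week_over_week_change(this_week, last_week):
    """Percent change of this week's volume vs. last week's."""
    return (this_week - last_week) / last_week * 100

def bucket(pct_increase):
    """Bin a percent increase into the study's groups."""
    if pct_increase < 1:
        return "<1%"
    elif pct_increase <= 10:
        return "1-10%"
    elif pct_increase <= 30:
        return "10-30%"
    return "30-100%"  # the study's top bin

# Example: a 16 km run after a 30-day max of 14 km is a ~14% single-session jump
print(bucket(single_session_change(16, 14)))  # -> 10-30%
```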

Finally, they classified injuries in one of two categories – overuse injuries and traumatic injuries. The focus of the study was on overuse injuries, because these are most often associated with an increase in volume.

Note that they defined an injury – and required a runner to self-report that injury – as something that was “painful and irritating, leading to a reduction in running activity.” A “problem” – which wasn’t considered an injury – was something less severe. It could still be painful and irritating, but it didn’t result in a reduction in running activity.

The Findings of the Study

The summary of the study includes two bulleted conclusions:

  1. Session-specific running distances that exceed the longest run undertaken in the prior 30 days by 10% or more significantly increase the risk of lower extremity injury.
  2. Caution is advised when relying on recommended training load calculations such as the acute:chronic workload ratio and gradual week-to-week changes, as no association – or even an inverse association – between these approaches and injury risk was found.

The first conclusion, in short, claims that large single-session increases in distance are more likely to result in injury. The further an individual run goes beyond the longest run of the past 30 days, the more likely an injury becomes.

In the discussion, the authors go on to state the following:

Runners should avoid running a distance in their current session that exceeds 10% of the longest distance covered in the previous 30 days. However, progressions up to 10% are not necessarily safe either and carry a degree of risk. Although not statistically significant, a progression between 1% and 10% translated into an increased rate of 19%.

The second conclusion is based on the absence of a relationship between the acute to chronic (A:C) workload ratio or week to week changes and injury rates. The data showed, somewhat counter-intuitively, that an increase in either of those measures was correlated with a lower rate of injury.

The Authors’ Self-Identified Strengths and Weaknesses

The study includes a section on self-identified strengths and limitations. These often get lost when a study is reduced to a headline, so I want to take a moment to emphasize them before I get to my own critique.

The strengths include:

  1. The sample size. 5,200 is a large cohort, and few (if any) other studies have used anywhere near this many runners.
  2. Ongoing follow-up with participants, which enabled the researchers to connect injuries with specific moments in training.

I think the first one is important. It’s rare to see an exercise science study with a large sample, so this is awesome. But large datasets come with their own complications – as evidenced by the limitations.

The limitations include:

  1. Validity of self-reported outcomes. Runners reported their injuries, not medical professionals.
  2. The use of distance as the primary variable and the binning of that variable to create groups.
  3. The use of observational data to draw conclusions about causality.

I’ll revisit some of these limitations below – but I wanted to point out that the authors do acknowledge them. These limitations are, to some extent, driven by the fact that this is a large sample. Some could be mitigated with a better statistical analysis, but it’s inherently difficult to tease clear conclusions out of a large observational dataset.

My First Impressions of the Findings

Without getting too detailed in the critique, here are my first impressions of the findings.

On the one hand, the finding that a sudden increase in maximum single-session mileage leads to an increase in injury rate makes sense. If you’ve never run more than 10 miles, and you suddenly run 16 miles, you’re going to risk overdoing it.

Obviously, an increase doesn’t guarantee an injury. You can increase and not get injured. But the general conclusion – that a greater increase leads to a greater likelihood of injury – passes the smell test.

On the other hand, the finding that a week to week increase in mileage leads to a decrease in injury rates … doesn’t make sense. It just doesn’t pass the smell test.

This screams of mis-measurement to me. I’m sure that the data, as collected and analyzed, support this conclusion. I don’t think the authors lied or misrepresented anything.

But I think they asked the wrong questions or allowed confounding variables to drive the correlation. The idea that a) increasing weekly mileage leads to a reduction in injury rates and b) you shouldn’t be worried about increases in weekly mileage is just asinine. And this conclusion completely ignores their self-stated (and quite obvious) limitation that correlation does not indicate causation.

A More Thorough Critique of the Methodology

I’ve read over the study a few times now, and I wish I had access to the raw data. This sounds like an amazing dataset, and if I could re-do the statistical analysis, I think it could help satisfy a few of my questions and critiques.

But I don’t have access to the data, so the best I can do is raise these questions – and caution anyone that is thinking about making training decisions based on these findings.

First, consider the sample. While many of these runners have been running for a long time, I’d classify them overall as casual runners. The majority run three or fewer times per week. This isn’t a judgement on them as runners – but it calls into question who these conclusions apply to. This sample is more representative of the runner attempting their first marathon, or the occasional marathoner who ramps up once a year, than of the runner who trains consistently and is considering more volume for their next training cycle.

Second, runners are only classified by what they’ve done in the past 30 days. There’s no way to distinguish between runners who are breaking new ground and those who have taken some downtime after a previous race (or injury) and are returning to running. Both of those groups would be interested in specific guidance – but I’m not so sure that the guidance would be the same. Anyone who’s trained at 70+ miles per week knows that they can taper, race, take time off, and return to high mileage much faster than someone who is approaching 70 miles per week for the first time.

Third, the methodology connects an injury to the most recent run – but this may obfuscate the relationship. If an injury isn’t traumatic, it may take a day or two to surface. Let’s say you go for a long run on Sunday – a 14 mile run after a previous max of 12 miles. That’s roughly a 17% increase. But if you feel OK Sunday night and only notice the injury after Monday or Tuesday’s run, that injury is going to be associated with a shorter run.

I like the question (are injuries associated with single session increases?), but the methodology makes it possible (likely?) that many of those injuries will be associated with shorter runs.

A specific recommendation would be to conduct the analysis again – but this time, compare the longest run of a week to the longest run of the past 30 days. I think this would reinforce the conclusion that bigger jumps in single-session maximum distances lead to increased rates of injury.
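
Here’s a minimal sketch of what that re-analysis might look like, assuming a per-run log of dates and distances (the data and column names here are hypothetical, not from the study):

```python
import pandas as pd

# Hypothetical run log -- dates and distances are invented for illustration.
runs = pd.DataFrame({
    "date": pd.to_datetime(["2024-05-01", "2024-05-05", "2024-05-12",
                            "2024-05-19", "2024-05-26", "2024-06-02"]),
    "km": [10, 12, 13, 12, 14, 16],
}).set_index("date")

# Longest single run in each calendar week.
weekly_max = runs["km"].resample("W").max()

# Longest single run over the trailing 30 days, as of the end of the
# previous week (shifted so a week isn't compared against itself).
prior_30d_max = runs["km"].rolling("30D").max().resample("W").last().shift(1)

# Percent change: this week's longest run vs. the prior 30-day max.
pct_change = (weekly_max - prior_30d_max) / prior_30d_max * 100
print(pct_change.dropna())
```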

My Biggest Complaints About This Study

But here are my biggest complaints about this study.

I think a major failing is the way they explore long term changes – and the conclusions they claim to draw from them. They don’t articulate any kind of theory that could explain the phenomenon they found – and at the end of the day, data-driven conclusions are questionable if they don’t come with a plausible explanation.

It just doesn’t make any sense. Taken literally, it says that if I increase my volume by more than 30%, I have a lower chance of injury than if I keep my volume the same. Make it make sense.

Consider what circumstances would lead to an acute to chronic workload ratio increase of over 30%. This is pretty rare, unless you take a week off (for vacation) or a few weeks down (tapering for a race). For example, in the last five weeks, I’ve run 70, 60, 75, 55, and 42 miles. The last two are low because I was on vacation.

Next week, I plan to run 75 miles, followed by 80 miles, as I ramp up for fall marathon training. My A:C workload ratio will be about 130% for a couple of weeks due to those vacation weeks – but 75-80 mpw is right in line with what I’d been running for months before that.

In this case, there’s a reason that a high A:C ratio is unlikely to lead to injury: the ratio is artificially inflated by weeks that were lower than usual. Frankly, you aren’t going to find many examples of huge increases in load that aren’t the result of previous downtime.
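
To make that arithmetic concrete, here’s a minimal sketch using the mileage above. I’m assuming the common definition – acute load is the most recent week, chronic load is the average of the four weeks before it – since the paper’s exact windows may differ:

```python
# Reproducing the back-of-the-envelope math above, assuming the common
# definition: acute = most recent week, chronic = average of the four
# weeks before it. The paper's exact windows may differ.

weeks = [70, 60, 75, 55, 42]   # last five weeks of mileage (the low
                               # weeks at the end are vacation)
planned = [75, 80]             # the next two planned weeks

for acute in planned:
    chronic = sum(weeks[-4:]) / 4
    print(f"acute={acute} mi, chronic={chronic:.0f} mi, A:C={acute / chronic:.2f}")
    weeks.append(acute)
```

Both weeks come out around 1.3 – a “30%+ increase” on paper – even though the planned volume is simply a return to my pre-vacation normal.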

On the flip side, the runners who are actually increasing their volume over the long term – and more likely to succumb to overuse injuries – are probably increasing by smaller amounts week over week. So smaller weekly increases could correlate with higher injury rates. But there’s nothing in the methodology to distinguish runners who are continuously increasing from those who only increase a little bit here and there.

Finally, there’s no consideration of the intensity of the run. As the authors note in their limitations section, they are looking only at mileage. And if a runner actually keeps their intensity low, it’s less likely that week to week increases will lead to injuries. Another possible outcome is that big increases in volume are associated with reduced intensity – because that’s smart training – and that leads to fewer injuries, while smaller increases are associated with maintained or increased intensity – because that’s what novice runners do – and that results in more injuries.

A measurement of workload that combined both mileage and intensity would result in a broader range of outcomes, and this could also lead to a better correlation of problematic training to injuries. Ignoring intensity introduces a lot of ambiguity, and that confounds the correlation.
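
As one illustration of the kind of combined measure I have in mind, here’s a sketch of session-RPE load (minutes multiplied by perceived exertion) – a well-known approach, though not something this study used:

```python
# Session-RPE load: duration (minutes) x rating of perceived exertion (1-10).
# A well-known way to fold intensity into workload -- not what this study used.

def session_load(minutes: int, rpe: int) -> int:
    """Training load in arbitrary units (Foster's session-RPE method)."""
    return minutes * rpe

# Two runs of similar distance, very different stress:
print(session_load(minutes=90, rpe=3))  # easy long run  -> 270
print(session_load(minutes=70, rpe=8))  # hard tempo run -> 560
```

Two runs of similar distance can produce very different loads – which is exactly the variation a mileage-only measure can’t see.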

What’s the Bottom Line On This Study?

At the end of the day, this study adds to the research base about running volume and injuries. And in that sense, this is a good thing.

But I think this paper is more suited to raising additional questions for research than providing specific, actionable advice to runners.

The fact that their second conclusion flies in the face of all coaching advice – that managing week to week increases in volume is important for mitigating injury risk – should raise huge red flags about whether this is ready for prime time. And the fact that the sample is skewed towards occasional runners also suggests that more seriously committed runners should think twice before drawing any firm conclusions.

That being said, it raises some interesting questions. Whether it’s this dataset or another one, it’s worth digging deeper into the question of single-session increases. And I think there are other ways to quantify chronic workload changes that could lead to more useful conclusions. If future research breaks the population down into more casual and more serious runners, that will only increase its usefulness.

In the meantime, here are three general reminders:

  1. Correlation does not imply causation. If a study shows a statistical relationship but it isn’t based on an actual experiment, think twice before using the conclusions to guide your training.
  2. There’s more than meets the eye. Any time you reduce something to one or two factors, you ignore the complex relationship of other factors.
  3. If a researcher cannot articulate a plausible explanation for their findings … take it with a huge grain of salt. Especially when that research runs counter to traditional wisdom.

What are your thoughts about this research study? Is there anything you wish they’d explore more deeply?
