Every so often, I read a study’s “Methods” section and just get sleepy. It’s not because the methods are boring. I am one of those freaks that gets off on a good research design. Instead, it’s because the study design is so intricate that carrying it out seems like TOO MUCH WORK.
And then, it happens. My eyes start to droop on behalf of the researchers, who must have spent countless hours making it all fit together. Thinking about their effort is like melatonin for my soul. But today I’m going to be talking about a study with a design that is delightfully EXHAUSTING.
The August 2022 study in Medicine & Science in Sports & Exercise, from a research team led by Olli-Pekka Nuuttila, started with a hard question: how can you evaluate the effectiveness of individualized, evolving training programs in a multiple-group control study?
The researchers swung for the fences with 40 participants over a 15-week intervention, with constant monitoring and changes to protocols, and reams of input and output metrics. I am so damn excited to tell you what they found. But first, I must nap on their behalf.
The big problem is that training is not a simple math equation.
A 10-mile run at 145 average heart rate could be a productive aerobic stimulus for an athlete who runs 50 miles per week. Or, if something is even slightly (sometimes imperceptibly) off, from nutrition to stress to Mercury being in retrograde, that same run could land anywhere from neutral to aggressively unproductive.
The problem is exacerbated by higher-stress sessions like workouts and long runs, where athletes pushing their limits might be consistently on the brink of disaster throughout a cycle. Training can sometimes seem like taking steps toward the edge of a cliff, but with a blindfold on. And, oh yeah, the cliff sometimes moves in ways that can seem random.
To consider the variables at play, I love the simple equation: Stress plus rest equals adaptation.
But that equation only works as a thought exercise, since “stress,” “rest,” and “adaptation” are all their own multivariable calculus problems. I can deal with the unknowns that we can quantify – think nervous system responses that can be approximated using heart rate variability (HRV) or heart rate zones that just require a few tests. But what keeps me up at night in coaching are the unknown unknowns – the variables that we aren’t even tracking because there are no ways to track them, the dark energy of the adaptation equation.
The study addressed that complexity head-on in an ambitious design. Twenty men and twenty women participated, all with running backgrounds. Ten of those participants dropped out before the completion of the study due to illness, life circumstances, injuries, or lack of training adherence. The participants were matched into pairs based on sex, endurance performance, and training volume, and the members of each pair were assigned to one of two groups:
- Predefined training (a set training plan)
- Individualized training (an evolving training plan based on recovery data)
Both of the groups underwent 3 different training periods:
- Preparatory: 3 weeks of familiarization with intensity zones and training modes. The preparatory period consisted of low intensity training, with one moderate training session each week, and a 25% reduction of training in the last week to prepare for baseline testing.
- Volume: 6 weeks of increasing low-intensity training, with one moderate sustained training session each week. Participants did 2 loading weeks with 10% increases, followed by a de-loading week with a 25% reduction.
- Interval: 6 weeks with intense workout sessions of 6 x 3 minutes fast/2 minutes easy, plus low intensity running. Participants again did 2 loading weeks with 3 harder workouts per week, followed by a de-loading week with 1 harder workout and a 25% volume reduction.
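The loading and de-loading arithmetic in those periods is easy to sketch. Here’s a toy illustration in Python (not the study’s code; I’m assuming the 25% cut applies to the final loading week’s volume):

```python
def volume_block(start_hours, loading_weeks=2, bump=0.10, deload_cut=0.25):
    """Weekly volumes for one 3-week cycle: loading weeks with 10% bumps,
    then a de-load week cut 25% from the last loading week."""
    weeks = []
    vol = start_hours
    for _ in range(loading_weeks):
        vol = vol * (1 + bump)       # 10% increase each loading week
        weeks.append(round(vol, 2))
    weeks.append(round(vol * (1 - deload_cut), 2))  # 25% de-load
    return weeks
```

Starting from a 6-hour week, that produces two progressively bigger loading weeks followed by a noticeably smaller de-load week.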
Now here is where the study gets really cool.
Both the predefined group and the individualized group started with the same general structure. Twice per week in the individualized group, training levels were increased or decreased based on nocturnal HRV (with a sensitivity of 0.5 standard deviations above or below the 4-week rolling average), subjective evaluation of muscle soreness (greater than 5 on a 1-7 scale), and how heart rate changed based on running speed (greater than 3-4 beats per minute increase at the same running speed). Before and after the interventions, participants underwent blood tests, a 10km time trial, and incremental treadmill tests.
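To make that decision rule concrete, here’s a simplified sketch of how those twice-weekly checks might be combined (hypothetical function and variable names, and a collapsed three-way outcome; the study’s actual protocol moved athletes up or down training levels through a more detailed decision tree):

```python
from statistics import mean, stdev

def training_adjustment(hrv_history, soreness, hr_drift_bpm):
    """Toy twice-weekly adjustment check.

    hrv_history: nightly HRV values for the trailing ~4 weeks (most recent last)
    soreness: subjective muscle soreness on the study's 1-7 scale
    hr_drift_bpm: heart rate change at a fixed running speed vs. baseline
    """
    baseline = mean(hrv_history[:-1])
    sd = stdev(hrv_history[:-1])
    latest = hrv_history[-1]

    hrv_flag = abs(latest - baseline) > 0.5 * sd   # outside the 0.5 SD band
    soreness_flag = soreness > 5                   # unusually sore
    hr_flag = hr_drift_bpm > 3                     # elevated HR at same speed

    flags = sum([hrv_flag, soreness_flag, hr_flag])
    if flags == 0:
        return "increase"   # recovering well: nudge training load up
    elif flags == 1:
        return "maintain"   # one warning sign: hold steady
    else:
        return "decrease"   # multiple warning signs: back off
```

The thresholds (0.5 SD, soreness above 5, 3 bpm of drift) come straight from the study; how the flags combine is my own simplification.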
Let’s take a step back before getting to the results.
What do you think happens to adaptation, as measured by speed and physiological metrics? If I had been pressed before reading the study, I would have guessed that the strongest influence would be on injury rates. And if there was an influence on running fitness, I figured it would be strongly correlated with whichever group had increases in volume/intensity. My reasoning is that studies usually find that more training stimulus equals more fitness. And fatigue is just a rock you have to jump over on the trail to fitness, right?
Wrong! The combination of objective and subjective measures of fatigue seems to solve for some of the unknowns in the adaptation equation. Get ready for some extremely fun results.
To start, there were no significant differences in training between the groups. In fact, the predefined group averaged a non-significant 20 minutes more of weekly training in the Preparatory and Volume periods! There goes my guess that whichever group trained more would have more success. There was also no significant difference in the quantity of intervals, although there was a greater range for the individualized group (indicative of increasing or decreasing training levels based on the recovery metrics). In total, 55% of individualized training athletes maintained training load, 35% increased it, and 10% decreased it.
So training levels didn’t vary a ton between groups. Yeah, some in the individualized group were thirsty buffalos, soaking up the training and adding more. And some were already soggy as hell, decreasing training accordingly. But most just kept with the program, only making minor changes. Maybe outcomes wouldn’t change much either?
Again, wrong! I have read the study multiple times and I still get blindsided by what comes next.
Both groups started in the same place, and both groups improved. That’s good news, because we don’t want the predefined group to get screwed just because of the randomization process. But the individualized group improved their 10k performance more than twice as much (6.2% versus 2.9%). There was a non-significant difference between the groups in the max treadmill test (4% versus 3%).
However, the rate of response tells the most important part of the story. The researchers broke it down into individual response magnitudes to get away from the averages and focus on each data point. I love this step, since every N=1 training experiment feels like the world to the person doing it. We are not data points working from the same genetics. We are individuals with unique physiology, and I wish every study tried analyses like these.
Let’s start with the treadmill test. The next chart summarizes responses into high improvements, moderate improvements, and trivial changes.
The individualized group had zero non-responders! That is staggering in an exercise physiology study. That finding was backed up by the 10k time trial results.
81% were high responders for individualized training! Or to put it another way, 77% of athletes in the predefined training group had a moderate or worse improvement in their 10k times, with just 19% of the individualized training group falling into that category.
Responders v. Non-Responders
I love the responder/non-responder analysis because it gets to the heart of a difficulty in both exercise physiology and in coaching. Some athletes are primed to adapt, whether due to genetics or background. Those athletes can take almost any intervention and progress, their physiologies buffering any excess stress in harder plans, or soaking up the smallest amounts of stress in easier plans. I often feel like those athletes could be coached by a random number generator, at least for the initial adaptation period captured by this study.
Sometimes, those responders can skew the statistical analysis, particularly when it involves averages. Digging into data from studies, you’ll often see a scatterplot where most points are around “little change,” with a few dots at the top of the chart in the positive direction. While there are statistical methods to deal with that and provide a fuller picture, I am most interested in what can make everyone improve, not just the hyper-responders.
This study found that a combination of objective and subjective metrics prevented the non-response and increased the high responses. In other words, tuning into the body’s signals and constantly adjusting based on those signals seems to summarize the unknown variables in the adaptation equation well enough to approximate a solution.
That’s backed up by a 2017 study in the Journal of Physiology, which found that non-responders just needed more training. That study had 5 groups doing 1, 2, 3, 4, or 5 training sessions per week for 6 weeks. The rates of non-response were highest in groups 1, 2, and 3, as expected. Subsequently, those non-responders were given another 2 sessions weekly above their baseline, and all instances of non-response were eliminated. Conversely, in harder training plans, non-response can be driven by excess stress, and reducing training may be necessary to induce adaptations.
There were some really cool differences between the groups.
The athletes doing individualized training took down periods when dictated by their bodies, which led to a less rigid structure for recovery weeks. That could indicate that athletes should save down weeks for when they are needed, rather than pre-programming them.
The athletes doing individualized training also had a greater improvement in interval speed. They were executing their workouts better, likely due to higher rates of under-recovery in the predefined plan group. All athletes should make sure that training is structured so they show up to key sessions feeling good, and ready to improve.
On our podcast last week, we talked to Kilian Jornet about why he does 58% of his training in Zone 1, and he said it was a shift he made a few years ago to support workout quality. Interestingly, he uses similar metrics to guide evaluation of his training readiness, like HRV and muscle soreness.
All athletes need to be homed in on their recovery, adjusting training accordingly. HRV shows really exciting potential to help guide that process. This study used nocturnal HRV, and most wearables are returning reliable data via wrist-based monitoring, with improvements all the time. Instead of using the recovery score from a wearable (which probably isn’t relevant for advanced athletes), consider charting the HRV yourself or using smart algorithms designed for this purpose like HRV4Training by Marco Altini.
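If you chart it yourself, the math is just a rolling mean and a band of plus or minus 0.5 standard deviations (the sensitivity this study used). A minimal sketch, with made-up function names:

```python
from statistics import mean, stdev

def hrv_band(nightly_hrv, window=28):
    """For each night after the first 4 weeks, compute the trailing rolling
    mean, a +/-0.5 SD 'normal' band, and whether the night fell inside it."""
    out = []
    for i in range(window, len(nightly_hrv)):
        trailing = nightly_hrv[i - window:i]
        mu, sd = mean(trailing), stdev(trailing)
        low, high = mu - 0.5 * sd, mu + 0.5 * sd
        out.append({
            "night": i,
            "hrv": nightly_hrv[i],
            "low": low,
            "high": high,
            "in_band": low <= nightly_hrv[i] <= high,
        })
    return out
```

A nightly value falling outside the band is the kind of signal worth pairing with how you actually feel, not an automatic instruction.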
Checking in on heart rate during exercise can be helpful as well. An unexplained increase in heart rate relative to baseline may indicate a need to dial back, or that your headphones started playing Ludacris, indicating a need to throw some ‘bows.
The study reevaluated training 2 times per week, which may be a good timeline to consider rather than making daily changes. Over time in coaching, I have learned to be a bit less responsive to daily fluctuations in fatigue unless they are accompanied by sickness or major life stress, since sometimes it can be hard to separate the signal from the noise.
Finally, pay attention to subjective feelings of fatigue and readiness. No matter how much tech we have, the adaptation equation will probably always remain partially unsolved. We just don’t have access to all the input variables we would need. But each of us does have access to a supercomputer that can help–and it’s between our ears.
The brain incorporates hundreds of variables to answer a simple question: how do you feel today? Listen to the answer, and try to create an objective system to analyze that data. If you have a coach, tell them how you feel every day (and make sure your coach doesn’t just give you a predefined plan). If you are self-coached, record how you feel and track patterns and how they evolve in response to different stimuli.
I think this study shows that every athlete can be a training responder. And we all have a supercomputer that makes it possible–it might just need some programming.
David Roche partners with runners of all abilities through his coaching service, Some Work, All Play. With Megan Roche, M.D., he hosts the Some Work, All Play podcast on running (and other things), and they answer training questions in a bonus podcast and newsletter on their Patreon page starting at $5 a month.