Once we have this, we can begin building our Banister model to come up with the best fit of load and performance.

Here we'll be adjusting the CTL and ATL constants to give us the best fit, for that athlete, between training load and performance.

We start by initializing CTL and ATL lists, each with a starting value.

Then we roll through each day's TSS, calculating CTL and ATL for that day, and we have the solver find the CTL and ATL constants that best fit (params[3] and params[4] respectively).

The model comes up with a prediction (Banister_Prediction) and the error between the prediction and the actual performance is added to the list of losses.

The average of all the losses is calculated and returned.

The solver then tries different k1, k2, CTL constants and ATL constants until it comes upon the best solution (with the lowest error).
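The steps above can be sketched in code. This is a minimal illustration, not the notebook's exact code: it assumes a mean-absolute-error loss, the classic impulse-response form p = p0 + k1*CTL - k2*ATL, a parameter ordering of [p0, k1, k2, CTL constant, ATL constant], and scipy's L-BFGS-B solver; the data here is synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def banister_loss(params, tss, test_days, test_perf):
    """Average error between Banister predictions and measured performances."""
    p0, k1, k2, ctl_const, atl_const = params
    ctl = atl = 0.0
    ctls, atls = [], []
    for load in tss:                      # roll through each day's TSS
        ctl += (load - ctl) / ctl_const   # "fitness" (chronic training load)
        atl += (load - atl) / atl_const   # "fatigue" (acute training load)
        ctls.append(ctl)
        atls.append(atl)
    losses = []
    for day, actual in zip(test_days, test_perf):
        banister_prediction = p0 + k1 * ctls[day] - k2 * atls[day]
        losses.append(abs(banister_prediction - actual))
    return float(np.mean(losses))         # average of all the losses

# Toy data: 120 days of load, a performance test every 2 weeks
rng = np.random.default_rng(1)
tss = rng.uniform(30, 130, size=120)
test_days = list(range(13, 120, 14))

# Generate "actual" performances from known parameters plus noise,
# so we have something for the solver to recover
def _simulate(params):
    p0, k1, k2, c, a = params
    ctl = atl = 0.0
    out = []
    for i, load in enumerate(tss):
        ctl += (load - ctl) / c
        atl += (load - atl) / a
        if i in set(test_days):
            out.append(p0 + k1 * ctl - k2 * atl)
    return out

test_perf = [p + rng.normal(0, 0.5) for p in _simulate([48.0, 0.25, 0.35, 42.0, 7.0])]

# The solver tries different p0, k1, k2, CTL and ATL constants
# until it finds the combination with the lowest error
x0 = [45.0, 0.2, 0.2, 45.0, 7.0]
res = minimize(banister_loss, x0, args=(tss, test_days, test_perf),
               method='L-BFGS-B',
               bounds=[(20, 80), (0, 3), (0, 3), (20, 60), (3, 14)])
p0, k1, k2, ctl_const, atl_const = res.x  # the fitted values
```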

These values are then returned in the list at the bottom of the pic.

These values represent the best model fit of load and performance for that athlete and can then be used to update the values in TP so that the PMC actually matches the athlete's performance.

/fin

Was notified that I may have pulled up a little early in translating the model fit to actual performance. So, here's a little more 😊

Once you have the fitted model parameters, you can update the CTL and ATL constants from the TrainingPeaks default (45, 7).

This has a dramatic effect on how well the model matches reality!

In the #Cycling #PerformanceModel above, you'll see how the default (45,7) parameters compare with our updated parameters against actual numbers from his aerobic tests.

With the low fatigue (ATL) constant used as the default, the model predicts performance swinging around much more than it does in real life.

Sticking with the defaults rather than tailoring the numbers to the athlete can therefore often lead to a situation where the athlete feels the PMC doesn't match reality.

Often, though, the real issue is that the PMC hasn't been properly 'fit' to the athlete.

When we update the CTL and ATL constants to match reality & we add the k1 and k2 coefficients back into the equation, we have a pretty decent starting point for an individualized performance model.
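In code, that individualized equation is just one line. The numbers below are made up purely for illustration, not fitted values:

```python
def predict_performance(p0, k1, k2, ctl, atl):
    # Banister impulse-response form:
    # performance = baseline + fitness benefit - fatigue cost
    return p0 + k1 * ctl - k2 * atl

# e.g. with made-up values p0=50, k1=0.2, k2=0.3, CTL=80, ATL=60
print(predict_performance(50, 0.2, 0.3, 80, 60))  # -> 48.0
```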

/fin (for real this time 😊 )

For those interested in calibrating your own PMC to your actual TrainingPeaks data, I put together a Colab notebook.

All you need to do is..

- Save a copy of the Colab notebook with all the #Python #Code

- Export your workouts.csv from TrainingPeaks

- Upload it to the little file icon on the left of your Colab notebook

- Hit the play button!

https://colab.research.google.com/drive/12urv907Fy-Vc1sA7AliHTLTcAHpwYhYt?usp=sharing


That's it!

The #code will spit out your actual CTL & ATL constants that you can then plug into your PMC

along with individualized
- k1
- k2
- P0

that you can plug back into the #VO2max calc, allowing you to predict actual performance level given a CTL/ATL.

@alan_couzens Thanks for sharing. That's helpful. At one point I contemplated rolling the code for it, and then ... didn't.

@alan_couzens
Many thanks Alan. I really appreciate you taking the time to put that code up.

I did find a small hurdle. I had to format the Workout Day column in the Excel file (I chose the 14/03/2012 option).

@Rohan Thanks Rohan. That's interesting. I didn't have to do that with my csv download. Hmm. I wonder if Pandas only likes U.S. dates out of the box 🤔
@alan_couzens @Rohan could seem that way. On the command 'Add missing dates' I get: TypeError: value should be a 'Timestamp' or 'NaT'. Got 'int' instead.

@eoutzen @Rohan

Thanks Emil. Weird that it's coming across as an integer 🤔

If Rohan's reformatting the columns doesn't work, you could add the following after the data=pd.read_csv line and it should work

data['WorkoutDay'] = data['WorkoutDay'].astype(str)

data['WorkoutDay'] = pd.to_datetime(data['WorkoutDay'], format='%d/%m/%Y')

Assuming the dates in your csv are in d/m/Y format, this will convert that column to the datetime type pandas is looking for.
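Another option, if reformatting in Excel is a hassle: pandas can parse day-first dates directly with `dayfirst=True`. A sketch with a toy stand-in for the export (assuming the dates really are day-first):

```python
import pandas as pd

# Toy stand-in for the TrainingPeaks export (column name as in the thread)
data = pd.DataFrame({'WorkoutDay': ['14/03/2012', '01/04/2012']})

# dayfirst=True handles d/m/Y without an explicit format string;
# errors='coerce' turns anything unparseable into NaT instead of raising
data['WorkoutDay'] = pd.to_datetime(data['WorkoutDay'], dayfirst=True,
                                    errors='coerce')
print(data['WorkoutDay'].dt.month.tolist())  # -> [3, 4]
```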

@eoutzen @Rohan Thanks guys. Missing dates error should be fixed now.
@alan_couzens @eoutzen @Rohan thanks for posting...just cut and paste from original link?
@alan_couzens @Rohan yes, does indeed run. But I seem to get a corner solution equal to the upper bound on the initial P0 guess. Is that typical?

@eoutzen @Rohan No, I wouldn't say that's typical. If I run it for my team, I get P0's spanning the entirety of the range.

Much will come down to how much variability there is in the fitness data given.

Also, for athletes with a long training history, it's possible that you may want to adjust P0 to a higher bound to account for a higher 'residual' VO2max.

@alan_couzens @eoutzen

Is 12 months of data enough?

Working with 12 months of data I'm finding that P0, the CTL constant and the ATL constant typically come out at whatever the initial guesses were. With a low initial guess for P0, k1 comes out high and with a high initial guess of P0 it comes out low. With an educated guess I get k1 = 0.3. k2 always comes out at 0.0.

@Rohan @eoutzen Yes, 12mo is generally a good balance between having enough data and having the data representative of your current physiology.

With k2 coming out to 0, it tells me that the model isn't seeing your performance swing very much with load and rest. This could be due to your physiology or, more likely, due to there not being a lot of variability in the data. In this way, all of the performance variability gets thrown onto k1 and CTL, leading to it returning the initial guesses.

@alan_couzens how far back is recommended in terms of exporting workouts? 3 months, 6 months?
@Trisportsdoc Somewhere in that range, depending how much data you have. Shorter has the advantage of being more "current" to your physiology but the disadvantage of having fewer data points.
@alan_couzens
I struck another problem, unrelated to your code. TrainingPeaks doesn't have my FTP history, hence the TSS data is nonsense. If there isn't a way to correct it I guess I'll try writing a Golden Cheetah version.
@Rohan If it doesn't vary too much, you can update it manually for previous dates in the PMC on the "Dashboard" tab. Or you could just update the TSS values programmatically in the data frame if you know when FTP changed.
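For the programmatic route: since IF = NP/FTP appears squared in the TSS formula, TSS scales with 1/FTP², so past rides can be rescaled once you know which FTP was actually in effect. A sketch with made-up dates, FTP numbers and the column names from the thread:

```python
import pandas as pd

# Toy workout export (column names as in the thread, values made up)
data = pd.DataFrame({
    'WorkoutDay': pd.to_datetime(['2012-03-01', '2012-08-01']),
    'TSS': [100.0, 100.0],
})

# Hypothetical FTP history: the FTP in effect from each date onward,
# vs the single FTP TrainingPeaks used to score every ride
ftp_history = pd.DataFrame({
    'date': pd.to_datetime(['2012-01-01', '2012-07-01']),
    'ftp': [250, 265],
}).sort_values('date')
recorded_ftp = 250

# merge_asof picks the most recent FTP on or before each workout date
data = pd.merge_asof(data.sort_values('WorkoutDay'), ftp_history,
                     left_on='WorkoutDay', right_on='date')

# TSS is proportional to 1/FTP^2, so rescale rides scored with the wrong FTP
data['TSS'] = data['TSS'] * (recorded_ftp / data['ftp']) ** 2
# first ride unchanged, second ride scaled down for the higher FTP
```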

@alan_couzens

Unfortunately, it looks like that is a premium feature.