Over the past few months I have noticed a significant difference (usually 10+ watts) between my predicted and actual 20-minute power when the prediction comes from a 3- and 10-minute critical power test. Put another way, there is a gap between my predicted and actual 10-minute power when the prediction uses a 3- and 20-minute test. My actual 10-minute power always leads to a predicted 20-minute power that is unsustainable.
Example:
3-min: 450 watts
10-min: 390 watts (predicted from 3- and 20-min tests: 378)
20-min: 360 watts (predicted from 3- and 10-min tests: 380)
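For what it's worth, the mismatch can be reproduced from the standard linear two-parameter critical power model (total work = CP·t + W'). Here is a quick sketch using the numbers above; the predictions come out near the posted 378/380, with the small differences presumably down to rounding in the original calculations:

```python
def cp_model(p1, t1, p2, t2):
    # Two-parameter critical power model: work = CP * t + W'
    # Fit CP and W' from two maximal efforts (power in watts, time in seconds)
    w1, w2 = p1 * t1, p2 * t2          # total work for each test (joules)
    cp = (w2 - w1) / (t2 - t1)         # critical power (watts)
    w_prime = w1 - cp * t1             # anaerobic work capacity W' (joules)
    return cp, w_prime

def predict_power(cp, w_prime, t):
    # Predicted mean power for a maximal effort of duration t seconds
    return cp + w_prime / t

# Numbers from the post
cp_a, wp_a = cp_model(450, 180, 390, 600)   # fit from 3-min and 10-min tests
cp_b, wp_b = cp_model(450, 180, 360, 1200)  # fit from 3-min and 20-min tests

print(round(predict_power(cp_a, wp_a, 1200)))  # predicted 20-min power -> 377
print(round(predict_power(cp_b, wp_b, 600)))   # predicted 10-min power -> 376
```

Both fits overpredict relative to the actual 20-minute result (360 watts), which matches the pattern described above.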
My theory is that the amount of standing up and sprinting I do towards the end of the tests affects the results. In both the 10- and 20-minute tests I stand up more towards the end as I fatigue, and I finish each test with 20 or 30 seconds of all-out sprinting. I suspect this inflates the average of the shorter test far more and throws off the critical power calculations. Anyone have a better idea?