Garmin Vectors Finally Released



Originally Posted by sitzmark:

If awards in competition were to be given out like awards for power lifting, then quantitative accuracy needs to be spot on. Since cycling competition revolves around distance vs. time, accuracy of power output is of no real (competitive) value. ... a nice-to-have, but not a requirement for what power meters are used for. IMO
That might be the case for what you use power meters for, but it's not the case for all uses of power meter data, nor for everyone.

As Robert points out, there are some applications for power data where a high degree of accuracy matters and where substantial performance improvements can be gained from such actionable intelligence.
 
Originally Posted by Alex Simmons:


That might be the case for what you use power meters for, but it's not the case for all uses of power meter data, nor for everyone.

As Robert points out, there are some applications for power data where a high degree of accuracy matters and where substantial performance improvements can be gained from such actionable intelligence.
True, I stated my application is very basic. However, I still cannot see applications where accuracy is more important than precision (sensitivity).

Like a speedometer, as long as the degree of precision is high, I can deal with inaccuracy. If there is an inherent 10 mph bias, but high precision, then a correction factor applied to the result gives me both high accuracy and high precision.

Conversely, if precision is bad then (instantaneous) accuracy is equally bad.
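The correction-factor idea can be sketched in a few lines. This is the best case being described, a perfectly precise instrument with a constant, known bias; the numbers are made up:

```python
# Best case for post-hoc correction: a precise instrument with a
# constant, known bias. Hypothetical speeds in mph; bias of +10 mph.
true_speeds = [20.0, 25.0, 18.0, 30.0]
bias = 10.0

readings = [s + bias for s in true_speeds]    # precise but inaccurate
corrected = [r - bias for r in readings]      # accurate once corrected

assert corrected == true_speeds
print(corrected)
```

The subtraction step is only available when the bias really is constant and known, which is the crux of the disagreement that follows.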
 
Originally Posted by sitzmark:


True, I stated my application is very basic. However, I still cannot see applications where accuracy is more important than precision (sensitivity).

Like a speedometer, as long as the degree of precision is high, I can deal with inaccuracy. If there is an inherent 10 mph bias, but high precision, then a correction factor applied to the result gives me both high accuracy and high precision.

Conversely, if precision is bad then (instantaneous) accuracy is equally bad.
Well, when power meters are inaccurate they're rarely so reliably inaccurate in a way that's easily correctable like that. When speedometers and odometers are off, they're typically off by a fixed percentage (like, 10%) or else by a fixed percentage plus an offset. Power meters are actually far more complex than a speedometer or odometer or, as Tolstoy might have said, "all accurate power meters are alike. Each inaccurate power meter is inaccurate in its own way."
 
Originally Posted by RChung:


Well, when power meters are inaccurate they're rarely so reliably inaccurate in a way that's easily correctable like that. When speedometers and odometers are off, they're typically off by a fixed percentage (like, 10%) or else by a fixed percentage plus an offset. Power meters are actually far more complex than a speedometer or odometer or, as Tolstoy might have said, "all accurate power meters are alike. Each inaccurate power meter is inaccurate in its own way."
But once the individual bias has been identified, as long as the meter is precise, it should be possible to dial in accuracy - en route if the reporting device allows adjustment, or post-ride through data manipulation. No? I suspect contributing factors of inaccuracy extend beyond the meter itself - crank flex, ring flex, pedal and spindle flex, etc.
 
RChung said:
Well, when power meters are inaccurate they're rarely so reliably inaccurate in a way that's easily correctable like that. When speedometers and odometers are off, they're typically off by a fixed percentage (like, 10%) or else by a fixed percentage plus an offset. Power meters are actually far more complex than a speedometer or odometer or, as Tolstoy might have said, "all accurate power meters are alike. Each inaccurate power meter is inaccurate in its own way."
Not having read the paper yet, have you done an error analysis to see how error or uncertainty propagate through your indirect CdA measurement?
 
Originally Posted by sitzmark:

But once the individual bias has been identified, as long as the meter is precise, it should be possible to dial in accuracy - en route if the reporting device allows adjustment, or post-ride through data manipulation. No? I suspect contributing factors of inaccuracy extend beyond the meter itself - crank flex, ring flex, pedal and spindle flex, etc.
Are you thinking the bias is constant or maybe linear? Typically, it's not -- or rather, some components of the error are fixed and others aren't. That makes post hoc adjustment difficult, especially since we now have many derived measures that depend on the data stream, not just mean values. A simple example is NP (Normalized Power), which depends on the entirety of the data stream. If you know that the overall mean power is off by X watts, as in your example, you can easily adjust the mean, but adjusting the data so both mean power and NP are correct isn't easy. Which, again, is not to say that absolute accuracy is necessary for every application for every rider. Training, for example, is among the least demanding uses for a power meter. However, when you do need accuracy it's not as easily fixed as you seem to think.
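A quick numerical illustration of the NP point. The ride data below are synthetic, and `normalized_power` follows the usual published recipe (30-second rolling average, raise to the fourth power, take the mean, then the fourth root):

```python
def normalized_power(watts, window=30):
    # 30-s rolling mean -> 4th power -> mean -> 4th root
    rolled = [sum(watts[i:i + window]) / window
              for i in range(len(watts) - window + 1)]
    return (sum(r ** 4 for r in rolled) / len(rolled)) ** 0.25

# Synthetic variable ride: alternating 60-s blocks at 300 W and 150 W, 1 Hz.
ride = ([300.0] * 60 + [150.0] * 60) * 10

bias = 10.0                        # suppose the meter reads 10 W high
biased = [w + bias for w in ride]

mean_shift = sum(biased) / len(biased) - sum(ride) / len(ride)
np_shift = normalized_power(biased) - normalized_power(ride)
print(mean_shift)   # exactly the 10 W bias
print(np_shift)     # less than 10 W: subtracting 10 W fixes the mean, not NP
```

For any variable ride a constant offset shifts NP by less than the offset (the fourth-power mean isn't additive), so no single subtraction can make both the mean and NP correct at once.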

Originally Posted by alienator:


Not having read the paper yet, have you done an error analysis to see how error or uncertainty propagate through your indirect CdA measurement?
For some sources of error, yes, though not in the version that's on the web. That paper is already pretty long.
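For a flavor of what such an error analysis involves, here is a toy sensitivity check — emphatically not the analysis from the paper. It uses a steady-state, flat-road, no-wind model with assumed values for mass, Crr, air density, and speed, and shows how a power error amplifies in the back-calculated CdA:

```python
m, g = 80.0, 9.81                 # assumed rider+bike mass (kg), gravity
crr, rho, v = 0.005, 1.2, 10.0    # assumed Crr, air density, speed (m/s)

def cda_from_power(power):
    # Steady state, flat, no wind: P = Crr*m*g*v + 0.5*rho*CdA*v^3
    return (power - crr * m * g * v) / (0.5 * rho * v ** 3)

p_true = 250.0
for err in (-0.02, 0.0, 0.02):    # +/- 2% power error
    p = p_true * (1 + err)
    print(f"{err:+.0%} power -> CdA = {cda_from_power(p):.4f} m^2")
```

Because the rolling-resistance term is subtracted off before dividing, a 2% power error comes through as a somewhat larger relative CdA error; the smaller the aero share of total power, the worse the amplification.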
 
Originally Posted by RChung:

Are you thinking the bias is constant or maybe linear? Typically, it's not -- or rather, some components of the error are fixed and others aren't. That makes post hoc adjustment difficult, especially since we now have many derived measures that depend on the data stream, not just mean values. A simple example is NP (Normalized Power), which depends on the entirety of the data stream. If you know that the overall mean power is off by X watts, as in your example, you can easily adjust the mean, but adjusting the data so both mean power and NP are correct isn't easy. Which, again, is not to say that absolute accuracy is necessary for every application for every rider. Training, for example, is among the least demanding uses for a power meter. However, when you do need accuracy it's not as easily fixed as you seem to think.
We see similar inaccuracies in hospital diagnostic testing equipment - both linear and exponential - stemming from complex sources of variation. Through sufficient data collection we account for those variations (coefficient of variation, CV) and develop predictive data sets to deal with and factor out the accuracy issues (to whatever acceptable deviation and sensitivity is required), or we don't have a viable product.

In use, whatever accuracy the instrument has may (or may not) be correlated to an existing array of data collection devices (instruments from competing manufacturers) upon initial laboratory placement. From then on, accuracy and precision are constantly monitored and adjusted as required/appropriate.

For multi-site locations where physicians need a comparative test result, it is critical that a correlation factor be devised for each instrument so all instruments in the array generate an "accurate" result that attending physicians can interpret against a fixed scale. In some cases the correlation factor ends up shifting the result to be "inaccurate" because the organization wants the results to correlate with an outside laboratory that uses a different array of instrumentation.

I'm not suggesting the correlation is easy – just possible once a "standard" has been identified.

(In theory this is what CycleOps has done with the PowerCal. Success is debatable.)
 
Originally Posted by sitzmark:


We see similar inaccuracies in hospital diagnostic testing equipment - both linear and exponential - stemming from complex sources of variation. Through sufficient data collection we account for those variations (coefficient of variation, CV) and develop predictive data sets to deal with and factor out the accuracy issues (to whatever acceptable deviation and sensitivity is required), or we don't have a viable product.

In use, whatever accuracy the instrument has may (or may not) be correlated to an existing array of data collection devices (instruments from competing manufacturers) upon initial laboratory placement. From then on, accuracy and precision are constantly monitored and adjusted as required/appropriate.

For multi-site locations where physicians need a comparative test result, it is critical that a correlation factor be devised for each instrument so all instruments in the array generate an "accurate" result that attending physicians can interpret against a fixed scale. In some cases the correlation factor ends up shifting the result to be "inaccurate" because the organization wants the results to correlate with an outside laboratory that uses a different array of instrumentation.

I'm not suggesting the correlation is easy – just possible once a "standard" has been identified.

Ah, so you're familiar with the problem. Excellent. The issues are similar though power meters operate in a far less controlled environment, unit-to-unit variance can be large, and even for a given rider, his or her power production can vary from ride to ride. The statistical problems are really interesting (as an aside, that's why almost all the really interesting statistical advances since WWII have come in fields where you can't do controlled experiments). One of the reasons I use a version of virtual elevation to compare accuracy across meters is because when you place two power meters on the same bike you know they were facing the same terrain, mass, wind, temperature change, crank length, etc., so you can control those things and see how the power meters differ according to pedal force, pedal speed, chain ring and cog (and thus chain speed and chain tension) and so on. Once again, the issue isn't so much point estimates (as many diagnostic tests require) but the integrity of the entirety of the data stream. The question isn't so much "how close are two power meters on average?"; the question is "under which conditions do they differ, and by how much?"
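To make the virtual-elevation idea concrete, here is a bare-bones sketch (the constants are placeholders, and this omits wind and many refinements of the real method): each power/speed sample implies a road slope, and integrating those slopes yields an elevation profile. Run the same function on both meters' streams from one ride and the profiles diverge exactly where, and under which conditions, the meters disagree.

```python
m, g = 85.0, 9.81                   # assumed total mass (kg), gravity
crr, rho, cda = 0.005, 1.2, 0.35    # assumed Crr, air density, CdA

def virtual_elevation(power, speed, dt=1.0):
    """Integrate the road slope implied by each (power, speed) sample."""
    elev, profile = 0.0, []
    for i, (p, v) in enumerate(zip(power, speed)):
        accel = (speed[i] - speed[i - 1]) / dt if i else 0.0
        slope = (p / (m * g * v) - crr
                 - 0.5 * rho * cda * v ** 2 / (m * g) - accel / g)
        elev += slope * v * dt
        profile.append(elev)
    return profile

# Flat-road check: power exactly balancing drag keeps the profile at zero,
# while a stream that reads 10 W high produces a phantom climb.
v0 = 10.0
p_flat = crr * m * g * v0 + 0.5 * rho * cda * v0 ** 3
flat = virtual_elevation([p_flat] * 600, [v0] * 600)
high = virtual_elevation([p_flat + 10.0] * 600, [v0] * 600)
print(flat[-1], high[-1])
```

The point of the two-meters-on-one-bike setup is that terrain, mass, wind, and rider behavior cancel, so any divergence between the two profiles is attributable to the meters themselves.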
 
Originally Posted by RChung:


Ah, so you're familiar with the problem. Excellent. The issues are similar though power meters operate in a far less controlled environment, unit-to-unit variance can be large, and even for a given rider, his or her power production can vary from ride to ride. The statistical problems are really interesting (as an aside, that's why almost all the really interesting statistical advances since WWII have come in fields where you can't do controlled experiments). One of the reasons I use a version of virtual elevation to compare accuracy across meters is because when you place two power meters on the same bike you know they were facing the same terrain, mass, wind, temperature change, crank length, etc., so you can control those things and see how the power meters differ according to pedal force, pedal speed, chain ring and cog (and thus chain speed and chain tension) and so on. Once again, the issue isn't so much point estimates (as many diagnostic tests require) but the integrity of the entirety of the data stream. The question isn't so much "how close are two power meters on average?"; the question is "under which conditions do they differ, and by how much?"
Is that not just the human factor... a variable you will never be able to dial down?

You can focus on all the other variables until the cows come home, but at the end of the day the biggest variable between PMs is not the PMs themselves but the variability in humans. You can't control that, therefore results will differ.

Maybe I'm missing something here..

Paul
 
Originally Posted by fluro2au:

Is that not just the human factor... a variable you will never be able to dial down?

You can focus on all the other variables until the cows come home, but at the end of the day the biggest variable between PMs is not the PMs themselves but the variability in humans. You can't control that, therefore results will differ.

Maybe I'm missing something here..

Paul
If you put two power meters on the same bike, not only are environmental conditions (like slope, temperature, wind, total mass) the same but also the rider's choice of standing or sitting, the gears chosen, whether to go easy or to go hard, and whether to spin fast or push hard. So we can look for the differences in how each power meter reports according to differences in all of those variables. However, I was emphasizing variables like pedal force and pedal speed because those are usually the ones of most interest: they're the ones that (historically) have shown the greatest differences between power meters. Two riders may have different riding styles, but if you can nail down a difference to say "this power meter differs from the other at low cadence and high torque", that can be useful information for both riders.
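One hedged sketch of that kind of comparison (the numbers are synthetic, and a real analysis would also bin by torque, gear, and so on): average the per-sample difference between the two meters within cadence bins, so a disagreement that only shows up at low cadence is visible instead of being averaged away.

```python
from collections import defaultdict

def diff_by_cadence_bin(power_a, power_b, cadence, width=10):
    """Mean power difference (A - B) within each cadence bin."""
    sums = defaultdict(lambda: [0.0, 0])
    for pa, pb, c in zip(power_a, power_b, cadence):
        b = int(c // width) * width       # bin start, e.g. 60 for 60-69 rpm
        sums[b][0] += pa - pb
        sums[b][1] += 1
    return {b: s / n for b, (s, n) in sorted(sums.items())}

# Made-up samples where meter B reads low only at low cadence:
cad = [60, 65, 90, 95, 92]
pa  = [250.0, 255.0, 250.0, 252.0, 251.0]
pb  = [240.0, 244.0, 250.0, 252.0, 251.0]
print(diff_by_cadence_bin(pa, pb, cad))   # {60: 10.5, 90: 0.0}
```

Here the overall mean difference (4.2 W) hides the real story: a 10+ W gap confined to the low-cadence bin.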
 
Aside from regular riding, I'd like comparative tests to include specific examples of things like:
- maximal effort standing starts
- other maximal accelerations
- what happens when you stop pedalling
- and start again
- steady-state, non-stop pedalling outdoors and indoors on a trainer (e.g. to see if there's any relative drift)
- climbing
- very low and very high cadences

Unfortunately some differences in reported power are not confined to the power meter alone and are influenced by choice of cycle-computer and its settings.
 
Yes, we discussed several of these when setting up the protocols for Ray's multi-PM test rides. Of your list the only thing we didn't do was max effort standing starts.
 
...and the price drops continue.

Today it is Power2Max selling the FSA version for $900. Of course one would need a compatible bike frame and be content with FSA crank arms, though one can use whatever rings in 130 or 110 BCD, but this is continued good news for consumers. P2M seems to be a serious player and seems to be interested in improving their product as well.
 
Felt_Rider said:
...and the price drops continue. Today it is Power2Max selling the FSA version for $900. Of course one would need a compatible bike frame and be content with FSA crank arms, though one can use whatever rings in 130 or 110 BCD, but this is continued good news for consumers. P2M seems to be a serious player and seems to be interested in improving their product as well.
I don't see SRM reducing prices, but maybe SRAM... er... Quarq will cave and lower theirs.
 
Originally Posted by Felt_Rider:
...and the price drops continue.

Today it is Power2Max selling the FSA version for $900. Of course one would need a compatible bike frame and be content with FSA crank arms, though one can use whatever rings in 130 or 110 BCD, but this is continued good news for consumers. P2M seems to be a serious player and seems to be interested in improving their product as well.


That is a very nice price, however I can't seem to expunge the memory of my last FSA crank on a Mega Exo BB, which managed to loosen itself by the time I had ridden the 1/4 mile home from the bike shop (same mechanic who installed my Ultra Torque BB without it clicking). I also have a friend who always rides with a tool as he tends to experience the same on his FSA unit.

Does anyone know if they had a bad product generation, or did we just have an unfortunate concurrence of circumstance and possibly catch our respective mechanics at the end of shift on a Friday? The FSA loosening issue seems to have reached many knitting circles.
 
danfoz said:
That is a very nice price, however I can't seem to expunge the memory of my last FSA crank on a Mega Exo BB, which managed to loosen itself by the time I had ridden the 1/4 mile home from the bike shop (same mechanic who installed my Ultra Torque BB without it clicking). I also have a friend who always rides with a tool as he tends to experience the same on his FSA unit. Does anyone know if they had a bad product generation, or did we just have an unfortunate concurrence of circumstance and possibly catch our respective mechanics at the end of shift on a Friday? The FSA loosening issue seems to have reached many knitting circles.
FSA says they fixed such issues, but whether the users of FSA cranks agree is something I don't know. A pedal thread insert failed on an FSA crankset I had years ago, so I'm suspicious of their quality control and reliability, too.