Data (was PowerCranks Study)



Originally posted by Andy Coggan
"Robert Chung" <[email protected]> wrote in message news:[email protected]...
> Andy Coggan wrote:
> > As well, he can be one of the more oblique posters...
>
> Oblique? Moi?
>
> My point of view is that statistical validity isn't like a light switch with on and off positions:
> research findings aren't necessarily invalid if the significance level is 94%, nor do they
> necessarily become valid when the significance level reaches 96%. Research validity has many more
> dimensions than statistical validity. There are *so many* ways studies can go bad even if a
> p-value looks good.

That's what I thought you meant (and I concur). Unfortunately, there are those (typically not
work-a-day scientists) who live and die by the p-value.

Andy Coggan

And so much research finds significant differences that are not very meaningful, particularly in the world of sports and exercise science!
 
Andy Coggan wrote:
> That's what I thought you meant

You got that much from an "oh dear?"
 
"2LAP" <[email protected]> wrote in message news:[email protected]...
> Andy Coggan wrote:
> > "Robert Chung" <[email protected]> wrote in message news:3f8bfb5d$0$[email protected]:[email protected]...
> > > Andy Coggan wrote:
> > > > As well, he can be one of the more oblique posters...
> > >
> > > Oblique? Moi?
> > >
> > > My point of view is that statistical validity isn't like a light switch with on and off
> > > positions: research findings aren't necessarily invalid if the significance level is 94%, nor
> > > do they necessarily become valid when the significance level reaches 96%. Research validity
> > > has many more dimensions than statistical validity. There are *so many* ways studies
> > > can go bad even if a p-value looks good.
> > That's what I thought you meant (and I concur). Unfortunately, there are those (typically not
> > work-a-day scientists) who live and die by the p-value. Andy Coggan
>
> And so much research finds significant differences that are not very meaningful, particularly in
> the world of sports and exercise science!

Actually, the exact opposite is true: most studies in the world of sports and exercise science are
underpowered to detect differences (in performance) that are relevant to athletes. To state it
another way: if differences in performance of the magnitude that separate medaling from not medaling
at, say, the Olympics were what you used to power your study, you'd likely need far more subjects
than are typically used (if you could afford to, that is).

Andy Coggan
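
Coggan's power argument can be made concrete with a rough back-of-the-envelope sketch using the
standard normal-approximation sample-size formula for a two-sample comparison. The 2% between-subject
SD in performance is an illustrative assumption (not a figure from this thread), and `n_per_group` is
a hypothetical helper:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta_pct, sd_pct, alpha=0.05, power=0.8):
    """Approximate subjects per group for a two-sample comparison
    (normal approximation) to detect a mean difference of delta_pct
    given a between-subject SD of sd_pct, at the given alpha/power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_beta = z.inv_cdf(power)            # power term
    return ceil(2 * ((z_alpha + z_beta) * sd_pct / delta_pct) ** 2)

# Detecting the ~0.5% difference that separates medalists from
# non-medalists, against an assumed ~2% SD in performance, takes
# on the order of 250 subjects per group...
print(n_per_group(0.5, 2.0))
# ...while a 5% difference (the size many lab studies chase) takes
# only a handful.
print(n_per_group(5.0, 2.0))
```

The quadratic dependence on sd/delta is the whole story: halving the detectable difference quadruples
the required sample, which is why studies powered for Olympic-relevant margins are rarely affordable.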
 
I guess Racer X is a researcher with an infinite budget and supply of elite riders, which is why he
has such outstanding data...uh, wait a second, I know of zero studies published by Racer X.

 
Originally posted by Andy Coggan
Actually, the exact opposite is true: most studies in the world of sports and exercise science are
underpowered to detect differences (in performance) that are relevant to athletes. To state it
another way: if differences in performance of the magnitude that separate medaling from not medaling
at, say, the Olympics were what you used to power your study, you'd likely need far more subjects
than are typically used (if you could afford to, that is).

Andy Coggan
I'm not so sure; most studies have small numbers because of the magnitude of the changes that occur during them, and these studies often have little relevance to athletes, let alone athletes able to 'medal at the Olympics'.

I would argue that 'the sports world' gets very little 'value for time/money' from research that is aimed at sports performance (i.e. lots of papers/money/time with very little impact on performance).
 
Originally posted by Robert Chung
Andy Coggan wrote:
> As well, he can be one of the more oblique posters...

Oblique? Moi?

My point of view is that statistical validity isn't like a light switch with on and off positions:
research findings aren't necessarily invalid if the significance level is 94%, nor do they
necessarily become valid when the significance level reaches 96%. Research validity has many more
dimensions than statistical validity. There are *so many* ways studies can go bad even if a
p-value looks good.
While I agree with this, there does need to be a cut-off point (if 94% is fine, what about 92% or 80%?). People also need to consider the meaningfulness of their studies: what is the point of demonstrating that a variable can change by 1% when, in reality, such a small change has little relevance or importance beyond getting a publication?
 
Tell me about it! I've bailed on far more projects than I've completed largely due to $0.

> As the saying goes, you get what you pay for, i.e., there is little or no funding for the types of
> research that you seem to have in mind. If there were, there'd be a lot more bright exercise
> scientists studying sports performance, rather than chasing NIH dollars by studying the
> health-related benefits of exercise.
>
> Andy Coggan
 
In article <[email protected]>, "Robert Chung" <[email protected]> wrote:

> 2LAP wrote:
> >
> > While I agree with this, there does need to be a cut-off point (if 94% is fine, what about
> > 92% or 80%?).
>
> Do you cook?

Well, that answer certainly was framed in a way that I can relate to. Nice one, Robert. Sign me:
Chef How-R-Dee

--
tanx, Howard

"We've reached a higher spiritual plane, that is so high I can't explain We tell jokes to make you
laugh, we play sports so we don't get fat..." The Dictators

remove YOUR SHOES to reply, ok?
 
2LAP wrote:
>
> While I agree with this, there does need to be a cut-off point (if 94% is fine, what about 92%
> or 80%?).

Do you cook? Some people cook from the recipes in a cookbook. They follow the recipes exactly and
never taste the food while they're cooking. If you follow the recipe exactly you get reasonably good
results and nothing is way out of whack. But if you taste the food while you're cooking sometimes
you'll find that the apples are a bit **** and you'd be better off adding a bit more sugar, or the
chicken needs more garlic, or the salad dressing needs more fish sauce, or that cumin would really
help the pumpkin soup. Then the recipe is a framework and a guideline within which you balance and
adjust things to get the best result. If you know what you're doing you get better results if you
taste and adjust and improvise. If you don't know what you're doing the dinner can end up a disaster
and you'd have been better off following the recipe. Having an exact cut-off is like following the
recipe exactly: it tends to protect your research findings from ending up as indigestible garbage.
If you know what you're doing then the p-level is just another parameter you consider when you're
trying to produce the best research.
 
"2LAP" <[email protected]> wrote in message news:[email protected]...
> Andy Coggan wrote:
> > Actually, the exact opposite is true: most studies in the world of sports and exercise science
> > are underpowered to detect differences (in performance) that are relevant to athletes. To
> > state it another way: if differences in performance of the magnitude that separate medaling
> > from not medaling at, say, the Olympics were what you used to power your study, you'd likely
> > need far more subjects than are typically used (if you could afford to, that is). Andy Coggan
>
>
> I'm not so sure; most studies have small numbers due to the magnitude of the changes that occur
> during studies

You need to read Will Hopkins' treatise on the subject. As he points out, the vast majority of
studies are underpowered to detect changes in performance of the magnitude that is important in
high-level competition (where differences of 0.5% or less are often critical), although they may be
adequately powered to detect differences in the primary outcome variable, which is rarely
performance.

> and these studies often have little relevance to athletes or athletes that are able to 'medal at
> olympics'.

If by that you mean that performance is rarely the primary outcome variable, then I'd probably agree
with you. If, OTOH, you mean that studies need to be done on elite or near-elite athletes to have
external validity, I'd disagree, and rather vehemently at that. There is nothing *qualitatively*
unique about the individuals who make it to the top of the sport, i.e., they are still human beings,
and what is learned from studying the physiology of "lesser specimens" is just as valid.

> I would argue that 'the sports world' gets very little 'value for time/money' from research that
> is aimed at sports performance (i.e. lots of papers/money/time with very little impact on
> performance).

As the saying goes, you get what you pay for, i.e., there is little or no funding for the types of
research that you seem to have in mind. If there were, there'd be a lot more bright exercise
scientists studying sports performance, rather than chasing NIH dollars by studying the
health-related benefits of exercise.

Andy Coggan
 
Originally posted by Robert Chung

Do you cook? Some people cook from the recipes in a cookbook. They follow the recipes exactly and
never taste the food while they're cooking. If you follow the recipe exactly you get reasonably good
results and nothing is way out of whack. But if you taste the food while you're cooking sometimes
you'll find that the apples are a bit **** and you'd be better off adding a bit more sugar, or the
chicken needs more garlic, or the salad dressing needs more fish sauce, or that cumin would really
help the pumpkin soup. Then the recipe is a framework and a guideline within which you balance and
adjust things to get the best result. If you know what you're doing you get better results if you
taste and adjust and improvise. If you don't know what you're doing the dinner can end up a disaster
and you'd have been better off following the recipe. Having an exact cut-off is like following the
recipe exactly: it tends to protect your research findings from ending up as indigestible garbage.
If you know what you're doing then the p-level is just another parameter you consider when you're
trying to produce the best research.
Even in cooking and in research there are rules to follow (or manipulate as you see fit). It's not too helpful to steer too far from the recipe, for food or research, no matter how well intentioned you are!

Oh, you should post your recipe for pumpkin soup in time for Halloween! ;)
 
Originally posted by Andy Coggan
"2LAP" <[email protected]> wrote in message
> I would argue that 'the sports world' gets very little 'value for time/money' from research that
> is aimed at sports performance (i.e. lots of papers/money/time with very little impact on
> performance).

As the saying goes, you get what you pay for, i.e., there is little or no funding for the types of
research that you seem to have in mind. If there were, there'd be a lot more bright exercise
scientists studying sports performance, rather than chasing NIH dollars by studying the
health-related benefits of exercise.

Andy Coggan
Of course there is little funding for sports research, since it has little impact on public health (CHD costs the UK about £2bn a year, and don't get me started on the cost of falls in the elderly). In the UK there is very little funding for research from national governing bodies (i.e. the end users of sports research, coaches and athletes having insufficient money to fund research directly), despite lots of funding from the lottery. I think this reflects the NGBs' perception of the benefits of research, particularly in the short term. Some of the techniques used by 'sport scientists' working in an applied setting employ very little sport science, making me question the impact/importance of sport science research on sports performance.

In addition to the financial side of research, this lack of 'importance' placed on sport science was one of the reasons that I decided to study the impacts of exercise on risk factors in atherosclerosis. I suppose I am another young (perhaps not bright) scientist who has been steered away from sport to study the health benefits of exercise. Had I my time again, I would be working in biochem; having been 'turned off' mainly by the egos of Docs and Profs at my institute (though, from experience, not unique to my institute), I will be having a change of direction (i.e. not sport and exercise science) so that my qualifications and research interests reflect my developing interests (there are still a few big and interesting questions out there waiting to be answered... the body is a wonderful thing!).

As James D. Watson wrote: 'one could not be a successful scientist without realizing that, in contrast to the popular conception supported by newspapers and mothers of scientists, a goodly number of scientists are not only narrow-minded and dull, but also just stupid'.

Sorry for the rant!
 
2LAP wrote:
>
> Even in cooking and in research there are rules to follow (or manipulate as you see fit). It's not
> too helpful to steer too far from the recipe, for food or research, no matter how well intentioned
> you are!

This is true. Good intentions are rarely a reliable substitute for good judgement. And I would never
manipulate a p-level, just as I would never manipulate a mean. However, sometimes you can do really
good research with findings others would reject as weak. Several years ago a friend was working on
her dissertation and was despondent to find that her key result wasn't significant at the 5% level.
She was about to chuck the entire thing. I pointed out that significance is a function of sample
size, her sample was small, she had a p-value of around .12 (if I recall correctly), the sign and
magnitude of the effect were what she'd predicted, and her predictions were based on an original
theory. She kept that chapter in and reported it as non-conclusive but a tantalizing direction for
future research. After finishing her degree she wrote a grant based on that chapter to go out and
collect extra data. It was funded and she was able to show that additional data supported her
hypothesis. It was subsequently corroborated by other researchers with other data. This was good and
valid research that added to the body of knowledge, but it didn't originally meet a 95% level.
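
Chung's point that significance is a function of sample size can be sketched numerically with a
normal approximation to the two-sample test. The standardized effect size d = 0.55 is chosen here
purely so that a small sample lands near the p ≈ .12 of the anecdote; it is not a figure from the
actual dissertation, and `two_sided_p` is a hypothetical helper:

```python
from math import sqrt
from statistics import NormalDist

def two_sided_p(effect_d, n_per_group):
    """Approximate two-sided p-value for a standardized mean
    difference effect_d observed with n_per_group subjects per
    group (normal approximation to the two-sample t-test)."""
    z = abs(effect_d) * sqrt(n_per_group / 2)
    return 2 * (1 - NormalDist().cdf(z))

# The identical observed effect crosses the 5% threshold purely
# by adding subjects: with 16 per group p is roughly 0.12, with
# 40 per group it is well under 0.05.
for n in (16, 26, 40):
    print(n, round(two_sided_p(0.55, n), 3))
```

Nothing about the effect changed between the three lines, only the data collected, which is exactly
why the friend's follow-up study with extra data could turn a "non-significant" chapter into a
supported hypothesis.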

This actually goes to the heart of what Jim Martin was saying way up near the top of this thread
when he said that one of the criticisms was that the original findings weren't hypothesis-driven. I
haven't seen the study in question so I can't talk directly about it. I can only say that it's quite
a different thing to toss all possible contrasts into the hopper and report on the one(s) that are
statistically significantly different than it is to develop a hypothesis that predicts a difference
in one particular variable, then devise an experiment to test that hypothesis, then to see if your
hypothesis is supported by the evidence. I don't know where this particular study falls, so it's a
perfectly legitimate question to ask.

> Oh, you should post your recipe for pumpkin soup in time for Halloween! ;)

Hmmm. I don't think I have a recipe for it. I kind of taste and adjust.
 
2LAP wrote:
>> Oh, you should post your recipe for pumpkin soup in time for Halloween! ;)

Robert Chung (rbr warlock == statistician) wrote:
> Hmmm. I don't think I have a recipe for it. I kind of taste and adjust.

Double, double, toil and trouble. Presumably your impromptu recipe includes poison'd entrails,
toads, fillet of a fenny snake, frogs toes etc.
 
Donald Munro wrote:
> 2LAP wrote:
>>> Oh, you should post your recipe for pumpkin soup in time for Halloween! ;)
>
> Robert Chung (rbr warlock == statistician) wrote:
>> Hmmm. I don't think I have a recipe for it. I kind of taste and adjust.
>
> Double, double, toil and trouble. Presumably your impromptu recipe includes poison'd entrails,
> toads, fillet of a fenny snake, frogs toes etc.

Frog, yes. Rather not a lot of pumpkin, really. Actually, none. Nasty stuff. Bland, hackneyed
recipes. I use peaches. Just tell 'em it's pumpkin. They're usually in that festive sort of mood and
after a few cocktails they'll believe anything. Setting the stage, you know. You show them pumpkins,
you tell them pumpkins, you serve peaches. You could even tell them "peaches" if you say it fast.
They both start with "p." Helps to mumble. Frogs and peaches. Hard to top that. Few things better
than a ripe peach and a damn fine frog. Here in France pumpkins are called "citrouille." And of
course, the people are called "Frogs." Hah. Grenouilles and citrouilles. But they're really peaches.
 
Originally posted by Robert Chung
Donald Munro wrote:
> 2LAP wrote:
>>> Oh, you should post your recipe for pumpkin soup in time for Halloween! ;)
>
> Robert Chung (rbr warlock == statistician) wrote:
>> Hmmm. I don't think I have a recipe for it. I kind of taste and adjust.
>
> Double, double, toil and trouble. Presumably your impromptu recipe includes poison'd entrails,
> toads, fillet of a fenny snake, frogs toes etc.

Frog, yes. Rather not a lot of pumpkin, really. Actually, none. Nasty stuff. Bland, hackneyed
recipes. I use peaches. Just tell 'em it's pumpkin. They're usually in that festive sort of mood and
after a few cocktails they'll believe anything. Setting the stage, you know. You show them pumpkins,
you tell them pumpkins, you serve peaches. You could even tell them "peaches" if you say it fast.
They both start with "p." Helps to mumble. Frogs and peaches. Hard to top that. Few things better
than a ripe peach and a damn fine frog. Here in France pumpkins are called "citrouille." And of
course, the people are called "Frogs." Hah. Grenouilles and citrouilles. But they're really peaches.
Sounds like you're on the Atkins diet... don't get me started on Atkins!!! ;)
 
Kurgan Gringioni wrote:
> "2LAP" <[email protected]> wrote in message news:[email protected]...
>>
>> Sounds like you're on the Atkins diet... don't get me started on Atkins!!! ;)
>
> Fattie -
>
> Why not? It's overweight people that need it.

Fair enough. How many calories in a frog?
 
In article <[email protected]>, Donald Munro <[email protected]> wrote:

> 2LAP wrote:
> >> Oh, you should post your recipe for pumpkin soup in time for Halloween! ;)
>
> Robert Chung (rbr warlock == statistician) wrote:
> > Hmmm. I don't think I have a recipe for it. I kind of taste and adjust.
>
> Double, double, toil and trouble. Presumably your impromptu recipe includes poison'd entrails,
> toads, fillet of a fenny snake, frogs toes etc.

On Rocky and Bullwinkle, there was a segment called Fractured Fairy Tales. Their version went: "A
pinch of this, a pinch of that, a dewey soap and a french-fried bat!" Of course, that'd be a
"Freedom-fried" bat now...

--
tanx, Howard

"We've reached a higher spiritual plane, that is so high I can't explain We tell jokes to make you
laugh, we play sports so we don't get fat..." The Dictators

remove YOUR SHOES to reply, ok?
 
