On 10 Jun 2006 16:15:51 -0700, "Jay Beattie"
<[email protected]> wrote:
>
>Tony Raven wrote:
>> Jay Beattie wrote:
>> >
>> > I don't think there are any population studies on children in the
>> > United States regarding the incidence of head injury and the effect of
>> > helmets on injuries. I would like to see one if there is, because this
>> > population is the subject of MHLs in the US rather than adults
>> > (distinguishing laws from county ordinances). -- Jay Beattie.
>> >
>>
>> There is a study for chez Sorni in San Diego (Ji et al). The authors
>> couldn't find any reduction in head injury rates as a result of the
>> helmet legislation - just as in New Zealand, Australia........
>
>I found that report and read it. I don't know what it means. Is
>chi-squared a drink at Starbucks? I just skip to the end where it says
>"we don't think the data means anything one way or the other." I
>suppose it is important to publish inconclusive studies?
>
>What about this one in Quebec. http://tinyurl.com/fmuha Any good
>scientifically-speaking? -- Jay Beattie.
Dear Jay,
Yes, it is important to publish inconclusive studies, even though at
first it would seem pointless.
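(And since chi-squared got a laugh above: it's nothing more exotic
than a way of asking whether the counts in a table differ more than
chance alone would explain. Here's a minimal sketch in Python, with
invented before/after injury counts--not the actual Ji et al
figures:)

```python
# Chi-squared test on a 2x2 table of hypothetical counts:
# head injuries vs. no injuries, before and after a helmet law.
# (The numbers are invented for illustration, not from any study.)

observed = [[30, 970],   # before law: injured, not injured
            [25, 975]]   # after law:  injured, not injured

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Compare each cell to the count expected if the law made no
# difference at all, and add up the squared discrepancies.
chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (observed[i][j] - expected) ** 2 / expected

# For a 2x2 table, chi2 must exceed about 3.84 before the usual
# 95% standard calls the difference "significant."
print(f"chi-squared = {chi2:.3f}  (significant only if > 3.84)")
```

For these made-up numbers the statistic comes out well under 3.84, so
the apparent drop in injuries is no bigger than chance could easily
produce--which is roughly the shape of an "inconclusive" finding.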
In small statistical studies (by far the most popular kind, given the
constraints of time and money), results are usually reported at the
95% confidence level. Roughly speaking, that means that even if there
were no real effect at all, a result this strong would still turn up
by chance about 1 time in 20.
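You can watch that 1-in-20 failure rate happen with a few lines of
simulation. The setup here is made up: each "study" flips a fair coin
100 times--the coin standing in for an effect that truly does not
exist--and declares a finding whenever the head count strays outside
the usual 95% band:

```python
import random

random.seed(1)

def study_finds_effect():
    """One small 'study': 100 fair coin flips, where the coin
    stands in for an effect that truly does not exist."""
    heads = sum(random.random() < 0.5 for _ in range(100))
    # Under the normal approximation, about 95% of honest studies
    # land within 50 +/- 1.96 * 5 heads; outside that band, the
    # study 'finds' an effect that isn't there.
    return abs(heads - 50) > 1.96 * 5

trials = 5000
false_alarms = sum(study_finds_effect() for _ in range(trials))
print(f"{false_alarms} of {trials} no-effect studies "
      f"looked significant ({false_alarms / trials:.1%})")
```

Run it and somewhere around one study in twenty comes up
"significant" even though nothing real is going on.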
The trouble is that publication bias tends toward the dramatic, so the
20th study that in good faith (or otherwise) finds a different outcome
is the one that gets published, turning things on their heads.
So now we have a single study that shows that red bicycles go faster.
The other 19 researchers try to duplicate the result, but they get the
more typical result that bicycle color doesn't make any difference. If
they have infinite patience, funding, and communication, they might
get together and publish a paper showing that the original red-is-fast
study was mistaken. But that will take time, trouble, and money, so
the mistaken study is likely to enjoy considerable credibility for
quite a while.
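The red-bicycle story is easy to simulate, too. In the sketch below,
each "study" compares two identical bicycles, so the true speed
difference is zero and the standardized test statistic is pure noise;
run the studies in batches of 20 and count how often at least one of
them looks "significant." Theory says a fluke should turn up in about
1 - 0.95^20, or 64%, of the batches:

```python
import random

random.seed(2)

def red_bike_study():
    """Null study: red and blue bikes are identical, so the
    standardized speed difference is pure N(0,1) noise."""
    z = random.gauss(0.0, 1.0)
    return abs(z) > 1.96   # the usual 95% cutoff

batches = 2000
fluke_batches = sum(
    any(red_bike_study() for _ in range(20)) for _ in range(batches)
)
print(f"{fluke_batches / batches:.1%} of 20-study batches "
      f"contained at least one red-is-fast 'finding'")
```

So if only the dramatic result gets published, roughly two batches in
three hand the world a red-is-fast paper.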
An "inconclusive" study is always in fact a conclusion--the effect was
not observed. It's just not very attractive to go out and spend a lot
of time showing that something doesn't really seem to happen.
Jobst's book illustrates this with his tying-and-soldering experiment.
He took a good deal of time and trouble to set things up and showed
that there was no apparent change in wheel strength when he hung a
weight from the rim of a wheel held flat in a precise measuring rig,
whether the spokes were lashed together at the crossings or not. The
belief had been around for decades, but no one else had gone to the
trouble of demolishing the claim by showing that there just wasn't an
observable effect in a careful test.
Of course, if someone figures out a different test tomorrow and shows
that tying and soldering does strengthen a wheel in some way that
Jobst's test missed, then that test will have to stand up to scrutiny.
This is basically what James Randi has been doing for years with
dowsers. All the experimental testing agreed to by both sides has come
up with the same result--dowsers do no better than chance in
controlled testing. So Randi is just publishing "inconclusive"
results. Dowsers tend to rely on "publishing" the 20th test and
tossing out the 19 that show no ability.
You can get some entertaining background in this and other statistical
problems (along with a walloping dose of UK politics of a cranky kind)
by browsing around here:
http://www.numberwatch.co.uk/number%20watch.htm
Again, a good deal of it will be puzzling outside the UK and Brignell
is not the most endearing fellow, but skipping the politics and snide
remarks will leave you free to follow his examples of statistical
mistakes.
Cheers,
Carl Fogel