On Mar 13, 12:00 pm, Ben C <[email protected]> wrote:
> On 2008-03-10, [email protected] <[email protected]> wrote:
>
>
>
> > On Mar 8, 4:10 am, Ben C <[email protected]> wrote:
>
> >> > In the hypothetical you mentioned, with a sensor 4cm
> >> > away from the ear, the sensor will hear some combination
> >> > of sound waves, which arrive from several different
> >> > directions. Consider 2000 Hz background sounds. The
> >> > speed of sound is ~344 m/s and the wavelength of a
> >> > 2000 Hz signal is 17 cm. Thus, your ear is about 1/4
> >> > wavelength separated from the sensor. If you just
> >> > generated the opposite of whatever the sensor heard,
> >> > it would be 1/4 wave offset from what you want. The
> >> > problem is that you can't predict whether it's ahead
> >> > or behind, because you don't know which direction the
> >> > sound is coming from, and it probably is several signals
> >> > from multiple directions.
>
> >> Why does it matter which direction it came from? That gives you a delay
> >> and phase difference between your two ears, but I'm assuming you have an
> >> independent sensor and anti-noise generator on each earpiece.
>
> > Because it gives you a phase delay between
> > the mike on your left ear and your left eardrum,
> > and the length of that delay depends on which
> > direction the sound is coming from.
>
> The phase just depends on the path length, not on the direction as far
> as I can see.
The difference in path length from mike to eardrum
depends on direction.
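To put numbers on it, here is a quick back-of-envelope sketch in Python, using the hypothetical figures from upthread (4 cm mike-to-eardrum separation, 2000 Hz background sound, ~344 m/s speed of sound) and assuming a single plane wave arriving at some angle to the mike-eardrum axis:

```python
import math

c = 344.0   # speed of sound, m/s (approximate)
f = 2000.0  # frequency, Hz
d = 0.04    # mike-to-eardrum separation, m (hypothetical 4 cm)

wavelength = c / f  # ~17.2 cm, so d is about 1/4 wavelength
print(f"wavelength = {wavelength * 100:.1f} cm")

# For a plane wave arriving at angle theta to the mike-eardrum axis,
# the extra path length from mike to eardrum is d*cos(theta), so the
# phase offset shrinks and then flips sign as the source moves from
# in front of the mike to behind it.
for deg in (0, 45, 90, 135, 180):
    delta = d * math.cos(math.radians(deg))   # extra path length, m
    phase = 2 * math.pi * delta / wavelength  # phase offset, radians
    print(f"{deg:3d} deg: phase offset = {math.degrees(phase):+6.1f} deg")
```

At 0 degrees the offset is nearly +84 degrees of phase; at 180 degrees it is the same magnitude with the opposite sign, which is why you can't pick one correction without knowing the direction.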
> > The noise
> > canceling has to cope with a stew of different
> > waves coming from different directions - not just
> > the cellphone-chatterer on your right, but his
> > voice bouncing off the window at your left and
> > coming back from the left. You won't hear that
> > phase difference normally, but it's there.
>
> The difference in path length is what causes the different phase there.
> The sound bouncing off the window has travelled further, maybe it gets
> inverted when it bounces as well, but the result is it's likely to be
> out of phase with the original signal when it reaches you.
>
> > Thus, the signal at your eardrum is not quite
> > the same as the signal at the mike due to
> > phase decorrelation, and this is worse at
> > high frequencies because the sound waves
> > have shorter wavelengths.
>
> I'm thinking if the earpieces are hemispherical and basically closed
> then they transmit sound by vibrating sympathetically. So long as the
> speaker in them generates the anti-vibration then they should work. The
> direction of the original source of noise shouldn't make much
> difference. But a complex signal is probably harder to make an
> anti-signal for, and a shrill voice that's bounced around the inside of
> a bus probably is quite a complex signal.
I'm not sure you are comprehending what I am saying.
The signal at the microphone is a superposition
of a variety of different sound waves. The signal
at the eardrum is a slightly different superposition
of sound waves. Because the signals come
from different directions, some of them are delayed
between mike and eardrum and some arrive early.
(That's phase decorrelation.)
That means the anti-signal at the eardrum is different
from the anti-signal at the mike. Even if you generate
a perfect anti-signal for the mike, it won't be perfect
for the ear.
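A toy simulation makes this concrete. Assume (my numbers, not anything measured) just two plane waves of equal amplitude arriving from opposite directions along the mike-eardrum axis, with an arbitrary relative phase: one is delayed going from mike to eardrum, the other arrives early. Play the exact inverse of the mike signal at the eardrum and see what's left:

```python
import math

c, f, d = 344.0, 2000.0, 0.04  # m/s, Hz, m (hypothetical values)
w = 2 * math.pi * f
tau = d / c  # mike-to-eardrum travel time along the axis

def mic(t):
    # Superposition of two equal waves at the mike (arbitrary
    # relative phase of 1 radian between them).
    return math.sin(w * t) + math.sin(w * t + 1.0)

def ear(t):
    # At the eardrum one wave is delayed by tau, the other early by tau.
    return math.sin(w * (t - tau)) + math.sin(w * (t + tau) + 1.0)

# "Perfect" anti-signal for the mike, replayed at the eardrum:
# what remains is ear(t) - mic(t).
N = 1000  # 1 microsecond steps over 1 ms (two full cycles)
residual = [ear(i / 1e6) - mic(i / 1e6) for i in range(N)]
rms_res = math.sqrt(sum(x * x for x in residual) / N)
rms_ear = math.sqrt(sum(ear(i / 1e6) ** 2 for i in range(N)) / N)
print(f"residual RMS / uncancelled ear RMS = {rms_res / rms_ear:.2f}")
```

With these particular numbers the residual at the eardrum is actually *louder* than the uncancelled sound there (ratio above 1), because at 2000 Hz the 4 cm offset is a quarter wavelength and the two components have drifted apart in phase between mike and ear.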
There is also a finite travel time from speaker to
eardrum, so if the speaker were very small and located
at exactly the same place as the mike, the delay of the
anti-signal would ostensibly match the delay of the
signal. That wouldn't really work in practice: first,
you can't satisfy the assumptions, because the speaker
and mike are of finite size; second, it couldn't correct
the signals that reach the ear slightly _before_ the
mike.
Ben