
"velocity and soundpressure in the nearfield are out of phase". how does this work?

dasdoing

Major Contributor
Joined
May 20, 2020
Messages
4,301
Likes
2,768
Location
Salvador-Bahia-Brasil
from a YouTube video known here
[screenshot from the video]



can anybody explain this further? what happens in the nearfield?
 

pozz

Glory to Ukraine
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
can anybody explain this further? what happens in the nearfield?
Normally, as a soundwave goes from source (speaker) to boundary (wall), velocity decreases and pressure increases. At the wall, pressure is at a maximum while velocity is zero. As it bounces back from the wall as a reflection, pressure decreases and velocity increases. The same thing applies to a speaker's nearfield, where pressure is initially at a maximum and then decreases as the wave propagates.
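
A rough numerical sketch of the boundary picture above (a simplified illustration, assuming an ideal rigid wall at x = 0, ideal plane waves, and an arbitrary 1 kHz tone): superposing the incident and reflected waves gives maximum pressure and zero velocity at the wall, with pressure and velocity in quadrature everywhere in between.

```python
import numpy as np

# Incident plane wave travelling in +x towards a rigid wall at x = 0, plus its
# reflection travelling back in -x (reflection coefficient +1 for a rigid wall).
rho0, c = 1.21, 343.0              # air density [kg/m^3], speed of sound [m/s]
f = 1000.0                         # arbitrary test tone [Hz]
k = 2 * np.pi * f / c              # wavenumber [rad/m]
P = 1.0                            # incident pressure amplitude [Pa]

x = np.linspace(-1.0, 0.0, 5)      # positions in front of the wall [m]
p_inc = P * np.exp(-1j * k * x)    # incident pressure phasor, e^{j(wt - kx)} convention
p_ref = P * np.exp(+1j * k * x)    # reflected pressure phasor
u_inc = p_inc / (rho0 * c)         # particle velocity of the +x travelling wave
u_ref = -p_ref / (rho0 * c)        # the -x travelling wave carries velocity of opposite sign

p = p_inc + p_ref                  # total pressure = 2P cos(kx): maximum at the wall
u = u_inc + u_ref                  # total velocity ~ sin(kx):   zero at the wall

for xi, pi, ui in zip(x, p, u):
    if abs(ui) < 1e-12:
        print(f"x = {xi:+.2f} m  |p| = {abs(pi):.2f} Pa  velocity node (u = 0) at the wall")
    else:
        dphi = np.degrees(np.angle(pi * np.conj(ui)))
        print(f"x = {xi:+.2f} m  |p| = {abs(pi):.2f} Pa  |u| = {abs(ui)*1e3:.2f} mm/s  "
              f"phase(p) - phase(u) = {dphi:+.0f} deg")
```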

The acoustical nearfield of a speaker acts as many chaotic sound sources which haven't yet combined (Huygens' principle, illustrated below) into a coherent wavefront with a clear direction of propagation, i.e. a plane wave. Before summation, the phase relationship between sound pressure and sound velocity is essentially random. This is where acoustical holography comes in, and why so many measurement points are necessary for the NFS to capture the exact propagation of nearfield sources and then compute their farfield values/summation.
[attached diagram illustrating Huygens' principle]



I'm not an engineer and don't have a physics background, so anyone looking in should please correct me.
 
OP
dasdoing

dasdoing

Major Contributor
Joined
May 20, 2020
Messages
4,301
Likes
2,768
Location
Salvador-Bahia-Brasil
Normally, as a soundwave goes from source (speaker) to boundary (wall), velocity decreases and pressure increases. At the wall, pressure is at a maximum while velocity is zero. As it bounces back from the wall as a reflection, pressure decreases and velocity increases. The same thing applies to a speaker's nearfield, where pressure is initially at a maximum and then decreases as the wave propagates.

The acoustical nearfield of a speaker acts as many chaotic sound sources which haven't yet combined (Huygens' principle, illustrated below) into a coherent wavefront with a clear direction of propagation, i.e. a plane wave. Before summation, the phase relationship between sound pressure and sound velocity is essentially random. This is where acoustical holography comes in, and why so many measurement points are necessary for the NFS to capture the exact propagation of nearfield sources and then compute their farfield values/summation.
View attachment 117459


I'm not an engineer and don't have a physics background, so anyone looking in should please correct me.

hard for me to understand this.
but what I understood is that the waves are still forming in the near-field?
why is this not a problem with headphones?
 

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,713
Likes
6,001
Location
US East
hard for me to understand this.
but what I understood is that the waves are still forming in the near-field?
why is this not a problem with headphones?
It is a "problem" with headphones.

Have you ever wondered why you need a good seal for good bass with headphones, in particular IEMs? We don't need to hermetically seal our rooms to hear bass when listening to speakers.

The bass we hear with small headphones comes from nearfield hydrodynamic waves (bulk motion of the fluid, which doesn't travel very far), not acoustic waves (elastic waves).

Try slowly lifting off your headphones: the low bass disappears first, then the mid bass goes, and so on. These are near-field effects. For normal acoustic waves (as in listening to speakers), distance attenuation is frequency independent (for room-size distances).
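
As a back-of-the-envelope sketch of the scaling involved (assuming a simple point-source/monopole model, which a headphone coupler certainly is not; the distances and frequencies below are made-up examples): for a monopole, the non-propagating "hydrodynamic" part of the particle velocity exceeds the propagating part by roughly a factor of 1/(kr), which is enormous at headphone distances and bass frequencies and shrinks as frequency or distance goes up.

```python
import numpy as np

# For a point (monopole) source the radial particle velocity is
#   u = (p / (rho*c)) * (1 + 1/(j*k*r)),
# i.e. a propagating part plus a non-propagating near-field part whose relative
# magnitude is 1/(k*r). Print that ratio for a few example distances/frequencies.
c = 343.0                                    # speed of sound [m/s]
freqs = [30.0, 100.0, 1000.0]                # example frequencies [Hz]
dists = {"headphone (~2 cm)": 0.02,          # hypothetical driver-to-eardrum distance [m]
         "speaker (~2 m)": 2.0}              # typical listening distance [m]

for label, r in dists.items():
    for f in freqs:
        kr = 2 * np.pi * f / c * r
        print(f"{label:18s}  f = {f:6.0f} Hz  kr = {kr:7.3f}  "
              f"near-field/far-field velocity ratio = {1/kr:7.1f}")
```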
 

pozz

Glory to Ukraine
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
So all of the above is distance and frequency dependent.
why is this not a problem with headphones?
Headphones couple directly to your ear system. There is so little distance and volume between the driver and your eardrum that, from a physical perspective, the space is less like air (acoustics) than like a fluid (hydrodynamics), and sound moves a lot more efficiently in fluids than in air. However, this is, again, frequency dependent. At high frequencies you start seeing acoustical resonances related to the shape of the outer ear and the volume of the ear canal, and for bass there needs to be a clean seal. (@NTK beat me to it.)

Maybe some definitions will help. A plane wave is a physical idealization: a wave that travels in a single, specific direction with constant amplitude, and in which velocity and pressure are in phase.
[attached diagrams]
Real waves will of course show an attenuation of pressure with distance and have varying directivity and phase relationships. You can see this in speaker measurements, where drivers start to beam as the wavelength of the frequencies they reproduce approaches and then becomes smaller than the diameter of their radiating surface (not just the dome/cone; it also includes part of the surround).
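
For reference, the usual textbook impedance relations behind this (assuming the common e^{jωt} phasor convention): for a plane wave the specific acoustic impedance p/u is purely real, so pressure and velocity are in phase, while for a spherical wave it is complex and the phase difference grows as kr shrinks.

```latex
\frac{p}{u} = \rho_0 c
\quad \text{(plane wave: purely real, so } p \text{ and } u \text{ in phase)}

\frac{p}{u} = \rho_0 c \, \frac{jkr}{1 + jkr},
\qquad \angle\frac{p}{u} = \arctan\frac{1}{kr}
\quad \text{(spherical wave)}
```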

At 20 kHz the wavelength is 1.7 cm, so you need to be at least that far away from the radiating surface to see beaming (i.e., planar characteristics). Without using re-radiating waveguides, phase plugs or crossovers to smaller drivers, the higher frequencies would shoot out in straight lines. Yet it is those re-radiations that make the nearfield so chaotic.
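
To make the beaming point concrete, here is a small sketch assuming the standard textbook model of a rigid circular piston in an infinite baffle, whose far-field directivity is 2·J1(ka·sinθ)/(ka·sinθ); the 25 mm radiating diameter and the 45-degree off-axis angle are arbitrary example values.

```python
import numpy as np
from scipy.special import j1

# Far-field directivity of a rigid circular piston in an infinite baffle:
#   D(theta) = 2*J1(k*a*sin(theta)) / (k*a*sin(theta))
# Beaming sets in once k*a exceeds ~1, i.e. once the piston circumference
# becomes comparable to the wavelength.
c = 343.0
a = 0.0125                      # example radius: 25 mm radiating diameter [m]
theta = np.radians(45.0)        # off-axis angle to evaluate

for f in [1e3, 5e3, 10e3, 20e3]:
    ka = 2 * np.pi * f / c * a
    arg = ka * np.sin(theta)
    D = 1.0 if arg == 0 else 2 * j1(arg) / arg
    print(f"f = {f/1e3:5.1f} kHz  ka = {ka:5.2f}  "
          f"level at 45 deg off-axis = {20*np.log10(abs(D)):6.1f} dB")
```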
 

Tom Danley

Active Member
Technical Expert
Audio Company
Joined
May 6, 2021
Messages
125
Likes
581
from a YouTube video known here
View attachment 117455


can anybody explain this further? what happens in the nearfield?

I am not sure he means what it says. An established sound wave traveling through the air HAS the property that the velocity and pressure ARE 90 degrees out of phase. At the surface of or near a piston, the pressure and velocity have not formed a wave yet; to the degree there is a resistive load on the radiator, the impedance will tend towards resistive rather than reactive, and the acoustic phase will be "in phase" with the input.
 

JustAnandaDourEyedDude

Addicted to Fun and Learning
Joined
Apr 29, 2020
Messages
518
Likes
820
Location
USA
I am not sure he means what it says. An established sound wave traveling through the air HAS the property that the velocity and pressure ARE 90 degrees out of phase.
The acoustic pressure perturbation and the acoustic velocity perturbation of an established traveling plane wave (and wave-train) are exactly in phase (as mentioned by pozz in post #5). The situation you mention applies to established standing waves, wherein the velocity is 90 degrees out of phase with the pressure. Due to sound attenuation during their journey, room/wall reflections are not as strong as the original, and so one might expect the pressure and velocity to be largely in phase in the midfield. In the nearfield vicinity of the speaker, where the Klippel measurements are made, the waves emanating from the driver and their reflections off the surrounding surfaces have not yet interfered/interacted and coalesced and straightened out into coherent wavefronts, and during this process the pressure (a scalar) and velocity (a directed quantity) will in general not be in phase.

Furthermore, as pointed out in earlier posts, in the nearfield there are significant hydrodynamic motions dominating the velocity, which do not travel at the acoustic speed but rather are related to the driver motion in order to satisfy the "no injection" or "no penetration" boundary condition at the surface of a non-porous driver, and thus are bulk fluid motions in step with the piston velocity. With speakers these hydrodynamic motions die out quickly with distance from the speaker, but as pointed out in previous posts they can remain significant in small enclosed volumes, as with headphones and earphones, in getting low-frequency non-acoustic pressure variations to the inner ear (I believe the latter would not distinguish the two transfer mechanisms of pressure variations). I expect that Christian of Klippel had the two reasons in this and the preceding paragraph in mind when his slide states that "velocity and sound pressure are out of phase" (in the nearfield).

The 90 degree phase difference found in standing waves is a consequence of the linear superposition of two traveling wave-trains of equal frequency and amplitude but opposite vector directions interfering with each other. At each space-time location, the pressures sum and so do the velocities. However, at locations where the pressures (being the scalar multiplier of the unit isotropic part of the stress tensor) reinforce each other, the velocities (being directed vectors of opposite direction) oppose each other, and vice versa. Thus the pressure nodes become the velocity anti-nodes, and vice versa.

Also, please note that even with the (very good) approximation of inviscid flow for the acoustic perturbations, the usual trade between energy in the pressure and the velocity (internal vs kinetic energy) characterized by the steady-flow Bernoulli equation (first integral of the energy equation) is not quite accurate in the unsteady-flow case that applies here. An additional unsteady term (the time derivative of the velocity potential) appears in the unsteady-flow form, so care must be taken in applying it to the reflection of an acoustic wave at a stationary rigid wall, though the same trends usually manifest. An additional factor arises at the moving driver/piston boundary, where work is exchanged between driver and fluid, further modifying the unsteady Bernoulli equation. I imagine that the majority of this work transfer usually goes to the bulk hydrodynamic motions and other motions involving friction, which would account for the usually low conversion efficiency of (driver) mechanical energy to acoustic energy.
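
For reference, one common form of the unsteady Bernoulli relation for inviscid, irrotational, barotropic flow (with u = ∇φ); the extra ∂φ/∂t term is what distinguishes it from the familiar steady-flow form:

```latex
\frac{\partial \varphi}{\partial t}
+ \frac{1}{2}\,\lvert \nabla \varphi \rvert^{2}
+ \int \frac{dp}{\rho}
= F(t)
```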
 

Tom Danley

Active Member
Technical Expert
Audio Company
Joined
May 6, 2021
Messages
125
Likes
581
Hi
I said that unclearly: in an expanding sound wave the V and P are 90 degrees out of phase, or shifted relative to each other.

This continues until the wavefront approaches a plane wave. Perhaps it's also a semantic issue, as "out of phase" without a qualifier generally means opposite polarity rather than 90 degrees or some other phase shift, but it could also mean not 100% in phase.
I believe it would be more correct to say that in a standing wave AND in interference patterns, the velocity maxima and pressure maxima in the wave appear to be stationary instead of propagating away at the speed of sound, and to the degree the losses are small, they are in near-perfect quadrature (about 90 degrees phase shift between them).


For acoustic levitation to take place, in either a standing wave or an interference pattern, it is the stationary velocity well, NOT the sound pressure, that levitates via the Bernoulli force produced by the velocity flowing around the object.

A reference:
Pressure and particle V are in phase in a plane-wave far field (a resistive condition), but not with a spherical or expanding wavefront until the wave has expanded sufficiently to be locally "a plane wave".



https://physics.stackexchange.com/q...een-particle-velocity-and-sound-pressure-in-s


"where if I recall correctly the minus sign means a wave traveling outwards and the plus means a wave traveling inwards. So:
u = f(ct∓x)/r² ± f′(ct∓x)/r
p = ρc·f′(ct∓x)/r
It's because u and p have a different r dependence that you get the change in phase for distances below a wavelength or two."


Dr Patronis put it (in a discussion of the near-field of a device) that as long as the wavefront is expanding, it is a case of mass reaction and transfer. My take was that this resistivity-dominated transition away from the source was something like reaching K=1 (the knee in the radiation resistance curve, where making the radiator larger does not increase efficiency) as well.


One can also see the effect of radiator size by considering the acoustic phase shift a small radiator like a woofer produces in order to have "flat response" down low.

A direct radiator operating well below K=1, to be "flat", requires an acceleration response in which the radiator velocity falls 6 dB per octave and so is roughly 90 degrees behind; the sound roughly follows that (20 log) relationship. It is velocity that couples to radiation resistance, so if one uses a horn to couple the radiation resistance of the "big end" of the horn to the small end, then "flat" requires a constant-velocity source, not constant acceleration, and so horn drivers typically have a much stronger motor and a much smaller radiator so that they can maintain that velocity profile up to a higher frequency.
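
As an illustration of that knee, a sketch assuming the textbook radiation impedance of a rigid circular piston in an infinite baffle, Zr = ρ·c·S·[R1(2ka) + j·X1(2ka)], with R1(x) = 1 - 2·J1(x)/x and X1(x) = 2·H1(x)/x (H1 being the Struve function); the 250 mm piston diameter is an arbitrary example. The resistive part rises as (ka)^2 up to roughly ka = 1 and then flattens, while the reactive (mass-like) part dominates below the knee.

```python
import numpy as np
from scipy.special import j1, struve

# Normalized radiation impedance of a rigid circular piston in an infinite baffle:
#   R1(x) = 1 - 2*J1(x)/x      (resistive part)
#   X1(x) = 2*H1(x)/x          (reactive part, H1 = Struve function of order 1)
# with x = 2*k*a.
c = 343.0
a = 0.125                       # example radius: 250 mm piston diameter [m]

print("  f [Hz]     ka      R1      X1")
for f in [50, 100, 200, 400, 800, 1600]:
    ka = 2 * np.pi * f / c * a
    x = 2 * ka
    R1 = 1 - 2 * j1(x) / x
    X1 = 2 * struve(1, x) / x
    print(f"{f:8.0f}  {ka:6.2f}  {R1:6.3f}  {X1:6.3f}")
```
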
Best
Tom
 

JustAnandaDourEyedDude

Addicted to Fun and Learning
Joined
Apr 29, 2020
Messages
518
Likes
820
Location
USA
I said that unclearly: in an expanding sound wave the V and P are 90 degrees out of phase, or shifted relative to each other.

This continues until the wavefront approaches a plane wave.
A reference:
Pressure and particle V are in phase in a plane-wave far field (a resistive condition), but not with a spherical or expanding wavefront until the wave has expanded sufficiently to be locally "a plane wave".

https://physics.stackexchange.com/q...een-particle-velocity-and-sound-pressure-in-s

"where if I recall correctly the minus sign means a wave traveling outwards and the plus means a wave traveling inwards. So:
u = f(ct∓x)/r² ± f′(ct∓x)/r
p = ρc·f′(ct∓x)/r
It's because u and p have a different r dependence that you get the change in phase for distances below a wavelength or two."
Thanks for clarifying what you meant, and thanks for the detailed explanations and the linked discussion on stackexchange. I mistook your previous statement to be about a traveling plane wave-train. Yes, you are quite right: in an expanding wave (like a spherical wave), the radial acoustic velocity (which is the only acoustic velocity component in a pure spherical acoustic wave) has two parts to it - a part that attenuates as 1/r like the acoustic pressure, and a second part that attenuates as 1/r^2. The first part is proportional to the acoustic pressure at the same time and location, while for the second part its time derivative is proportional to the spatial derivative of the acoustic pressure. If the acoustic pressure variation is sinusoidal in time and space, then the first part of the acoustic velocity will also be a sine wave in phase with the acoustic pressure. And the second part will be a cosinusoidal variation, i.e. a sinusoidal variation 90 degrees out of phase with the acoustic pressure, with the implied convention of the phase of each being relative to a peak or null. Please note that the effective radial distance is relative to the center of curvature of the acoustic wave, which should normally lie behind the driver diaphragm.
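
A minimal numerical check of the above, assuming a single-frequency outgoing spherical wave in phasor form (the 100 Hz tone and unit source strength are arbitrary choices): the phase lead of pressure over velocity comes out as arctan(1/(kr)), close to 90 degrees very near the source and approaching zero in the farfield.

```python
import numpy as np

# Outgoing spherical wave, e^{j(wt - kr)} convention:
#   p(r) = (A/r) * exp(-j*k*r)
#   u(r) = (p/(rho*c)) * (1 + 1/(j*k*r))
# The 1/(j*k*r) term is the 1/r^2 ("near-field") part quoted above; it lags p by
# 90 degrees, while the other term is exactly in phase with p.
rho0, c = 1.21, 343.0
f = 100.0                               # arbitrary test frequency [Hz]
k = 2 * np.pi * f / c
A = 1.0                                 # arbitrary source strength

for r in [0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0]:
    p = A / r * np.exp(-1j * k * r)
    u = p / (rho0 * c) * (1 + 1 / (1j * k * r))
    dphi = np.degrees(np.angle(p * np.conj(u)))       # phase(p) - phase(u), wrapped
    print(f"r = {r:4.2f} m  kr = {k*r:5.2f}  phase(p) - phase(u) = {dphi:5.1f} deg "
          f"(arctan(1/kr) = {np.degrees(np.arctan(1/(k*r))):5.1f} deg)")
```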

So you make a very good point that I overlooked in my previous post: that an expanding sound wave train has an acoustic velocity that in the near field is a summation of two parts, one in phase with the acoustic pressure and one 90 degrees out of phase with it. In the farfield, the acoustic velocity tends asymptotically towards being in phase with the acoustic pressure. So this is another way in which the velocity can be "not-in-phase" with the pressure.

So far, we have identified three mechanisms whereby the velocity nearfield is "not-in-phase" with the pressure nearfield: (a) what you pointed out with respect to an expanding acoustic wave, (b) the non-uniformities and wave reflections and interactions arising out of the geometric features of the driver and the surrounding parts of the speaker, and (c) the hydrodynamic velocity and pressure variations associated with the movement of the diaphragm. Please note that the details of the hydrodynamic motions are not accounted for in linear acoustic theory. They are the "background" or "mean" or "ambient" flow state about which the governing equations are expanded in a perturbation series, from which the linear (acoustic approximation) equations are picked out to analyze sound. In practice, the mean flow is usually taken to be a quiescent ambient state (i.e., steady, with zero mean velocity). In the reality of the speaker nearfield, these hydrodynamic motions, which do not propagate at the speed of sound but rather are localized and tied to the motion of the diaphragm and which attenuate as 1/r^2, will include velocity contributions that cannot be in phase with the acoustic pressure that the Klippel is trying to estimate.

I am guessing that Christian of Klippel is concerned with the velocity because it affects the microphone measurements of the subtle pressure variations (my mental analogy is the functioning of a pitot probe, but I know pretty much nothing about the detailed functioning of microphones). While Christian does not specify why the nearfield velocity and pressure are out of phase (I guess he assumes that his audience, if it cares, is knowledgeable enough to understand the reasons why), I understood his statement as noting a disadvantage of complexity in data analysis in the context of Nearfield Acoustic Holography versus traditional farfield measurements of a speaker's acoustic response (FR, directivity, etc.). Calibrating and backing out the acoustic pressures from farfield microphone measurements in an anechoic chamber should be simpler than NAH, given that the farfield acoustic velocity and pressure are practically in phase.

Perhaps it's also a semantic issue, as "out of phase" without a qualifier generally means opposite polarity rather than 90 degrees or some other phase shift, but it could also mean not 100% in phase.
I agree that the phrase "out of phase" in Christian's presentation slide is ambiguous in general, but in the context of his talk, I understood it to mean "not-in-phase", where the phase difference is variable. On the other hand, perhaps he was referring to the nearfield effect you pointed out with regard to spherical expanding waves.

I believe it would be more correct to say that in a standing wave AND in interference patterns, the velocity maxima and pressure maxima in the wave appear to be stationary instead of propagating away at the speed of sound, and to the degree the losses are small, they are in near-perfect quadrature (about 90 degrees phase shift between them).
I agree that that is a good way to look at the phenomena, as the practical sensation and measurements are of stationary oscillations, and the pressure and velocity have a 90 degree phase difference with regard to the way most engineers would define phase in this situation (i.e., relative to maxima or nulls of each quantity). However, I preferentially interpret them as a superposition of counter-propagating or otherwise interfering acoustic waves, because their fundamental nature is that of acoustic perturbations.

For the other parts of your reply, I believe they are valuable for ASR readers with a background in audio acoustics. I am neither an audio pro nor do I have any experience whatsoever in designing speakers and building them or analyzing their performance, let alone any background approaching your expertise in the matter. So at present, I am unable to digest them.

As a side note, it may be of interest for some readers to know or be reminded that the acoustic pressure is related to the time derivative of the acoustic velocity potential function, while the acoustic velocity is related to the spatial derivative of the same potential function, denoted F. The potential function F is a pointwise function F(t-x/c) purely of the coordinate t-x/c in the case of a traveling plane wave, and F(t-r/c) in the case of a spherical (or cylindrical) wave. The d'Alembert traveling wave solution of the linearized (i.e. acoustic) governing equations does not require that the function F be sinusoidal with respect to its argument. Indeed the sinusoidal nature of music notes and many other sounds arises from the sound sources - the nature of mass-spring type vibrations of elastic solids - but the ear/brain is able to parse the sounds into tone mixtures by a process analogous to a Fourier transform. Other acoustic waveforms are also admissible solutions F() of the acoustic equations if one looks at broader applications and disregards audibility. Though the Klippel NAH machine may be designed to measure pure tones, music itself generally involves complex waveforms, and even a simple Mach wave with an infinitesimal step function dp in pressure is a valid solution of the acoustic equations.

The form of the d'Alembert solution, and in particular the dependence of F purely on t-x/c or t-r/c, reflects the causality at the origination of the sound. The value of the potential F and its derivative, and thus of the acoustic pressure and velocity, depends only on the value of t-x/c or t-r/c, and not on the earlier or later shape of the waveform (though some continuity or smoothness requirements apply). This causality, where a fairly general small-amplitude signal propagates at the local speed of sound, is in my eyes more fundamental in nature than phase differences that can be discerned in the patterns of periodic waveforms. I tend to mentally label t-x/c or t-r/c as the "phase" of the signal, though that is clearly a misnomer, and it should be labelled as something along the lines of "signal lapsed time or phase at the location of sound origination". In this sloppy mental labeling, I tend to think of the pressure and velocity at any particular location and instant of time in a traveling acoustic wave, prior to signal-altering interferences, as being "in phase" (albeit their peaks may be out of phase, as in the case of a spherical acoustic wave), on account of their having arisen dependently together simultaneously at their common point of origin and their being thus intimately related to each other.
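
For readers who want the formulas being referred to, these are the standard linear-acoustics relations under one common sign convention (u = ∇φ), together with the d'Alembert-type solutions mentioned above:

```latex
\mathbf{u} = \nabla \varphi,
\qquad p = -\rho_0 \, \frac{\partial \varphi}{\partial t}

\varphi = F\!\left(t - \frac{x}{c}\right) \ \text{(plane wave)},
\qquad
\varphi = \frac{1}{r}\, F\!\left(t - \frac{r}{c}\right) \ \text{(outgoing spherical wave)}
```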

Thanks again for your carefully reasoned reply and insights.
Best,
Ananda
 

Tom Danley

Active Member
Technical Expert
Audio Company
Joined
May 6, 2021
Messages
125
Likes
581
Hi Ananda
" I am neither an audio pro nor do I have any experience whatsoever in designing speakers and building them or analyzing their performance"


I am taken aback a bit; you speak like a physicist working in the field. Maybe reconsider your statement if life deals you a surprise hand.


You mention the pitot tube and velocity affecting the microphone. I think this is the first time I have heard a reference to this in 20+ years. I designed a sonic boom simulator once for some government research, and the professor who was running it was measuring a higher pressure up close than the SPL at a short distance would predict. The output with a sine wave was about 50 cubic meters per second at 3 Hz, and the air flow was impacting the microphone with kinetic energy.

Dr. A, who was running the test, called this "pseudo sound": the effect of kinetic energy at the diaphragm added to the pressure. I would imagine that the near field scanner would also be in a region (up close) where that effect might take place.
Fwiw, it was Dick Heyser's goal to measure with both a pressure and velocity microphone.
Best
Tom
 