
Amplitude, Frequency and Phase

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,289
Likes
17,320
Location
Riverview FL
What then is the purpose of phase inversion switches on some DACs and Preamps - is that related more to things down the chain, like things going on with the speakers?

To invert the signal of one or both channel signals.

Invert one channel
To correct a polarity error in one channel or to play with.​
(I have a cheap 3-CD set of piano where one channel is inverted throughout. If I play it, I need to invert one channel, otherwise it sounds terribly wrong)​
If the channels are combined, after inverting one, some (stereo source) or all (mono source) of the signal may be cancelled (null test).​
In a stereo recording, combined to mono, a signal panned dead center can (to a large degree) be eliminated - think "vocal remover" if you want to sing along and not hear the original vocalist.​
Gives a funky sense of stereo expansion or other possibly amusing effect.​
Invert both channels

To correct polarity of the signal when you think it makes a difference whether it is coming out of the speakers inverted relative to the source waveform or not. Some gear (inverting) will "invert" the signal it receives. Other gear will not (non-inverting)​
If the signal at the source is positive (microphone diaphragm compressed), the speaker driver should, at that instant, be moving toward the listener, in the process of creating a pressure (rather than a rarefaction). This leads to the question of signal inversion at the microphone, though.​
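The null test and "vocal remover" tricks described above can be sketched in a few lines of numpy. This is a toy illustration, not a real recording: the "vocal" and "side" signals are made-up sine waves, and real stereo mixes only cancel the centered content to the degree the two channels really are identical.

```python
import numpy as np

# A toy "stereo recording": a centered vocal (identical in both channels)
# plus some side content that exists only in the left channel.
t = np.linspace(0, 1, 48000, endpoint=False)
vocal = np.sin(2 * np.pi * 220 * t)        # panned dead center
side = 0.5 * np.sin(2 * np.pi * 330 * t)   # left channel only
left = vocal + side
right = vocal.copy()

# "Vocal remover" / null test: invert one channel and sum. The
# centered vocal cancels; only the uncorrelated side content survives.
null = left + (-right)

# null is (numerically) identical to the side signal alone,
# whereas a plain mono sum (left + right) still contains the vocal.
print(np.max(np.abs(null - side)))  # ~0.0
```

The same experiment can be done on a real file in Audacity with its Invert effect, as mentioned below.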
---

Audacity will permit you to play with channel inversion if you want to experiment and you don't have mono or phase (inversion) switches to fondle.

---

A discussion - https://www.stereophile.com/content/listening-tests-and-absolute-phase
 

JoachimStrobel

Addicted to Fun and Learning
Forum Donor
Joined
Jul 27, 2019
Messages
519
Likes
304
Location
Germany
Just wanted to say that a piano note is created by up to three strings struck by the hammer, tuned slightly apart to modulate the sound. And the strings can be a meter or more long. A single mic will never capture what you hear; maybe a piano miked with an Atmos layout will... A guitar is simpler, but then look at a harpsichord...
 


MechEngVic

Active Member
Joined
May 15, 2019
Messages
174
Likes
155
I think it's important to mention that phase issues are constantly encountered during the recording process and are dealt with in much the same way speakers deal with phase issues and their effects (comb filtering, cancellation). Most modern recording setups let individual tracks be adjusted for phase when comb filtering or cancellation occurs, whether from multiple mics picking up the same source or from separate instruments drifting in and out of phase.
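The comb filtering mentioned here is easy to demonstrate numerically. A minimal sketch, with a made-up 1 ms delay standing in for the path difference between two mics: summing a tone with a delayed copy of itself cancels at frequencies where the delay equals half a cycle, and reinforces where it equals a full cycle.

```python
import numpy as np

fs = 48000
delay_samples = 48            # 1 ms "mic spacing" delay (assumed for illustration)
t = np.arange(fs) / fs

def combed_gain(freq):
    """Peak level when a 1 s tone is summed with a delayed copy of itself.

    np.roll is a circular shift, which for these full-period tones is
    equivalent to a pure phase shift."""
    x = np.sin(2 * np.pi * freq * t)
    delayed = np.roll(x, delay_samples)
    return np.max(np.abs(x + delayed))

# A 1 ms delay puts a cancellation notch at 500 Hz (half a cycle of
# delay) and full reinforcement at 1000 Hz (a whole cycle).
print(combed_gain(500))   # near 0: destructive
print(combed_gain(1000))  # near 2: constructive
```

The notches repeat at every odd multiple of 500 Hz, which is what gives the "comb" its teeth.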
 

Frank Dernie

Master Contributor
Forum Donor
Joined
Mar 24, 2016
Messages
6,472
Likes
15,873
Location
Oxfordshire
I have a few questions about these topics in relation to recording musical instruments and audio playback. As a simple example, let's start with a piano playing the note middle C and then, a few moments later, a guitar playing the note middle C. Let's assume both instruments are recorded with the same microphone. It seems to me that middle C determines the frequency, so both instruments would be playing the same frequency. It also seems clear that both instruments could play at the same amplitude, if need be for a test. If the same microphone is used, does that necessarily mean they will be recorded with the same phase? Then, when playing back a recording of these events, is it possible that both the piano sound and the guitar sound could have identical amplitude, frequency and phase? And if that is true, how could one distinguish the piano note from the guitar note? Thanks in advance to all you people out there more knowledgeable than me, who are going to answer this correctly.
Phase is only one of the properties of the signal.
It is not key in most ways, IME, unlike amplitude and frequency.
Instruments sound different to each other because of the number and amplitude of their overtones and the attack and decay of their sound. Both these depend on the physical aspects of the instrument and also the way it is played. There are big differences between different classes of instruments and subtler ones between nominally the same ones. That is why a Steinway piano sounds different to a Bosendorfer and Andras Schiff sounds different to Artur Schnabel.
Probably no multi-track, mixed-down recordings remain phase coherent, so the only phase-coherent recordings one listens to are probably those made a long time ago with just two microphones.
About 45 years ago I fed a square wave generator into my stereo and listened to the output whilst adjusting phase. It continued to sound like a square wave even when the signal was no longer square on the 'scope, i.e. what I could hear was the spectral content and amplitude, not the phase.
Most speakers are not phase coherent either.
At a demo much more recently of DSP crossover and correction in loudspeakers I did hear a difference when phase was corrected but only on an old simply miked comic opera recording, on a modern rock music recording any difference was minimal.
In principle, reproducing the phase accurately is, of course, desirable, and one would expect stereo imagery to be affected, but very few speakers or recordings are phase accurate in the first place.
So whilst the signal from an electrical component in a stereo has amplitude, frequency and phase, I have not found the phase to be as relatively important.

Since the relative level of the overtones define the timbre of an instrument any change in this, such as harmonic distortion, will change the timbre. How much is enough to change the timbre enough to be audible has been the subject of debate for a long time...
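The point that overtone levels define timbre can be illustrated with two synthetic tones on the same fundamental. The harmonic recipes below are invented for illustration, not measurements of a real piano or guitar: both "instruments" share middle C as the fundamental, yet the waveforms differ audibly because the overtone amplitudes differ.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
f0 = 261.63  # middle C in Hz

def tone(harmonic_levels):
    """Sum harmonics of f0 with the given relative amplitudes."""
    return sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(harmonic_levels))

# Hypothetical overtone recipes (made up, not real instrument data):
piano_like = tone([1.0, 0.5, 0.3, 0.2, 0.1])
guitar_like = tone([1.0, 0.8, 0.1, 0.4, 0.05])

# Same fundamental frequency, clearly different waveforms: the ear
# hears this difference in overtone balance as a difference in timbre.
print(np.max(np.abs(piano_like - guitar_like)))  # clearly nonzero
```

Real instruments add the attack/decay envelope on top of this, which is at least as important for recognition.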
 
OP

AudioStudies

Addicted to Fun and Learning
Joined
May 3, 2020
Messages
718
Likes
401
very few speakers or recordings are phase accurate in the first place.
So would it be a worthy endeavor to try to find speakers or powered monitors that are phase accurate? Or conversely, since recordings are likely not phase accurate, don't worry about it? Are there any actions I should take related to phase, other than experiment with my phase inversion switches on my DACs? A company called Event sold a pair of powered monitors a while back that they advertised as being both phase and time aligned, and they weren't all that expensive. Someone made the comment that this would be true only in a test chamber. I did buy a pair of these speakers, but did not find the sound as pleasing as some of my other powered monitors.
 

Killingbeans

Major Contributor
Joined
Oct 23, 2018
Messages
4,106
Likes
7,628
Location
Bjerringbro, Denmark.
So would it be a worthy endeavor to try to find speakers or powered monitors that are phase accurate?

I'm no expert on speakers, but judging from what little I know, I wouldn't worry about phase accuracy. It doesn't hurt, but there are a myriad of other things that have far greater audible impact.
 

Killingbeans

Major Contributor
Joined
Oct 23, 2018
Messages
4,106
Likes
7,628
Location
Bjerringbro, Denmark.
I know a little about a lot, but I still have oodles to learn. Mostly I just try really hard not to talk out of my ass :D
 
OP

AudioStudies

Addicted to Fun and Learning
Joined
May 3, 2020
Messages
718
Likes
401
I know a little about a lot, but I still have oodles to learn. Mostly I just try really hard not to talk out of my ass :D
Yeah, I have made that mistake big time and many times over. Embarrassed myself trying to use my own reasoning about preamp design, something I obviously knew nothing about. But hey, learning the hard way, if I still learn -- not the end of the world.
 

Killingbeans

Major Contributor
Joined
Oct 23, 2018
Messages
4,106
Likes
7,628
Location
Bjerringbro, Denmark.
True, it's only human. As long as you don't make ass talking your way of living, it's all good.

A great litmus test I've learned to use: if an "expert" seems to get tremendous pleasure from hearing the sound of his/her own voice, then chances are that person has no problem letting his/her ass do the wording, and most (if not all) of the eye-opening revelations they offer can safely be disregarded as complete BS.
 

Bob from Florida

Major Contributor
Joined
Aug 20, 2020
Messages
1,339
Likes
1,231
Guitar and Piano - different instruments with different harmonic content. Same notes sound different.

Amplitude, frequency, and phase in a stereo system - why is accuracy important? I will make an assumption this is your question.
We cannot change the recording so we will concentrate on playback.
Amplitude - system as a whole must not clip while accurately amplifying the signal equally across the bandwidth desired. Limits apply from preamps to the loudspeakers.
Frequency - reproduce the recorded music without added harmonics. Some harmonics will always be added, so we minimize them.
Phase - keep phase shifts equal across the desired bandwidth. Phase affects the focus of the stereo image. Harmonic phase shifts should stay in phase with the fundamental frequency. Nelson Pass has an interesting paper on this on his First Watt site. If I understood Nelson correctly, the second harmonic's phase affects whether the stereo image depth is forward of the speakers or behind them.

More complicated than this but perhaps this is a start to answering your question.
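The point about harmonic phase relative to the fundamental can be made concrete with a toy example: shifting the phase of a second harmonic leaves the amplitude spectrum identical, but the waveform shape (and peak level) changes. The amplitudes and the 90-degree shift below are arbitrary illustration values.

```python
import numpy as np

# One fundamental cycle sampled at 1000 points.
t = np.linspace(0, 1, 1000, endpoint=False)
fund = np.sin(2 * np.pi * t)

# Same fundamental plus a second harmonic at two different phases.
in_phase = fund + 0.5 * np.sin(2 * np.pi * 2 * t)
shifted = fund + 0.5 * np.sin(2 * np.pi * 2 * t + np.pi / 2)

# Both signals have identical magnitude spectra, but the waveform
# shapes and peak levels differ: phase changed the shape, not the
# spectral content.
print(np.max(in_phase), np.max(shifted))
```

Whether such a shape change is audible (and whether it affects perceived image depth, as Pass suggests) is the contested part; the spectral identity of the two signals is not.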
 

dfuller

Major Contributor
Joined
Apr 26, 2020
Messages
3,449
Likes
5,342
Okay, so:
1. No. Phase is a relation of the waveform in time with regard to another waveform, and has more to do with the distances of the microphones from the source than anything else (on the recording side).
2. The things that differentiate instruments from one another are their harmonic content (specifically, the spectral centroid plays the largest role) and the ADSR envelope. That is why a guitar playing a middle C sounds significantly different from a piano playing a middle C sounds significantly different from a saxophone playing a middle C and so on and so forth.
3. On the recording side, the use for polarity inversion switches is to (roughly) correct for phase discrepancies between two or more microphones, or to build a mid-side matrix (the "side" channel is a bidirectional mic, and the signal is duplicated and one is polarity inverted, effectively "splitting" the two lobes of the mic's pickup pattern).
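The mid-side matrix in point 3 is just sums and differences, and the "polarity-inverted duplicate" construction can be sketched directly. The signals below are random noise standing in for the two microphone feeds; the key property is that the matrix is its own inverse (up to a factor of 2).

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the mid (front-facing) and side (bidirectional)
# microphone signals of an M/S pair.
mid = rng.standard_normal(1000)
side = rng.standard_normal(1000)

# M/S decode: duplicate the side channel, polarity-invert one copy,
# and sum each with the mid signal to get left and right.
left = mid + side
right = mid + (-side)   # the polarity-inverted duplicate

# The matrix is invertible: summing/differencing L and R recovers
# the mid and side signals (scaled by 2).
assert np.allclose(left + right, 2 * mid)
assert np.allclose(left - right, 2 * side)
```

This is also why an M/S recording collapses gracefully to mono: the side content cancels and only the mid mic remains.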
 

Hipper

Addicted to Fun and Learning
Joined
Jun 16, 2019
Messages
753
Likes
626
Location
Herts., England
As I added room treatment to my room not only did the dB/frequency graph change but so did the time/frequency and phase/frequency response graphs.

These are from REW before and after I added all Soffit Bass traps. They show the Frequency Response and Phase with no treatment, adding lots of bass traps, and finally adding other treatment such as to prevent side wall reflections plus some EQ in the 0-200Hz range:

FR P - No Treatment.jpg

FR P - All Soffit Treatment.jpg

FR Phase - All Treatment plus EQ.jpg

The phase is shown 'wrapped' - the lighter red and lighter green lines. Because the phase changes are large over a big range of frequencies the dotted lines show when it has moved by 180 degrees. If this wasn't done the axis on the right would be extremely long.

What the graphs show is that by absorbing sound energy so smoothing the frequency response, the phase is also smoothed. According to Floyd Toole we prefer smoother phase.
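The "wrapped" phase display described above is easy to mimic in numpy: a pure delay produces a phase that grows without bound with frequency, and an analyzer folds it into a +/-180 degree range, producing the repeating jumps. The 5 ms delay below is an arbitrary example value, not taken from these measurements.

```python
import numpy as np

# A linearly increasing phase lag (a pure delay) sampled across frequency.
freqs = np.linspace(0, 2000, 201)        # Hz
delay = 0.005                            # 5 ms, assumed for illustration
true_phase = -2 * np.pi * freqs * delay  # radians, grows without bound

# What a "wrapped" display shows: everything folded into (-pi, pi],
# giving the characteristic sawtooth of jumps at each 180-degree crossing.
wrapped = np.angle(np.exp(1j * true_phase))

# np.unwrap stitches the jumps back together, recovering the long axis
# the wrapped display avoids.
unwrapped = np.unwrap(wrapped)
print(np.max(np.abs(unwrapped - true_phase)))  # ~0
```

REW's dotted lines mark exactly those fold points, so the wrapped trace can stay on a short vertical axis.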
 

Hipper

Addicted to Fun and Learning
Joined
Jun 16, 2019
Messages
753
Likes
626
Location
Herts., England

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,861
Likes
6,412
Location
Berlin, Germany
OP's question is viable IMHO. Generally, when two instruments or voices sound the same note -- and let's consider a strict unison section of a piece for simplicity -- the phases of the fundamentals will wander around, as will the amplitudes. No tight correlation per se.
But with very good musicians and excellent acoustic conditions there is often a tendency to decrease phase differences to a level where the resulting beating (and "chorus" effect) is musically appropriate and meaningful. Good singers in a group are known to actually go in phase lock with each other, at least partly, on long ostinato notes.

So, yes, it is entirely possible, though quite unlikely, that the amplitude and phase of a piano and a guitar playing the same note are in sync. Obviously, even when they are not locked, there is no way to know which fundamental belongs to which instrument from a snapshot of amplitude and phase values. We have to look at the bigger picture for that: the specific tonal and dynamic signatures. To my knowledge, today there are successful software approaches to "identify" and isolate (or mute) instruments in a larger orchestra, with quite a bit of AI and brute-force CPU power involved.
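The beating KSTR mentions falls straight out of the math: two tones on the same nominal note but slightly detuned drift in and out of phase, so their sum pulses at the difference frequency. A sketch with made-up values (440 Hz vs 442 Hz, i.e. a 2 Hz beat):

```python
import numpy as np

fs = 8000
t = np.arange(2 * fs) / fs   # 2 seconds

# Two "voices" on the same nominal note, detuned by 2 Hz (illustrative).
a = np.sin(2 * np.pi * 440 * t)
b = np.sin(2 * np.pi * 442 * t)
mix = a + b

def rms(x):
    return np.sqrt(np.mean(x ** 2))

# The relative phase drifts through a full cycle every 0.5 s, so the
# mix beats at 2 Hz: loud where the voices are in phase (near t = 0),
# quiet where they are out of phase (near t = 0.25 s).
loud = rms(mix[: fs // 20])                              # first 50 ms
center = int(0.25 * fs)
quiet = rms(mix[center - fs // 40 : center + fs // 40])  # around 0.25 s
print(loud, quiet)
```

When good musicians "phase lock", they are in effect holding that drift near the in-phase condition, so the beat slows or disappears.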
 