
It's not compression ..... or is it?

  • Thread starter Deleted member 21219

Deleted member 21219 · Guest
This song ..... "Peace Beneath the City" by Iron and Wine ..... drives me crazy. The percussion is sharp and clear up until 19 seconds in, then it's somewhat less sharp as the (synth?) comes in. Then at 36 seconds, the percussion is definitely reduced in level and clarity until 52 seconds, when the percussion almost disappears. From then on, the energy level of the recording fluctuates, and so does the level of the percussion; at times I can hear it, and at times it is audibly reduced to almost nothing by the dominance of other instruments.



One way to do this is gain riding the percussion track. I presume another is to use compression, but I don't understand how a compressor or limiter would be set to limit the percussion more than the other instruments and then recover.

This occurs in many other songs, also. For some reason, I never fail to notice it and it never fails to drive me crazy. It's most obvious with drums or similar percussion. Can anyone explain how this is done? Please use little words and simple diagrams; I'm old and analog. :D :)

Jim
 

charleski · Major Contributor · Joined Dec 15, 2019 · 1,098 messages · 2,240 likes · Manchester UK
It’s all quite straightforward. Each individual instrument (drums, guitar, synths, vocals) is recorded separately and placed in its own track. The level of each track can be adjusted independently and this can be automated so that decisions you made earlier will be repeated on later playthroughs as you work on other parts of the mix. In a modern DAW every parameter can be dynamically adjusted, and the adjustments made automatic. Furthermore, you can use one track as a control signal to adjust a parameter on another track (called sidechain processing) - for instance you can use the vocal track to control the level of another track, so the guitar is automatically turned down when the vocals come in (ducking).
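If it helps to see the idea in code, here is a minimal Python sketch of sidechain ducking, assuming two mono numpy arrays at the same sample rate; the function names and settings are illustrative, not from any particular DAW:

import numpy as np

def envelope_follower(x, sr, attack_ms=10.0, release_ms=200.0):
    # One-pole envelope follower on the absolute signal
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = att if s > level else rel
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level
    return env

def duck(track, sidechain, sr, depth_db=-12.0, threshold=0.05):
    # Turn `track` down by up to `depth_db` while `sidechain` is loud
    env = envelope_follower(sidechain, sr)
    amount = np.clip(env / threshold, 0.0, 1.0)  # 0 = silent, 1 = loud
    gain_db = amount * depth_db
    return track * (10.0 ** (gain_db / 20.0))

# e.g. guitar_ducked = duck(guitar, vocal, sr=44100)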
 

HarmonicTHD · Major Contributor · Joined Mar 18, 2022 · 3,326 messages · 4,835 likes
Yes to what charleski wrote.

Adding to it … you can control not only gain by the means charleski described, but every effect and control parameter there is in the DAW, e.g. filter center frequency, Q, delay, reverb, pan, compression, distortion, attack, and so on.
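To make that concrete, here's a toy Python sketch (my own minimal example, not any DAW's actual implementation) of automating one such parameter: a one-pole low-pass filter whose cutoff frequency is swept by an automation curve over the length of the signal:

import numpy as np

def automated_lowpass(x, sr, cutoff_start=8000.0, cutoff_end=500.0):
    # Per-sample automation curve for the cutoff frequency (Hz)
    cutoff = np.linspace(cutoff_start, cutoff_end, len(x))
    y = np.zeros_like(x)
    state = 0.0
    for i in range(len(x)):
        # Recompute the filter coefficient from the automated cutoff
        a = 1.0 - np.exp(-2.0 * np.pi * cutoff[i] / sr)
        state += a * (x[i] - state)
        y[i] = state
    return y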
 

HarmonicTHD · Major Contributor · Joined Mar 18, 2022 · 3,326 messages · 4,835 likes
Regardless of whether it's done by automation in the DAW or by a guy manually twiddling knobs at certain times during the mix, I think the overall reason is 'artistic choice' ...
Yes, artistic choice, and making sure it also sounds good on the devices with the least fidelity: radios, MP3 players, public audio systems, car stereos, small TV speakers, etc. (So no, audiophiles are generally not the main focus of the big studios, even if we like to think so.)
 

charleski · Major Contributor · Joined Dec 15, 2019 · 1,098 messages · 2,240 likes · Manchester UK
Amazing how things are done electronically and easily now that were done manually and with difficulty 50 years ago.
Automation has actually been around for a long time; it first emerged on analog mixing desks in the 70s. An old desk with a ton of moving faders is quite a spectacle when it's in operation. But modern computer-based DAWs make everything a lot easier.
 

goat76 · Major Contributor · Joined Jul 21, 2021 · 1,334 messages · 1,484 likes
Deleted member 21219 said:
This song ..... "Peace Beneath the City" by Iron and Wine ..... drives me crazy. The percussion is sharp and clear up until 19 seconds in, then it's somewhat less sharp as the (synth?) comes in. Then at 36 seconds, the percussion is definitely reduced in level and clarity until 52 seconds, when the percussion almost disappears. From then on, the energy level of the recording fluctuates, and so does the level of the percussion; at times I can hear it, and at times it is audibly reduced to almost nothing by the dominance of other instruments.



One way to do this is gain riding the percussion track. I presume another is to use compression, but I don't understand how a compressor or limiter would be set to limit the percussion more than the other instruments and then recover.

This occurs in many other songs, also. For some reason, I never fail to notice it and it never fails to drive me crazy. It's most obvious with drums or similar percussion. Can anyone explain how this is done? Please use little words and simple diagrams; I'm old and analog. :D :)

Jim

It's caused mainly by frequency masking.

When the percussion plays mostly alone, you can hear it without any interference from the other sounds, but when that synth starts playing at 19 seconds the percussion gets masked a bit. At 36 seconds in it gets further masked, mainly by the bass guitar, and beyond that the mix gets more intense and crowded and hides that percussion even more.

Believe it or not, that percussion is most likely playing at roughly the same volume throughout the whole song. :)
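If you want to convince yourself, here is a toy numpy sketch (synthetic signals, not the actual track) of how a louder sustained bass buries a percussive burst's low-frequency energy:

import numpy as np

sr = 44100
t = np.arange(sr) / sr  # one second

# "Percussion": a quiet, quickly decaying burst of noise
rng = np.random.default_rng(0)
perc = rng.standard_normal(sr) * np.exp(-t * 30.0) * 0.2

# "Bass guitar": a louder sustained 80 Hz tone with a few harmonics
bass = sum(0.5 / k * np.sin(2 * np.pi * 80 * k * t) for k in (1, 2, 3))

freqs = np.fft.rfftfreq(sr, 1 / sr)
low = freqs < 400
perc_energy = np.sum(np.abs(np.fft.rfft(perc))[low] ** 2)
mix_energy = np.sum(np.abs(np.fft.rfft(perc + bass))[low] ** 2)

# The percussion contributes only a small fraction of the mix's
# low-band energy, which is why the ear loses track of it
print(f"percussion share below 400 Hz: {perc_energy / mix_energy:.1%}")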
 

goat76 · Major Contributor · Joined Jul 21, 2021 · 1,334 messages · 1,484 likes
Deleted member 21219 said:
Wow. If these are true, then I'm starting to wonder about my hearing! :oops: Jim
It has nothing to do with your hearing; much of the sound of the percussion is masked by the other instruments.

At the beginning of the song, you will hear the full sound of that drum because it's almost alone in the mix. But when the other instruments (especially the bass guitar) start playing, they overlap most of the drum's sound, frequency-wise, and it is reduced to almost just the initial hit (the transient part of the sound).

This is one of the most problematic things with mixing music, especially music that has a lot going on with many instruments sharing the same frequency range, like rock music for example.
 

kongwee · Major Contributor · Joined Jan 22, 2022 · 1,024 messages · 276 likes
Usually it's gain automation: just adjust the volume fader manually for the individual tracks. Compression usually serves two purposes: to squish the sound until you hear distortion (an electric guitar, for example), or to limit transients so you don't need to draw the gain automation all the time.
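For the curious, a bare-bones feed-forward compressor in Python (textbook topology, just a sketch, not any particular plugin); crank the ratio up and shorten the attack and it acts like the transient limiter mentioned above:

import numpy as np

def compress(x, sr, threshold_db=-18.0, ratio=4.0,
             attack_ms=5.0, release_ms=100.0):
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    y = np.zeros_like(x)
    env = 0.0
    for i, s in enumerate(x):
        # Smooth level detector with separate attack/release
        level = abs(s)
        coeff = att if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        # Apply gain reduction only above the threshold
        over = level_db - threshold_db
        gr_db = over * (1.0 / ratio - 1.0) if over > 0.0 else 0.0
        y[i] = s * (10.0 ** (gr_db / 20.0))
    return y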
 

majingotan · Major Contributor · Forum Donor · Joined Feb 13, 2018 · 1,521 messages · 1,799 likes · Laguna, Philippines
Testing this on my tube headphone amplifier (distortion effect included) and Dan Clark Aeon Open X, using Qobuz through Roon. I concur with

goat76 said:
Believe it or not, that percussion is most likely playing at roughly the same volume throughout the whole song. :)

but I disagree with the quote below, since I hear ZERO reduction in reverb in the claps and percussion throughout the track. Subjectively, the track is mastered PERFECTLY, with the placement of instruments completely separated in the illusory space.

I don't even hear it as reduced. I seem to hear a reduction in the reverb send on those claps though... which might give that impression

ZERO compression whatsoever in my listening experience, with superb dynamic range for this track.
 

goat76 · Major Contributor · Joined Jul 21, 2021 · 1,334 messages · 1,484 likes
Deleted member 21219 said:
If I were present for the performance (live), would it have sounded the same way? Or is this just an unavoidable artifact of the recording process?

Jim
Frequency masking will occur live as well. But live you see the hits on the drum, and may even feel them thanks to the higher volume, and those two sensations will make you “hear” the drum better.

When listening to the recorded music, we only have the sound cues to go on, and all the sounds are coming from just two sound sources, so the frequency masking will probably be more noticeable because of that.
 

dasdoing · Major Contributor · Joined May 20, 2020 · 4,294 messages · 2,762 likes · Salvador-Bahia-Brasil
In order to really hear it you need a clean stereo image. That way there will be no masking going on, since the engineer will be careful not to overlap frequencies in the same pan region.
 

goat76 · Major Contributor · Joined Jul 21, 2021 · 1,334 messages · 1,484 likes
In that video, the violin doesn't have to compete that much with the other sound objects frequency-wise. Frequency masking is a much bigger problem in the low-frequency range. It's possible some frequency masking is happening from 3:12, where the violin plays softer and the other string instruments play at a more even level with the violin.



And for "Peace Beneath the City", I'm not guessing; it is frequency masking. :)

As you can see in the picture below of the track's right channel, the percussion hits never reach the track's limiting level. The percussion peaks are clearly visible throughout the start of the track (I have circled two of them), and the line marks the limiting level, which the percussion never reaches.

[Image: Iron & Wine Sample Right Channel Mono 1.png; right-channel waveform with two percussion peaks circled and the limiting level marked]



Maybe you can hear it better in the sample in the Dropbox link; I hope it's okay, because it just contains the right channel played in mono to the phantom center.
You should be able to hear the percussion better, as it is mostly in the right channel. It is still masked, but not as much, because the frequency-competing bass guitar is reduced in level when only the right channel's portion of it is playing. If the bass guitar were completely removed from the mix, you would hear that the percussion plays at around the same level throughout the full track, except where other sound objects in the mix are also competing frequency-wise.
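If anyone wants to repeat the check, here is a short Python sketch, assuming the soundfile package and a local stereo copy of the track under a hypothetical file name; it isolates the right channel and compares the percussion-dominated intro peaks with the overall peak level:

import numpy as np
import soundfile as sf

# Hypothetical local file; shape is (samples, 2) for a stereo track
audio, sr = sf.read("peace_beneath_the_city.wav")
right = audio[:, 1]

track_peak = np.max(np.abs(right))
intro = right[: 19 * sr]  # the percussion plays mostly alone here
intro_peak = np.max(np.abs(intro))

print(f"track peak: {20 * np.log10(track_peak):6.2f} dBFS")
print(f"intro peak: {20 * np.log10(intro_peak):6.2f} dBFS")
# If the percussion-dominated intro peaks sit well below the track
# peak, the percussion itself is never what hits the limiter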
 