
Free DIY Audio System Improvement Method (Custom EQ)

BenB

Active Member
Joined
Apr 18, 2020
Messages
284
Likes
446
Location
Virginia
No matter how good your audio reproduction system is, there's no denying that some recordings just have a poor spectral balance. It can really take away from the enjoyment of otherwise good music. Since the problems aren't consistent from recording to recording, they have to be corrected individually. I will briefly explain my process for identifying my desired "corrective equalization" and saving the results for future listening. (You don't want to have to re-identify the proper EQ every time you listen.)

I primarily use Audacity for this, because it gives me an efficient interface. I start with "Plot Spectrum" in the Analyze menu. Often I can identify issues with the spectrum immediately. For example, in this classical music piece, I found weak bass below 200 Hz, and an elevated plateau between 1 kHz and 5 kHz.

Spectrum_Orig_Annotated.png
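For those who'd rather script this step than eyeball a plot, the same check can be sketched with Welch's method. This is a minimal illustration, not the post's workflow: the band edges below are assumptions chosen to match the problems described (weak bass below 200 Hz, a plateau from 1 to 5 kHz), and the synthetic white noise stands in for decoded track samples.

```python
# Sketch: summarize a track's long-term spectrum by band, so an unusually
# low or high band stands out as an EQ candidate. Band edges are illustrative.
import numpy as np
from scipy.signal import welch

def band_levels_db(x, fs, bands=((20, 200), (200, 1000), (1000, 5000), (5000, 16000))):
    """Return the mean PSD level (dB) inside each frequency band."""
    f, psd = welch(x, fs=fs, nperseg=8192)
    levels = {}
    for lo, hi in bands:
        mask = (f >= lo) & (f < hi)
        levels[(lo, hi)] = 10 * np.log10(psd[mask].mean() + 1e-20)
    return levels

# Synthetic stand-in for audio samples (a real track would be loaded from file)
rng = np.random.default_rng(0)
fs = 44100
x = rng.standard_normal(fs * 5)
levels = band_levels_db(x, fs)
# A band sitting well below its neighbours suggests a candidate for a boost.
```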

In the effects menu there is an equalization function. I use this to test out various levels of equalization. After identifying a problem with the EQ I applied, I always undo the EQ, adjust the filter, and apply EQ again. That makes it easy to check what the original sounded like for a reference. In this case, my EQ looked like this:

Symphony1_EQ.png
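The post uses Audacity's equalization effect, but one "adjust, apply, listen, undo" iteration can also be expressed in code. Below is a sketch of a single peaking filter using the well-known Robert Bristow-Johnson audio-EQ-cookbook formulas; the centre frequency, Q, and gain are illustrative values for taming a 1-5 kHz plateau, not the settings actually used.

```python
# Sketch: an RBJ-cookbook peaking biquad, applied with scipy.signal.lfilter.
# f0/Q/gain below are illustrative, not the post's actual EQ curve.
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, f0, gain_db, q=1.0):
    """Peaking-EQ biquad coefficients (b, a) from the RBJ audio-EQ cookbook."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return np.array(b) / a[0], np.array(a) / a[0]

fs = 44100
# e.g. pull the elevated midrange down a few dB, centred near 2.2 kHz
b, a = peaking_biquad(fs, f0=2200, gain_db=-3.0, q=0.7)
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 2200 * t)        # test tone at the centre frequency
attenuated = lfilter(b, a, tone)           # ~3 dB quieter at 2.2 kHz
```

Chaining a few of these biquads approximates the kind of smooth curve drawn in Audacity's EQ dialog.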

As I test out prototype equalization filters, I use the waveform view to listen to different parts of the song/piece. I find that I focus almost exclusively on the loud parts, as errant spectral balance isn't nearly so offensive in the quiet parts.

Audacity_Waveform_view.png
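Since the loud passages drive the EQ decisions, it can help to locate them programmatically rather than scanning the waveform view by eye. A minimal sketch, using a windowed RMS scan (window length and count are arbitrary choices, not from the post):

```python
# Sketch: rank fixed-length sections of a track by RMS level, so listening
# can focus on the loudest passages first.
import numpy as np

def loudest_windows(x, fs, win_s=5.0, top_n=3):
    """Return (start_time_s, rms_db) for the top_n loudest non-overlapping windows."""
    win = int(win_s * fs)
    n = len(x) // win
    rms = np.sqrt(np.mean(x[: n * win].reshape(n, win) ** 2, axis=1))
    order = np.argsort(rms)[::-1][:top_n]
    return [(i * win_s, 20 * np.log10(rms[i] + 1e-20)) for i in order]

# Synthetic example: quiet noise with one loud 5 s passage starting at 10 s
rng = np.random.default_rng(1)
fs = 8000
x = 0.05 * rng.standard_normal(fs * 30)
x[10 * fs : 15 * fs] *= 10
hits = loudest_windows(x, fs)   # first entry should point at the loud passage
```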

I don't apply different EQ to different parts of the song, though that could be done. I mostly listen to classical music, where there's typically one transfer function for the whole piece; since what I object to is a property of that function, my correction is universal for the piece. Studio mixes may not follow that trend.
In this case, my resultant spectrum looked like this:

Spectrum_EQ.png

It's a significant subjective improvement over the original, and it looks smoother as well. I export the equalized music to a new music file.

At that point I look to verify my work. I load the original song and the equalized version into foobar for AB comparisons. I like the interface foobar provides in their ABX testing module, which allows you to either continue the music when you switch, or return to a specific time and play from there. When I'm doing this listening, I'm asking myself which version sounds like it has been equalized. If I've done my job right, the equalized version sounds natural, and the original version sounds like it has had a bad EQ applied to it. I also ask myself if any change could make the music sound more natural or pleasant than my equalized version, and if so, I start the process over again with insight from my first attempt.

In the end I have a new version of the music that is about as natural/pleasant on my audio system as I can get it. From that point on, I listen to the EQed version and never touch the original again.

I have gone through this using my speakers as well as a nice set of studio headphones for playback. I've found that I can stray too far using headphones, and I'm more efficient / reliable using my speakers. I often do perform another check to make sure the result is good on my headphones as well, though.
 

bigjacko

Addicted to Fun and Learning
Joined
Sep 18, 2019
Messages
721
Likes
359
This is what I always wanted to do, but it is a lot of work even for only one song. It would not be a good way to get through a whole playlist. Doing the whole list would probably need some kind of algorithm or AI, but we will have to wait for someone to build it...
 
OP
BenB

BenB

Active Member
Joined
Apr 18, 2020
Messages
284
Likes
446
Location
Virginia
This is what I always wanted to do, but it is a lot of work even for only one song. It would not be a good way to get through a whole playlist. Doing the whole list would probably need some kind of algorithm or AI, but we will have to wait for someone to build it...

I've considered making attempts to automate the process before. In signal processing we have the concept of prewhitening. Often times it is used to flatten the long-term spectral trends, while still allowing short term trends to vary (there are numerous ways to whiten data, though). Sometimes I've laughingly thought of applying an adaptive "pinkening" filter to music. I suspect it would be just about impossible to get right reliably though.
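A toy version of that "pinkening" idea can be sketched in a few lines: estimate the track's smoothed long-term spectrum and apply one static gain curve that tilts it toward a pink (power proportional to 1/f) target. This is purely an illustration of the concept, not the author's code, and it sidesteps everything that makes an adaptive version hard (short-term variation, transients, artifacts). The reference frequency and smoothing width are arbitrary assumptions.

```python
# Sketch: static "pinkening" of a whole track via an FFT-domain gain curve.
import numpy as np

def pinkening_gain(x, fs, smooth_bins=51):
    """Gain per rFFT bin that tilts the long-term spectrum toward 1/f power."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    mag = np.abs(X)
    # crude spectral smoothing with a moving average over neighbouring bins
    kernel = np.ones(smooth_bins) / smooth_bins
    smooth = np.convolve(mag, kernel, mode="same") + 1e-12
    target = 1 / np.sqrt(np.maximum(f, 1.0))     # pink: power ~ 1/f
    gain = target / smooth
    gain /= gain[np.argmin(np.abs(f - 1000))]    # 0 dB reference at 1 kHz
    return X, gain

fs = 8000
rng = np.random.default_rng(2)
white = rng.standard_normal(fs * 4)
X, gain = pinkening_gain(white, fs)
pinked = np.fft.irfft(X * gain, n=len(white))   # white noise comes out pink-tilted
```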

I've re-balanced a few dozen songs / pieces so far. The better the music, and the more dire the need for equalization, the more likely I am to go through the trouble. I also wrote my own noise reduction algorithm, as well as a function to split music into tonal and atonal components. Doing that allows me to go in and edit things like coughs and page turns out of the atonal part, without affecting the musical content that is found almost exclusively in the tonal part. It's a cool trick that probably ought to catch on.
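The tonal/atonal split described above is the author's own code, but the general idea resembles classic harmonic/percussive separation: in a magnitude spectrogram, tonal energy forms horizontal ridges while clicks, coughs, and page turns form vertical ones, so median filtering along time versus frequency pulls them apart. Here is a generic HPSS-style sketch under that assumption, not a reconstruction of the actual algorithm:

```python
# Sketch: split a signal into "tonal" and "atonal" parts by median-filtering
# the STFT magnitude along time (tonal) vs frequency (atonal), then masking.
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import stft, istft

def tonal_atonal_split(x, fs, nperseg=1024, width=17):
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)
    tonal_env = median_filter(mag, size=(1, width))    # smooth along time
    atonal_env = median_filter(mag, size=(width, 1))   # smooth along frequency
    mask = tonal_env >= atonal_env                     # hard mask, for simplicity
    _, tonal = istft(Z * mask, fs=fs, nperseg=nperseg)
    _, atonal = istft(Z * ~mask, fs=fs, nperseg=nperseg)
    return tonal, atonal

# Example: a steady 440 Hz tone plus a short click, like a cough or page turn
fs = 8000
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 440 * t)
x[fs : fs + 40] += 5.0          # click at t = 1 s
tonal, atonal = tonal_atonal_split(x, fs)
```

Edits (fades, cuts) made on the atonal part alone then leave the tonal content untouched, which is what prevents the "ducking" mentioned later in the thread.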

At any rate, I wanted to post and share a few of my tricks in order to help reduce the amount of work it takes for someone starting out. Anyone who's gone through the Harman training will have an advantage, because they'll be able to identify what frequencies are to blame for any unpleasantness they are hearing.

That brings me to an interesting point. I mentioned that I typically let the loud parts of the music drive my EQ decisions. This is based on my preference that musical reproduction should never be unpleasant. There are rare instances where, when I go back to A/B my equalized track against the original, the original might be better for a large percentage (possibly even a majority) of the time. For example, I may have reduced the treble output so that the track is less piercing when it gets loud. Well, perhaps that extra treble detail is nice during the quieter times. This has relevance for A/B and ABX speaker comparisons as well. The speaker that "wins" an A/B comparison with another speaker most often may not be the one that would be the most pleasant to live with long term. I suspect this happens in showroom comparisons fairly often, and I've heard it mentioned here that the showroom "winner" often has a slightly rising response.
 

bigjacko

Addicted to Fun and Learning
Joined
Sep 18, 2019
Messages
721
Likes
359
I've considered making attempts to automate the process before. In signal processing we have the concept of prewhitening. Often times it is used to flatten the long-term spectral trends, while still allowing short term trends to vary (there are numerous ways to whiten data, though). Sometimes I've laughingly thought of applying an adaptive "pinkening" filter to music. I suspect it would be just about impossible to get right reliably though.

I've re-balanced a few dozen songs / pieces so far. The better the music, and the more dire the need for equalization, the more likely I am to go through the trouble. I also wrote my own noise reduction algorithm, as well as a function to split music into tonal and atonal components. Doing that allows me to go in and edit things like coughs and page turns out of the atonal part, without affecting the musical content that is found almost exclusively in the tonal part. It's a cool trick that probably ought to catch on.

At any rate, I wanted to post and share a few of my tricks in order to help reduce the amount of work it takes for someone starting out. Anyone who's gone through the Harman training will have an advantage, because they'll be able to identify what frequencies are to blame for any unpleasantness they are hearing.

That brings me to an interesting point. I mentioned that I typically let the loud parts of the music drive my EQ decisions. This is based on my preference that musical reproduction should never be unpleasant. There are rare instances where, when I go back to A/B my equalized track against the original, the original might be better for a large percentage (possibly even a majority) of the time. For example, I may have reduced the treble output so that the track is less piercing when it gets loud. Well, perhaps that extra treble detail is nice during the quieter times. This has relevance for A/B and ABX speaker comparisons as well. The speaker that "wins" an A/B comparison with another speaker most often may not be the one that would be the most pleasant to live with long term. I suspect this happens in showroom comparisons fairly often, and I've heard it mentioned here that the showroom "winner" often has a slightly rising response.
Wow, very nice work. I hope one day you can finish your tricks and show us some interesting stuff. You mentioned that you do noise reduction and edit out unwanted sounds; is that your own idea, or does it already exist? About prewhitening: it flattens the long term, but I thought flattening the short term, where the loudest or most intense parts are, would be better. What do you think about it?

How much signal processing can you do? You said you can split music into tonal and atonal parts; what other things are possible? I think some recordings are bright because one instrument or vocal is too bright. Is it possible to EQ only one instrument?
 

abdo123

Master Contributor
Forum Donor
Joined
Nov 15, 2020
Messages
7,425
Likes
7,941
Location
Brussels, Belgium
I found weak bass below 200 Hz, and an elevated plateau between 1 kHz and 5 kHz.

On what basis did you figure that out? No offense but if you want to modify the mix in a meaningful way you have to justify your choices.

Otherwise you're just doing a remix but using a lot of fancy words for it. Which is fine of course, but this forum is focused on reproduction of music rather than production.

In a nutshell is this in the context of 'restoration' or remixing?
 
OP
BenB

BenB

Active Member
Joined
Apr 18, 2020
Messages
284
Likes
446
Location
Virginia
On what basis did you figure that out? No offense but if you want to modify the mix in a meaningful way you have to justify your choices.

Otherwise you're just doing a remix but using a lot of fancy words for it. Which is fine of course, but this forum is focused on reproduction of music rather than production.

In a nutshell is this in the context of 'restoration' or remixing?

For anyone who's interested in what to expect from a musical spectrum, and perhaps a target to shoot for, you can use figures 2, 4, 5, and 6 in the following paper:
"Long-term Average Spectrum in Popular Music and its Relation to the Level of the Percussion" by Elowsson and Friberg.


Figure 5 looks like this but with more bandwidth and axis labels (10 dB per tick on the y axis):
(Figure: two quadratic fittings, blue and black, overlaying the mean LTAS in grey)



Additionally, or alternatively, you could attend live performances of music in the same genre and rely on your ears. Personally, I've been to the symphony twice in the last 3 weeks.

While it's probably true that my adjustments typically bring things more in-line with genre norms, and from that perspective could be seen as something of a restoration, I think such an assessment misses the point. The point is about enjoyment. I have built a system based on my insights and interpretations of the science available. Others have done the same. But annoyingly, that doesn't necessarily lead to optimum enjoyment of every bit of music. For a long time I allowed myself to be entirely at the mercy of the recording engineers and mixers, and I'd often find myself deciding whether I wanted to listen to music I really liked, or music that was very well recorded, because the intersection of those groups was too small.

There's only so much I can do with stereo music products that are already mixed when I find the mix to be poor. But I have been able to make changes that are thoroughly appreciated by 100% of the intended audience, which consists of me, myself, and I. I suspect that others here are capable of doing the same, and perhaps they would be inclined to do so with a few recommendations regarding tools and process.
 
Last edited:
OP
BenB

BenB

Active Member
Joined
Apr 18, 2020
Messages
284
Likes
446
Location
Virginia
Wow, very nice work. I hope one day you can finish your tricks and show us some interesting stuff. You mentioned that you do noise reduction and edit out unwanted sounds; is that your own idea, or does it already exist? About prewhitening: it flattens the long term, but I thought flattening the short term, where the loudest or most intense parts are, would be better. What do you think about it?

How much signal processing can you do? You said you can split music into tonal and atonal parts; what other things are possible? I think some recordings are bright because one instrument or vocal is too bright. Is it possible to EQ only one instrument?

Thanks for your interest. I don't have many tricks, but they are done and they do work. There are plenty of noise reduction algorithms on the market. In all honesty, mine works only slightly better than the built-in version in Audacity, which runs 50x faster than mine. For me it's worth it to wait, though. I have a low tolerance for noise in my recordings, but an even lower tolerance for artifacts. Similarly, there are programs that allow for editing out unwanted sounds. By separating the tonal and atonal components, I prevent "ducking" (temporary lowering in level) of the tonal components as a result of my edits, which is sort of a big deal.
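The author's noise reduction algorithm isn't published, but the textbook baseline it competes with, spectral subtraction, can be sketched briefly. The over-subtraction factor and spectral floor below are illustrative parameters, and their tuning is exactly the trade-off between residual noise and audible artifacts described above.

```python
# Sketch: classic spectral subtraction using a noise profile taken from a
# signal-free clip. Parameters (over, floor) are illustrative assumptions.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(x, fs, noise_clip, over=1.5, floor=0.05, nperseg=1024):
    """Subtract a per-bin noise magnitude estimate, keeping the noisy phase."""
    _, _, N = stft(noise_clip, fs=fs, nperseg=nperseg)
    noise_mag = np.abs(N).mean(axis=1, keepdims=True)   # average noise profile
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)
    cleaned = np.maximum(mag - over * noise_mag, floor * mag)  # spectral floor
    _, y = istft(cleaned * np.exp(1j * np.angle(Z)), fs=fs, nperseg=nperseg)
    return y

# Example: a 440 Hz tone buried in broadband noise
fs = 8000
rng = np.random.default_rng(3)
t = np.arange(2 * fs) / fs
noise = 0.1 * rng.standard_normal(2 * fs)
noisy = np.sin(2 * np.pi * 440 * t) + noise
denoised = spectral_subtract(noisy, fs, noise_clip=noise[: fs // 2])
```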

Regarding source separation, that's more of an artificial intelligence problem than a signal processing problem. To my knowledge, no one has really mastered it, but progress is pretty consistent. There are programs that are pretty good at separating voice from other sounds/music. Here's a pretty good article about generalized source separation:


I agree that often times the culprit for unpleasantness in music may be the way a particular instrument was recorded or mixed, and being able to isolate that instrument would be a huge benefit to fixing the issue. Unfortunately, that's a problem I have no solution for. All I could do would be to ask the producers for digital copies of the masters.

If I'm understanding your whitening comments, it sounds like you are suggesting spreading the bandwidth of transients... is that right? I've never felt inclined to do that, but I'll give it some thought.
 

bigjacko

Addicted to Fun and Learning
Joined
Sep 18, 2019
Messages
721
Likes
359
If I'm understanding your whitening comments, it sounds like you are suggesting spreading the bandwidth of transients... is that right? I've never felt inclined to do that, but I'll give it some thought.
Thank you very much for the detailed response. What I meant was: why whiten the long term? In some parts of the song the drums may not be playing, so the long-term spectrum will lean toward the bright side. If we then bring that long-term spectrum back to neutral, we have to make the whole song darker, which is not what we want.

I was thinking that when there is a problem at some instant, we should fix it only at that instant, and not touch other places if they are good.
 