I have the impression that mastering engineers design their own gear, use rare in-house or analogue devices, and yet approach each project from scratch, thinking about how it can be tweaked to sound as good as possible with the minimum of intervention.
Many of them have authored books, and online courses have been created to impart some of these skills.
In the past, recording studios were far more common, but my impression of recording and mixing is that they are difficult and complex, and seem far beyond the reach of many casual listeners.
At this point, automated mastering services are appearing, but regardless, many content creators will still opt to 'master' audio on their own.
Is it possible for any audiophile to practise what a mastering engineer does? Can a person, with or without studio experience, set up a good studio environment according to the books, use smaller but well-performing studio monitor speakers (e.g. Adam Audio), plus mastering software, plugins, studio monitor headphones and so on, to master their own streaming content? Aren't the audiophiles who think about these things at the highest and deepest levels basically mastering engineers, even if they don't work professionally as such?
If this is the case, mastering engineers will quickly become unnecessary, since AI and amateurs could easily replace the profession.
It is arguable that digital plugins, with much trial and error, are capable of recreating what was once only possible with analogue gear.
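As a tiny illustration of the kind of processing such plugins perform, a tanh waveshaper is one of the simplest digital stand-ins for analogue-style saturation. This is a sketch of the general technique, not any particular product's algorithm, and the `drive` parameter is an illustrative knob of my own naming:

```python
import numpy as np

def soft_saturate(x, drive=2.0):
    """tanh waveshaper: a minimal digital stand-in for analogue
    saturation. Higher drive pushes the signal further into the
    curve, adding odd harmonics; the output is normalised so a
    full-scale input (1.0) still maps to full scale.
    """
    return np.tanh(drive * np.asarray(x)) / np.tanh(drive)
```

Real analogue-modelling plugins go much further (frequency-dependent behaviour, hysteresis, oversampling to control aliasing), but the core idea of a nonlinear transfer curve is the same.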
It seems it takes more than just being an audiophile to make a mastering engineer. Mastering requires a kind of objective stance: if it's your own content, it may be difficult to make it its best. Professional mastering works best if the person has a grasp of a wide range of music, an ability to enhance the emotional experience, and does not focus so heavily on 'sound quality' that it detracts from the emotion. The person should have experience playing an instrument and have studied electrical engineering or the like, not merely be a passive listener.
Overall, since mastering became firmly established as a profession in the 20th century, the only way people can enter the field is by working at one of the existing studios. Obviously, skills such as vinyl authoring can only be gained there. In the digital realm, trial and error could perhaps get you to the same level, but that is limited by how much skill and experience you already have from working at a proper studio. Most mastering engineers would need to be in a studio affiliated with or directed by a larger corporation to remain in business without too much trouble.
But let's throw common sense out the window. Ignoring the need for speakers you understand deeply, and from which you can extrapolate by analogy to imagine even larger speakers: what DAC does a mastering studio use? Can a Sonata HD Pro suffice? Can Audacity serve as mastering software? Can studio monitor headphones like Sony's M1ST, Yamaha's MT8 or Audio-Technica's ATH-M series suffice? And ultimately, if most consumers listen on the most commonplace, worthless gear, isn't it proper for mastering to take place using just a DAC and headphones? In most cases there would likely be no issues with how it sounds on a radio, a TV, a car, or a high-end setup, although one could simply test it out rather than rely on analogy and experience with professional studio speakers.
Can every audiophile be a mastering engineer with only a Sonata HD Pro, a computer, and ATH-M40x?
Hi Saidera.
I can only speak from my own experience, which includes 54 years as a musician, 50 recording at various times, 25 mixing, 12 mastering, and 48 years as an Electrical/Electronics Technologist.
Stereo music playback (and surround) is a wonderful illusion that tries its best to trick our brain and ears into making the equipment disappear and the music enjoyable and immersive.
You can't use headphones: they isolate each ear, so you lose the sound that is naturally heard a little later and a little softer by the ear on the opposite side of the sound source. See Head-Related Transfer Function (HRTF).
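To illustrate what headphones take away, here is a minimal crossfeed sketch that feeds a delayed, attenuated copy of each channel into the opposite ear, roughly mimicking the natural crosstalk loudspeakers provide. The delay and attenuation values are illustrative approximations, not a measured HRTF, and `simple_crossfeed` is a name I've made up for the sketch:

```python
import numpy as np

def simple_crossfeed(left, right, sample_rate=44100,
                     delay_us=300, attenuation_db=-6.0):
    """Feed a delayed, attenuated copy of each channel into the
    opposite channel, crudely mimicking the acoustic crosstalk
    that speakers provide but headphones remove.

    delay_us ~ 250-300 us approximates the interaural time
    difference for a speaker at +/-30 degrees; attenuation_db
    stands in for head shadowing. Both values are illustrative.
    """
    n = int(round(sample_rate * delay_us / 1e6))
    gain = 10 ** (attenuation_db / 20.0)
    delayed_l = np.concatenate([np.zeros(n), left])[:len(left)]
    delayed_r = np.concatenate([np.zeros(n), right])[:len(right)]
    return left + gain * delayed_r, right + gain * delayed_l
```

A real HRTF also filters the crosstalk frequency-dependently (the head shadows highs more than lows), which is why simple crossfeed only partially restores the speaker illusion.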
The bass region is the "Wild West" of sound on both sides. First, everyone's listening room has huge effects on it, and everyone's speakers have huge effects. Second, the same is true for 99% of the people mixing the music: their monitors and rooms are hardly ever anywhere close to neutral in the bass region.
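One quick way to see why every room misbehaves differently down low: the axial standing-wave frequencies for a single room dimension follow f_n = n·c/(2L), so each room gets its own set of bass peaks and nulls. This is a simplified sketch (real rooms also have tangential and oblique modes, and walls are not perfectly rigid):

```python
def axial_modes(length_m, max_hz=200.0, speed_of_sound=343.0):
    """Axial standing-wave frequencies for one room dimension:
    f_n = n * c / (2 * L). Below roughly 200 Hz these modes
    cause the large bass peaks and nulls that differ from
    room to room.
    """
    modes = []
    n = 1
    while True:
        f = n * speed_of_sound / (2.0 * length_m)
        if f > max_hz:
            return modes
        modes.append(round(f, 1))
        n += 1
```

For a 4 m dimension this gives modes near 43, 86, 129 and 171 Hz; a 3.5 m room gives a completely different set, which is part of why no two rooms agree about bass.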
Both homes and studios have a fair chance at the upper bass, midrange and high end, but most do not truly have the speakers, room treatment and rigorous verification measurements to be confident of what they are hearing.
How can you measure that 2×4 to 112 and 3/32 inches when your tape measure only has 1/4-inch markings?
Then, once you have a trustworthy, high-resolution, verified, bass-resolving Mastering playback system, what is the target when the listener still doesn't have one? It is accurate and controlled bass. People who master for vinyl have to sum the lower bass to mono and heavily compress it so that the master can be cut and anyone with a turntable has a chance of playing it. In home studios, people have tools that can saturate the lower bass easily and store it digitally; there are no limits they have to meet to keep a needle tracking in a groove. They can put out anything into the Wild West, unless they know what is truly there and how to control it.
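The vinyl constraint above, summing the lower bass to mono, can be sketched as a crude stand-in for the "elliptical EQ" used when cutting lacquers: low-pass each channel, replace the two low bands with their mono sum, and leave the highs untouched. The 150 Hz crossover and the filter order are illustrative choices, not a cutting-lathe specification:

```python
import numpy as np
from scipy.signal import butter, lfilter

def mono_bass(left, right, sample_rate=44100, crossover_hz=150.0):
    """Sum stereo content below crossover_hz to mono, a rough
    stand-in for elliptical EQ: large out-of-phase bass makes a
    vinyl groove hard to cut and hard to track, so the lows are
    collapsed to mono while the highs keep their stereo image.
    """
    b, a = butter(2, crossover_hz / (sample_rate / 2.0), btype="low")
    low_l = lfilter(b, a, left)
    low_r = lfilter(b, a, right)
    high_l = left - low_l              # complementary high band
    high_r = right - low_r
    low_mono = 0.5 * (low_l + low_r)   # mono sum of the lows
    return high_l + low_mono, high_r + low_mono
```

Note that if the bass is already in phase on both channels, this processing passes it through unchanged; it only acts on the out-of-phase (side-channel) low end that a cutting head can't handle.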
Then I see two different streams of recording, music creation and Mastering.
One is a world where from the performance, recording, to mixing and then Mastering the object is to get the listener to hear as close to the original performance as if they were in the best seat in the concert hall or venue where it took place. To “close your eyes and be there.”
The second can be a 100% creative adventure, or a hybrid. For example, the performances happen in one or more studios, and the recorded pieces can be creatively manipulated afterward: re-amping an electric guitar with different effects than when it was recorded, adding artificial reverberation, changing the pitch of an instrument, auto-tuning a voice, making multiple copies of an instrument with various delays added, and on and on.
In the first case, my favourite, I believe what is required is a Mastering Engineer who has musical experience playing and performing, has experienced various concert venues, has been in the audience at various venues, has continuously refined their knowledge and craft of capturing live performances, and has a thorough understanding of psychoacoustics and of the tools and techniques that affect it (not just glossy plug-ins found in magazines, and buzzwords). Such a person can listen, understand why the mix sounds the way it does, and work out how to optimize it to bring it closer to that wonderful illusion of "being there."
All without becoming a musical influencer or creative cog: making the flute sound like a recorder, or making the band sound like it is in a medieval church when it was at an outdoor band shell.
In the second case, the Mastering engineer must still understand what is appropriate, must ask for the producer's concept if it isn't obvious, must listen to the producer for instruction on the form, and then can aim for the true target.
In both cases, if it is an album (not a single), then the order of the tracks must be known, so that the natural flow and transitions between songs can be handled correctly, the music sounds like it logically sits in the proper environment (a church next to a basement mix??), and tweaking doesn't reveal a mixing weakness now made more noticeable by the clarity brought about by Mastering. This happens a lot; it requires the track to be sent back to the mixer for adjustment and, if done properly, detailed communication between the mixing and Mastering engineers.
This brings a comment about AI and flashy Mastering Plug-Ins.
How can AI know all of this and be able to listen to producers, writers, musicians and mixing engineers? How can it know what environment is appropriate?
When a customer arrives to listen to their Mastered project for the first time, I start by playing their first track as mixed and sent to me for Mastering. The reaction, 100% of the time, has been "wow, it didn't sound this great when we were mixing it on....." and "I can hear the detail on...."
Next I play them the same track as Mastered. 100% of the reactions have been a 360-degree head spin, looking between their band mates, producer, mixing engineer and me with their mouths hanging open.
Then I play them the complete project in order, with their requested pause interval between tracks (normally 1 or 2 seconds).
So those are an insider's observations.
Cheers and Stay well.
Tom eh