
Avantone CLA-10 (Yamaha NS-10M Clone) Review

Rate this studio monitor

  • 1. Poor (headless panther): 153 votes (90.0%)
  • 2. Not terrible (postman panther): 7 votes (4.1%)
  • 3. Fine (happy panther): 4 votes (2.4%)
  • 4. Great (golfing panther): 6 votes (3.5%)
  • Total voters: 170

PeteL

Major Contributor
Joined
Jun 1, 2020
Messages
3,303
Likes
3,851
It is not either/or. It could be both.
Of course, if the recording industry gets to know exactly what the result will be at the output, they will be more than happy to comply; in fact, it will be a great burden removed from their shoulders! "The consumer gets to really hear what I mix! Hell yeah, let's pop the Champagne!"

It's probably pointing to the greatest source of despair for these artists/engineers: working so hard on something that then gets butchered by poor reproduction systems.
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
7,079
Likes
6,959
Location
UK
Good grief. Nothing like that is proposed. This is a proposal for new content, not retroactive changes to what is already there. Music is produced and mastered to sound great on a full-range, performant monitor (with specified targets for in-room response). A new version is derived from that based on player equalization. The metadata that does that is also used in the studio to approve that version.

As an example, you could have a profile for a phone that would only play from 300 Hz to 8 kHz. The content producer would then apply an EQ profile to the full-range version above to get optimal performance for such a device. That response would be simulated in the DAW, and the playback system would mimic the same.
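A minimal sketch of that "simulate in the DAW" step, reusing the 300 Hz to 8 kHz phone profile from the example above. The choice of a Butterworth band-pass and its order are my own assumptions for illustration, not any actual profile format:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def device_profile_filter(fs, low_hz=300.0, high_hz=8000.0, order=4):
    """Band-pass filter approximating a device that only reproduces
    low_hz..high_hz (here: the hypothetical phone profile)."""
    return butter(order, [low_hz, high_hz], btype="bandpass", fs=fs,
                  output="sos")

def simulate_device(signal, fs):
    """Apply the device profile so the derived version can be auditioned."""
    sos = device_profile_filter(fs)
    return sosfilt(sos, signal)

fs = 48_000
t = np.arange(fs) / fs
# A 50 Hz tone (below the phone's range) plus a 1 kHz tone (within it).
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 1000 * t)
y = simulate_device(x, fs)
# The 50 Hz component is strongly attenuated; the 1 kHz tone passes.
```

In practice the profile would come from metadata rather than hard-coded constants, but the audition path (filter the full-range master, listen, approve) is the same.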

Again, all of this is already done in Dolby encoding when content is downmixed. The creator approves stereo, 3.0, 5.1, etc. Nothing is done with AI, etc.
This is the other big problem with the industry. They continue to think there needs to be one mix/master for the entire world. That made sense when labels had to make a CD for everyone to play. Today, labels don't even distribute content, and everything we consume is digital. Yet production is still stuck on one version. Major streaming platforms now provide "high-res"/lossless tiers. Why not have that be the absolute best fidelity and not dumbed down for the masses? This is why we suffer with loudness compression.

For a while there was talk of high-res distribution being better but that seems to have disappeared and we are back to one mix/master for all.

It is stuff like this that makes me wonder if this industry really cares about quality. For video, we can scale from DVD at 480p, to Blu-ray at 1080p, and UHD at 2160p (and digital versions of the same). But for audio, despite far smaller file sizes, folks insist on force-feeding all of us the same bits.

So no, we don't want to hear about constraints when the technology barrier is no longer there.
It would be great to have these things implemented in audio, but I think it would be a big endeavour to get it all joined up so that whatever device somebody is using automatically plays the correct version of the track (or applies EQ to the original full-range version to adapt it for the device).

The device would have to communicate its audio capabilities to the player, and the player would then either choose the correct version of the track or apply the optimal EQ to the full-range version. That would mean every device going through some kind of accreditation process for its audio capabilities, so that it could communicate its real capabilities to the player accurately. Software player and OS designers would then have to comply with that accreditation process too.

So it would have to be a joined-up process running from device manufacturers, through the software developers of players & OSes, to the music creators themselves. It's a pretty extensive joined-up process: how easily can it be achieved, and how do you get all parties to work together on it? I'd like to see it happen. (If something like that were relatively easily achievable, it seems only a small extra step to agree on an optimal, named standard for what studios & consumer gear should aim for in terms of full-range frequency response, e.g. anechoic flat and maybe some other associated parameters.)
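The capability handshake described above can be sketched in a few lines. Everything here is hypothetical: the version names, the frequency-range metadata, and the selection rule are made up to show the shape of the idea, not any existing scheme:

```python
from dataclasses import dataclass

@dataclass
class MixVersion:
    name: str
    low_hz: float   # lowest frequency the version is mastered for
    high_hz: float  # highest frequency the version is mastered for

# Hypothetical catalogue of creator-approved versions of one track.
VERSIONS = [
    MixVersion("full_range", 20, 20_000),
    MixVersion("laptop", 120, 16_000),
    MixVersion("phone", 300, 8_000),
]

def pick_version(device_low_hz, device_high_hz):
    """Choose the approved version that best matches the frequency
    range the device reports (its accredited capability claim)."""
    # Prefer versions the device can fully reproduce; among those,
    # take the widest-bandwidth one.
    playable = [v for v in VERSIONS
                if v.low_hz >= device_low_hz and v.high_hz <= device_high_hz]
    if playable:
        return max(playable, key=lambda v: v.high_hz - v.low_hz)
    # Otherwise fall back to the version with the most band overlap.
    def overlap(v):
        return min(v.high_hz, device_high_hz) - max(v.low_hz, device_low_hz)
    return max(VERSIONS, key=overlap)
```

A phone reporting 280 Hz to 9 kHz would get the "phone" version, while a full-range system would get "full_range". The hard part, as the post says, is not this logic but getting every device to report an honest, accredited range.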
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
7,079
Likes
6,959
Location
UK
I’m not sure what the benefit of creating a specific phone mix that only included 300Hz-8kHz is?

Wouldn’t that be a disincentive for phone manufacturers to improve how low and high their phones went?

For example, Apple has started using miniature dual opposing “sub woofers” in their latest laptops to improve the frequency response. I can see that technology trickling down to iPhones in the future.

If the phone standard didn't include content below 300 Hz, then the standard would need to change whenever the technology improves. So with every year's iPhone update?

And if one manufacturer comes up with a new technology that improves the bass response, but other manufacturers can’t use that patent what happens then? Or does each different device get a standard preset?

Perhaps no one would bother improving the bass response of future phones if the standard phone mastering setting were implemented and not adjusted regularly?
I don't think it would stymie hardware manufacturers' incentive to improve their playback devices, because audio quality is judged by listeners: if a manufacturer develops, say, a flatter frequency response and a greater frequency range, the end user will notice the improvement in sound quality, which results in better reviews & better sales of that device.

For that to happen within the framework of "which version of the mix to choose", the accreditation standards would have to be quite fine-grained and allow for iterative improvements in a device's usable frequency range. It might not be good enough to say there is a phone mix or a laptop-speaker mix, because not all those devices are equal in their abilities.

On the flip side, you maybe can't have too fine-grained an ability for the device to alter the mix if you also want the creator to approve each mix version, unless they approve that their mix can be used on phones anywhere from, say (making this up), [100 Hz-300 Hz] to [10,000 Hz-20,000 Hz], those being the flexible lower & upper limits respectively.
 

Mnyb

Major Contributor
Forum Donor
Joined
Aug 14, 2019
Messages
2,880
Likes
4,050
Location
Sweden, Västerås
Why master for those who don't care about sound quality at all? Just master for the best possible playback; the other users seem to listen regardless of sound quality anyway. Lesser playback systems only need to apply some device-specific compression and EQ and we are done; I suspect your generic BT speaker does this already.

Another data point: Ace of Base's "The Sign", the worst song I know in every aspect; it also sounds terrible on everything (it's sickeningly bad sound)! If this is what translation does to production values, I don't want any of it! It would need to be completely different mixes of the tracks, not only EQ/mastering, if things like this are going to be saved.

It has exactly the kind of bass everyone says they try to avoid but every pop song has anyway: the generic "thump" that goes on instead of actual drum and bass instruments. Why does rock and pop have to sound like that? Mud, anyone?
 

goat76

Major Contributor
Joined
Jul 21, 2021
Messages
1,378
Likes
1,561
@Blumlein 88

Earlier in the thread, I said that a good mix translation will make the mix sound better both on the best flat-sounding, full-range speakers out there and on more limited speakers. Good mix translation is not about making the mix sound better on the lesser speakers while introducing artifacts and obvious compromises when listening on the better speakers; otherwise, it can simply no longer be called "a good mix translation".

Just because you use a frequency-limited loudspeaker such as the NS-10s at some point in the mixing to make sure that all the sound objects in the mix share the right balance in the midrange, that doesn't mean the mix will sound worse on the best flat-sounding full-range speakers; quite the opposite, the mix will most likely sound better on those speakers too.



I am listening to the old classic songs "Born in the U.S.A." by Bruce Springsteen and "Start Me Up" by the Rolling Stones right now to hear if I can pinpoint any artifacts that we can blame on the use of NS-10s in those two productions. I am flipping between my full-range HiFi system and my iPhone's built-in speakers, and I think these mixes translate well to the very limited speakers in the phone: nothing totally disappears, the balance between the sound objects stays pretty much intact, the punch of the kick drum is still there, and the bass guitar can still be heard rather than disappearing.

I can't hear anything wrong with either of those songs on my full-range system, and they most likely sound even better thanks to the balanced-sounding midrange, which may have to do with the NS-10s being used in those productions to ensure a balanced midrange, a balance that also makes sure the mix translates well to lesser speaker systems.



Many people here seem to think that making sure the mix sounds good on the lesser speaker systems will automatically mean a bunch of compromises to the sound quality for the best full-range speakers, but that is not the case with a good mix translation.

You could argue that you could take a better-measuring loudspeaker than the NS-10s and just limit its bandwidth so that you mostly hear the midrange while checking the balance between the sound objects in the mix, and you could probably do that. But the NS-10s seem to have had a few more good qualities than that, such as a short decay time that led to a perceived good transient response, so make sure your "better" speakers can compete on those qualities as well.

We can all keep arguing back and forth about why so many mixing engineers thought the NS-10s were such a good tool for making their mixes translate so well, according to them. I think these (in many ways) lousy-measuring speakers really had a combination of qualities that simply worked well for finding faults in the midrange, such as inaccurate reverb decay times, for setting suitable attack and release times for compression, and for helping the mixing engineer hear when the balance between the sound objects was right in the critical midrange frequency area (which in most cases is the highway to good mix translation).
 

goat76

Major Contributor
Joined
Jul 21, 2021
Messages
1,378
Likes
1,561
Why master for those who don't care about sound quality at all? Just master for the best possible playback; the other users seem to listen regardless of sound quality anyway. Lesser playback systems only need to apply some device-specific compression and EQ and we are done; I suspect your generic BT speaker does this already.

Another data point: Ace of Base's "The Sign", the worst song I know in every aspect; it also sounds terrible on everything (it's sickeningly bad sound)! If this is what translation does to production values, I don't want any of it! It would need to be completely different mixes of the tracks, not only EQ/mastering, if things like this are going to be saved.

It has exactly the kind of bass everyone says they try to avoid but every pop song has anyway: the generic "thump" that goes on instead of actual drum and bass instruments. Why does rock and pop have to sound like that? Mud, anyone?

No, that's not what translation does if we're talking about mixing; that's just a bad audio production that got killed even further in the mastering process.

I must turn that off now and say I'm sorry to my audio system. :)
 

Zensō

Major Contributor
Joined
Mar 11, 2020
Messages
2,753
Likes
6,777
Location
California
It would be great to have these things implemented in audio, but I think it would be a big endeavour to get it all joined up so that whatever device somebody is using automatically plays the correct version of the track (or applies EQ to the original full-range version to adapt it for the device).

The device would have to communicate its audio capabilities to the player, and the player would then either choose the correct version of the track or apply the optimal EQ to the full-range version. That would mean every device going through some kind of accreditation process for its audio capabilities, so that it could communicate its real capabilities to the player accurately. Software player and OS designers would then have to comply with that accreditation process too.

So it would have to be a joined-up process running from device manufacturers, through the software developers of players & OSes, to the music creators themselves. It's a pretty extensive joined-up process: how easily can it be achieved, and how do you get all parties to work together on it? I'd like to see it happen. (If something like that were relatively easily achievable, it seems only a small extra step to agree on an optimal, named standard for what studios & consumer gear should aim for in terms of full-range frequency response, e.g. anechoic flat and maybe some other associated parameters.)
If I understand what you’re saying, you’re essentially describing what Dolby is trying to do with Atmos object-based mixing, though on a more rudimentary level at this point. Unfortunately, most Atmos downmixes (stereo, binaural) don’t sound as good as purpose-made stereo mixes, which is why Apple Music is requiring stereo masters be submitted along with Atmos masters.

 

Mnyb

Major Contributor
Forum Donor
Joined
Aug 14, 2019
Messages
2,880
Likes
4,050
Location
Sweden, Västerås
No, that's not what translation does if we're talking about mixing; that's just a bad audio production that got killed even further in the mastering process.

I must turn that off now and say I'm sorry to my audio system. :)
I'm sorry, did you actually listen to the track? My condolences. :) To others: we have already done it, no one else needs to; goat76 took one for the team.
 

lowkeyoperations

Senior Member
Joined
Sep 13, 2021
Messages
303
Likes
296
I just listened to that Ace of Base song on Spotify. And apparently over 240 million other people had already done that :p
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
7,079
Likes
6,959
Location
UK
If I understand what you’re saying, you’re essentially describing what Dolby is trying to do with Atmos object-based mixing, though on a more rudimentary level at this point. Unfortunately, most Atmos downmixes (stereo, binaural) don’t sound as good as purpose-made stereo mixes, which is why Apple Music is requiring stereo masters be submitted along with Atmos masters.

Well, I don't think that's really a useful comparison for explaining what I was saying (because it doesn't really explain or relate to it); I think the content of my post is probably the better way to understand it, ha... but I was just expanding on Amir's idea and thinking about how it would work in practice across the industry as a whole.
 

Zensō

Major Contributor
Joined
Mar 11, 2020
Messages
2,753
Likes
6,777
Location
California
It was this line that made me think of object-based mixing, which uses metadata to direct the player to arrange objects based upon the capabilities of the system.

The device would have to communicate to the player what it's audio capabilities are, and then the player would either choose the correct version of the track (or apply optimal EQ to the full range version of the track).
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
7,079
Likes
6,959
Location
UK
It was this line that made me think of object-based mixing, which uses metadata to direct the player to arrange objects based upon the capabilities of the system.
Correct, but only inasmuch as the capabilities of the system have to be communicated to the player in order for the right version or the right EQ to be used, i.e. whichever is compatible/relevant to the device. So yeah, but that's all... I don't think it really helps explain what I was saying, but no problem of course.
 

goat76

Major Contributor
Joined
Jul 21, 2021
Messages
1,378
Likes
1,561
Correct, but only inasmuch as the capabilities of the system have to be communicated to the player in order for the right version or the right EQ to be used, i.e. whichever is compatible/relevant to the device. So yeah, but that's all... I don't think it really helps explain what I was saying, but no problem of course.

This strange idea that different mixes are needed for different reproduction devices must come to an end. :)
If the music mix already has good translation, then that is all that is needed for it to sound the best possible on every device it is played on, at least as long as no outside factors are at play.

An outside factor can be that the device doesn't have sufficient power to play music files with high dynamics. Then the solution is to replace it with a more powerful device.

Another outside factor can be that the listening environment has too much noise for the most dynamic music to be heard in a satisfying way, like in a car, a noisy store, or similar noisy environments. In that case, the solution would be for the reproducing devices to have a built-in compressor and/or limiter to tame the dynamics of the music so that it can be heard in those noisy environments, similar to how some radio channels apply different amounts of compression depending on whether it's daytime or nighttime. So yeah, that would be a welcome change.
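The device-side compression idea could look something like this in its crudest form. This is a static, sample-by-sample sketch with made-up threshold and ratio values; a real device-side compressor would add a level detector with attack/release smoothing:

```python
import numpy as np

def compress(signal, threshold=0.25, ratio=4.0):
    """Static compressor: samples whose absolute value exceeds the
    threshold are scaled down by the ratio, narrowing the gap between
    loud peaks and quiet detail so the quiet parts survive a noisy
    listening environment."""
    out = signal.copy()
    over = np.abs(out) > threshold
    out[over] = np.sign(out[over]) * (
        threshold + (np.abs(out[over]) - threshold) / ratio)
    return out

x = np.array([0.1, 0.5, -0.9, 0.2])
y = compress(x)
# Peaks are tamed: 0.9 -> 0.25 + 0.65/4 = 0.4125; samples below the
# threshold pass through unchanged.
```

The point of putting this in the device rather than the master is exactly the one made above: the full-dynamics mix stays intact, and the squashing happens only where the environment demands it.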



The best-sounding mixes with good translation don't need any changes at all; they already sound the best possible on every device from a pure sound-quality point of view, because they already have good translation.
You see, there's nothing wrong with all of the sound objects in the mix also sounding well-balanced in the midrange; that is a correction the mix needed in order to be a better mix, even if it wasn't as obvious on the full-range speakers as it was on the dynamically limited speakers. The nice, good-sounding bass on the full-range speaker was simply hiding that things were missing in the midrange area.

People here must start to understand that good translation goes both ways. The dynamically limited speaker used in those productions was a tool to ensure that the mix had good translation, which means the mix will sound the best possible on everything it is played on.
 

Zensō

Major Contributor
Joined
Mar 11, 2020
Messages
2,753
Likes
6,777
Location
California
The dynamically limited speaker used in those productions was a tool to ensure that the mix had good translation, which means the mix will sound the best possible on everything it is played on.
There's no way to ever know if a particular mix will sound the best possible on every device it's played on. How would one determine this? This is why software sims are so useful: while not perfect, they make it possible to test on a much wider array of potential devices than hardware ever could.
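Such software sims typically impose a measured device response on full-range material. A minimal sketch of that idea, using a linear-phase FIR built from a completely made-up, phone-speaker-like magnitude response (the frequency/gain points below are illustrative, not measurements of any real device):

```python
import numpy as np
from scipy.signal import firwin2, lfilter

def device_sim_fir(freqs_hz, gains_db, fs, numtaps=1025):
    """Build a linear-phase FIR whose magnitude response follows the
    given (frequency, gain) points, so a mix can be auditioned 'on'
    that simulated device inside the DAW."""
    nyq = fs / 2
    f = np.clip(np.array(freqs_hz, dtype=float) / nyq, 0.0, 1.0)
    g = 10.0 ** (np.array(gains_db, dtype=float) / 20.0)
    return firwin2(numtaps, f, g)

fs = 48_000
# Hypothetical phone-speaker-like response: no deep bass, rolled-off top.
taps = device_sim_fir([0, 150, 300, 4000, 8000, 24000],
                      [-60, -24, 0, 0, -12, -60], fs)
# Convolve any mix with `taps` to hear the simulated device:
#   simulated = lfilter(taps, [1.0], mix)
```

Commercial sims add nonlinearities and power limits on top of this, but even a plain magnitude-response FIR lets one mix be checked against many device profiles far faster than swapping hardware.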
 

goat76

Major Contributor
Joined
Jul 21, 2021
Messages
1,378
Likes
1,561
There's no way to ever know if a particular mix will sound the best possible on every device it's played on. How would one determine this? This is why software sims are so useful; while not perfect, they make possible testing on a much wider array of potential devices than hardware ever could.

Do we really need to consider the wackiest one-of-a-kind speakers in this discussion?
If some extreme speakers need extreme alterations to the mixes just to get into the ballpark of sounding like most other speakers, I'm happy to leave them out of consideration.
 

dfuller

Major Contributor
Joined
Apr 26, 2020
Messages
3,444
Likes
5,326
Why master for who that does not care for sound quality at all ?
Nobody's saying that you should mix for them specifically, just that they shouldn't be unnecessarily shortchanged.
 

RobL

Addicted to Fun and Learning
Joined
Mar 4, 2021
Messages
948
Likes
1,618
Ok, I’m a little confused by this discussion. Can someone explain to me why a great-sounding mix made on good speakers (good in ASR terms) needs to be checked and/or optimized to sound good on ****** speakers? Shouldn’t something optimized to sound its best on good speakers sound fine on everything else?
If not, doesn’t that mean we end up with a compromised product?
Admitted “mixing ignoramus” here, so go easy on me.
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,849
Likes
243,393
Location
Seattle Area
I am not sure why you keep blaming the recording industry for the fact that there are no standards for speakers. Can you make it clear just one time so we understand what you are talking about? Do you want standards for content, or do you want standards for playback devices? Once you answer this question clearly, maybe we can have a constructive discussion.
You establish a standard and then hardware companies build products to comply. THX created such a standard and companies followed. A standard like this would by definition include the technology companies anyway, so it is part and parcel of the same thing. But the start has to be with content producers, labels, etc. If they don't come on board, there is no point in creating a standard.
 