
Topping D30Pro Review (Balanced DAC)

ceausuc

Active Member
Joined
Jul 20, 2018
Messages
156
Likes
113
Do any of these high performing DACS color the sound or have an impact on sound that is not captured by objective measurements? Would things like soundstaging, instrument separation, imaging, dynamics be noticeably different?

Yes, the measurements here give no indication of "soundstaging, instrument separation, imaging, dynamics".
You can expect differences between some DACs with similar measurements.
Only you can decide how big/important these are (for you) - if your system is capable of revealing them.
 

Kane1972

Active Member
Joined
Dec 11, 2018
Messages
298
Likes
103
Yes, the measurements here give no indication of "soundstaging, instrument separation, imaging, dynamics".
You can expect differences between some DACs with similar measurements.
Only you can decide how big/important these are (for you) - if your system is capable of revealing them.

But what is it that is creating these differences? What measurements are missing that would correspond to separation etc.? I was once told that measurements would need to be taken using actual music rather than test signals to give the full picture, but that this is simply not possible yet?
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,496
But what is it that is creating these differences? What measurements are missing that would correspond to separation etc.? I was once told that measurements would need to be taken using actual music rather than test signals to give the full picture, but that this is simply not possible yet?


Psychology, time of day, hunger levels, mood, and who knows what other multitude of things affect the human brain at any given moment.

There's a reason you don't see a "soundstage" measurement in the specification sheet of a DAC chip manufacturer. It's ridiculous even conceptually. You may, at best, get something of the sort from headphone marketing departments, or perhaps crosstalk figures showing levels of stereo separation within a DAC. But that's it.
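For what it's worth, a crosstalk figure really is just a ratio you can compute. A minimal sketch in Python with a made-up leakage level (not a measurement of any actual DAC):

```python
import numpy as np

fs = 48000                                   # sample rate, arbitrary for this sketch
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 1000 * t)          # 1 kHz tone driven into the left channel only
right = 1e-4 * left                          # pretend 0.01 % of it leaks into the right output

def rms(x):
    return np.sqrt(np.mean(x ** 2))

# Channel separation = level of the driven channel relative to what bleeds into the other one
separation_db = 20 * np.log10(rms(left) / rms(right))
print(f"channel separation: {separation_db:.1f} dB")   # ~80 dB with this made-up leakage
```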

Like imagine what "maximum soundstage" would even mean? It literally makes no sense. If I say maximum distortion, that's simply 100% THD. But what would a soundstage metric even look like conceptually? What would it do to something like a mono recording?
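Compare that with distortion, which does reduce to a formula. A toy calculation with hypothetical harmonic levels:

```python
import numpy as np

fund = 1.0                                    # RMS of the fundamental (hypothetical)
harmonics = np.array([0.01, 0.005, 0.002])    # RMS of the 2nd/3rd/4th harmonics (hypothetical)

thd = np.sqrt(np.sum(harmonics ** 2)) / fund
print(f"THD = {100 * thd:.3f} %")
# One unambiguous number; 100 % THD would mean the harmonics carry as much energy
# as the fundamental. There is no analogous formula anyone can write down for "soundstage".
```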

Soundstage itself can best be described in the realm of signal processing, and pinna activation to some degree. So things like echo, reverb, channel panning, and naturally the type of recording (binaural in a church, versus a mono recording in a sound booth). But aside from these few things, any other sort of quantification doesn't make logical sense.
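Those are all things you can actually write down. A rough sketch of the idea, panning a mono source and bolting on a crude echo-based "space" (all parameters invented purely for illustration):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)      # a synthetic decaying 440 Hz "note"

def pan(x, pos):
    """Constant-power pan: pos = -1 (hard left) ... +1 (hard right)."""
    angle = (pos + 1) * np.pi / 4
    return np.stack([x * np.cos(angle), x * np.sin(angle)])

def cheap_space(stereo, delays_ms=(23, 47, 79), gain=0.3):
    """Crude 'reverb': a few decaying echoes, alternating between channels."""
    out = stereo.copy()
    for i, d in enumerate(delays_ms):
        n = int(fs * d / 1000)
        out[i % 2, n:] += stereo[i % 2, :-n] * gain / (i + 1)
    return out

placed = cheap_space(pan(mono, -0.6))   # mono source, placed left of centre, with "space" added
```

The point is that the "stage" comes from the recording and the processing, not from the conversion step.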

If it weren't for these aforementioned things, then DAC chips would be able to create "separation" in mono recordings. And if that sounds ridiculous to you - it's for good reason. How would a mono output gain more "soundstage/separation/imaging/holographic-ness/dynamics" when the goal of a DAC chip, by definition, is just conversion? It's not conversion + DSP, it's simply digital to analogue conversion.

Likewise with amps. Their job is simply to amplify, by definition. If an amp does anything else, it's in violation of definitional intent and loses fidelity. So while some may enjoy more distortion or whatever from a certain amp - that's not what an amp's function definitionally even is. People have this idea that when amps first started, the makers "knew what they were doing" by having so much distortion coming from tube amps. None of them would have opted for distortion if they could have avoided it - it was a byproduct of the limits of their knowledge of design and material sciences. That is why you're seeing newer devices strive to minimize these effects. DSP can do the fidelity destruction if you need it to. Why anyone would want to design a device like a DAC with as much distortion as possible is beyond me.
 

Kane1972

Active Member
Joined
Dec 11, 2018
Messages
298
Likes
103
There are definite soundstage differences between headphones and speakers. For instance, things sound wider on my AKG headphones than my Audio-Technicas. Both are open back. But with DACs I'm skeptical, for sure. You can hear more separation between instruments, and reverbs being longer and more defined etc., but I put this down to less distortion and a lower noise floor.
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,496
There are definite soundstage differences between headphones and speakers. For instance, things sound wider on my AKG headphones than my Audio-Technicas. Both are open back. But with DACs I'm skeptical, for sure. You can hear more separation between instruments, and reverbs being longer and more defined etc., but I put this down to less distortion and a lower noise floor.

But that's not soundstage, that's simply channel separation. So unless you want to say channel separation is soundstage, that's not what I assumed we were talking about.

And when you say "less distortion and noise floor" increases soundstage - well, that's fine, but that then makes soundstage entailed by other concepts which in tandem amount to a newly combined term like "soundstage". But if that's the case, then zero distortion and noise = maximum soundstage? So if I am playing a recording done in mono, or playing a recording done in a church binaurally (or let's say 7.1 surround sound), both of them would have identical and maximum soundstage as long as they both had something like zero noise and distortion, hypothetically?

See why it's problematic speaking about soundstage in this way?
 

Rottmannash

Major Contributor
Forum Donor
Joined
Nov 11, 2020
Messages
2,981
Likes
2,624
Location
Nashville
The SU-9 also has MQA, which I think will slowly find its way through the system.

I used to think I could avoid MQA by switching to Qobuz. It turns out Qobuz has MQA albums too. This means the infiltration can happen stealthily without official platform support.
I've never seen MQA on Qobuz.
 

Kane1972

Active Member
Joined
Dec 11, 2018
Messages
298
Likes
103
But that's not soundstage, that's simply channel separation. So unless you want to say channel separation is soundstage, that's not what I assumed we were talking about.

And when you say "less distortion and noise floor" increases soundstage - well, that's fine, but that then makes soundstage entailed by other concepts which in tandem amount to a newly combined term like "soundstage". But if that's the case, then zero distortion and noise = maximum soundstage? So if I am playing a recording done in mono, or playing a recording done in a church binaurally (or let's say 7.1 surround sound), both of them would have identical and maximum soundstage as long as they both had something like zero noise and distortion, hypothetically?

See why it's problematic speaking about soundstage in this way?

Actually no. I was referring to soundstage simply as the width of a stereo recording (or binaural or whatever). When I make my music, I place things across the stereo field; they may be stereo sources or mono sources, of course. My AKG headphones make that panning seem more pronounced and wider than my ATs. I prefer mixing on the ATs as it forces me to be a little more bold with the panning, which translates better on speakers. I have no idea if the DAC plays any part in that; I think not.

The lower noise floor and lower distortion of, say, my Atom headphone amp allowed me to hear more separation between instruments, and reverb tails can be heard for longer. This is not what I call soundstage, but the other person was talking about a DAC giving more separation. I think anything with less distortion will aid in this, DAC included. I realise it seemed like I was referring to soundstage with that remark, but I wasn't.
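Back-of-envelope, that reverb-tail point is just arithmetic. All of the numbers below are invented; only the shape of the argument matters:

```python
# A reverb tail falling at ~60 dB per second (roughly a 1 s RT60), starting at -20 dB,
# stays audible only until it sinks below the playback chain's noise floor.
decay_db_per_s = 60.0
tail_start_db = -20.0

for noise_floor_db in (-90.0, -120.0):
    t_audible = (tail_start_db - noise_floor_db) / decay_db_per_s
    print(f"noise floor {noise_floor_db:6.1f} dB -> tail above it for ~{t_audible:.2f} s")
# ~1.17 s vs ~1.67 s with these made-up figures
```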

I class soundstage simply as an "imaginary stage" where the band would be (of course you can have movement around that soundstage of sounds that an actual band could not do). Thus sound-field could be another term. So a mono track would have a static soundstage/sound-field.
 

curiouspeter

Addicted to Fun and Learning
Joined
Dec 30, 2020
Messages
623
Likes
396
Location
San Francisco Bay Area
I've never seen MQA on Qobuz.
Look for this album:

The Nordic Sound: 2L Audiophile Reference Recordings

Roon (or your DAC) will tell you what MQA stuff it is doing.

A lot of releases from 2L are also MQA on Qobuz, both streamed and downloaded. 2L even talked about this on their Facebook page.

 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,496
Actually no. I was referring to soundstage simply as the width of a stereo recording (or binaural or whatever). When I make my music, I place things across the stereo field; they may be stereo sources or mono sources, of course. My AKG headphones make that panning seem more pronounced and wider than my ATs. I prefer mixing on the ATs as it forces me to be a little more bold with the panning, which translates better on speakers. I have no idea if the DAC plays any part in that; I think not.

The lower noise floor and lower distortion of, say, my Atom headphone amp allowed me to hear more separation between instruments, and reverb tails can be heard for longer. This is not what I call soundstage, but the other person was talking about a DAC giving more separation. I think anything with less distortion will aid in this, DAC included. I realise it seemed like I was referring to soundstage with that remark, but I wasn't.

I class soundstage simply as an "imaginary stage" where the band would be (of course you can have movement around that soundstage of sounds that an actual band could not do). Thus sound-field could be another term. So a mono track would have a static soundstage/sound-field.

Fair enough. But just to be sure of something else on my mind, none of this has anything to do with frequency response, correct? Like if I chopped out everything above 8 kHz with a low-pass filter and everything under 200 Hz with a high-pass filter, under your term of soundstage, there would be no effect, correct?

Another thing, if I took a mono recording, and added reverb, that also would have no effect on soundstage as you define it?

I just want to find out what singular aspect we're talking about (that we can change around) that will affect soundstage more than any other aspect. When you say headphone X has more soundstage than headphone Y, nothing is constant other than the fact that they're headphones and, at best, at the same volume.

If it's simply stereo width, this can be DSP'd in one direction, while something like crossfeed would bring the stereo recording closer to a mono recording, for example.
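Crossfeed, after all, is nothing more than bleeding a quieter, slightly delayed copy of each channel into the other. A bare-bones sketch (real crossfeed filters also low-pass the bled signal, which this skips; the level and delay values are arbitrary):

```python
import numpy as np

def simple_crossfeed(stereo, fs=48000, level_db=-8.0, delay_us=300):
    """Mix an attenuated, slightly delayed copy of each channel into the other."""
    g = 10 ** (level_db / 20)
    n = max(1, int(fs * delay_us / 1e6))
    out = stereo.astype(float)               # work on a copy of the (2, N) array
    out[0, n:] += g * stereo[1, :-n]         # right bleeds into left
    out[1, n:] += g * stereo[0, :-n]         # left bleeds into right
    return out

# Full-level bleed with no delay would collapse the image toward mono;
# no bleed at all leaves the original, fully separated headphone presentation.
```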

I know you say mono is a "static soundstage" (you also say sound-field, and that's a whole new term I have no idea how to reconcile just yet). But I don't know what a "dynamic soundstage" could even be. I thought soundstage was a metric that tracks along a scale of sorts (increasing or decreasing stereo width, for example). If that is the case, I'm not sure what "static" means in relation to the "amount" of soundstage present. Like, what would audio without ANY soundstage even be like?

How about one last question, since I know it can be annoying being probed for clarity like this. If I have your AT headphones and I am presented with two recordings, the first recorded in a church and the second the same music but in a sound booth, then by your definition of soundstage, are they both "the same soundstage"? Since the stereo width of your headphones isn't really changing, we're not doing any DSP or anything to influence this stereo width property.
 

Derekinla

Member
Joined
Aug 11, 2020
Messages
11
Likes
4
Appreciate the discussion. I asked the previous question as I find it interesting to hear YouTube reviews from audiophiles who describe audible differences / coloration of sound between different high-performing DACs, but I'm getting the impression that this may not be grounded in any measurable / objective findings? If you were to do a blinded comparison of DACs, would the average audiophile be able to discern a difference between a $100 vs $300 vs $600 vs $1000+ DAC?
 

Kane1972

Active Member
Joined
Dec 11, 2018
Messages
298
Likes
103
If you used an HPF and LPF, the track itself would probably sound narrower (less wide), because engineers use EQ to spread things around the stereo image, which is what I refer to as soundstage, although EQ can also make sounds seem higher and lower too, not just wider. So say you have two instruments panned in close proximity to one another; you may add some 8 kHz to one of those to separate it a little bit more. Cut this with an LPF and the separation will be lessened, as will the ear's ability to discern the position in that stereo field.
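Roughly, the trick is something like this (a toy sketch; the centre frequency, bandwidth and gain are just example values, and a real mix would use a proper peaking EQ):

```python
from scipy.signal import butter, sosfilt

def presence_boost(x, fs=48000, centre=8000, width=2000, gain_db=3.0):
    """Crudely lift the region around `centre` Hz by adding back a band-passed copy.
    `x` is a 1-D NumPy array of samples."""
    g = 10 ** (gain_db / 20) - 1.0
    sos = butter(2, [centre - width / 2, centre + width / 2],
                 btype="bandpass", fs=fs, output="sos")
    return x + g * sosfilt(sos, x)

# Boost ~8 kHz on one of two similarly-panned instruments and it pulls apart from the other;
# low-pass the playback below 8 kHz and that separation cue is gone again.
```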

Adding stereo reverb to a mono sound and creating a stereo track will affect the stereo image and thus the soundstage (as I consider them the same, or at least closely linked). If the track remains mono, then no, the soundstage will be static as there is no stereo field.

Not sure what to say about the question on the headphones. All I can say is that the AKGs sound wider, like speakers in a room 10 feet apart, while the ATs sound like the speakers are 6 feet apart. I know that's not the best description, but hopefully it makes sense.

I consider a mono signal to be static as it cannot move anywhere; it can only get louder or quieter. Using reverb can make something feel further away, but it would still feel like the sound is in front of you, not to the sides etc. Imagine a band placed in front of you playing acoustically: drummer in the centre, guitar on the right with a sax a few feet further right, and piano on the left with a percussionist a few feet further left. A good soundstage maintains that image. A mono signal could not convey that at all; they would all be in the centre.

Not sure what you mean by the last question. Do you mean a recording is made in two different locations, a church and a sound booth? Or the same recording is listened to in both locations on the same headphones?

If you record a band either in a booth or a church, the soundstage of each recording will be different.
 

Veri

Master Contributor
Joined
Feb 6, 2018
Messages
9,597
Likes
12,039
Appreciate the discussion. I asked the previous question as I find it interesting to hear YouTube reviews from audiophiles who describe audible differences / coloration of sound between different high-performing DACs, but I'm getting the impression that this may not be grounded in any measurable / objective findings?
Correct :)

If you were to do a blinded comparison of DACs, would the average audiophile be able to discern a difference between a $100 vs $300 vs $600 vs $1000+ DAC?
It would be an interesting experiment. My money would be on a "No".
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,496
If you used an HPF and LPF, the track itself would probably sound narrower (less wide), because engineers use EQ to spread things around the stereo image, which is what I refer to as soundstage, although EQ can also make sounds seem higher and lower too, not just wider. So say you have two instruments panned in close proximity to one another; you may add some 8 kHz to one of those to separate it a little bit more. Cut this with an LPF and the separation will be lessened, as will the ear's ability to discern the position in that stereo field.

Adding stereo reverb to a mono sound and creating a stereo track will affect the stereo image and thus the soundstage (as I consider them the same, or at least closely linked). If the track remains mono, then no, the soundstage will be static as there is no stereo field.

Not sure what to say about the question on the headphones. All I can say is that the AKGs sound wider, like speakers in a room 10 feet apart, while the ATs sound like the speakers are 6 feet apart. I know that's not the best description, but hopefully it makes sense.

I consider a mono signal to be static as it cannot move anywhere; it can only get louder or quieter. Using reverb can make something feel further away, but it would still feel like the sound is in front of you, not to the sides etc. Imagine a band placed in front of you playing acoustically: drummer in the centre, guitar on the right with a sax a few feet further right, and piano on the left with a percussionist a few feet further left. A good soundstage maintains that image. A mono signal could not convey that at all; they would all be in the centre.

Not sure what you mean by the last question. Do you mean a recording is made in two different locations, a church and a sound booth? Or the same recording is listened to in both locations on the same headphones?

If you record a band either in a booth or a church, the soundstage of each recording will be different.

Now I'm more confused. So frequency response also has something to do with soundstage in your view? But I thought you said it was stereo separation? And I agreed with that when you said headphones (because you're listening to music mastered for speakers, it would sound wider if you didn't apply crossfeed). Okay, how about this: same track, HPF at 500 Hz and LPF at 7 kHz VERSUS HPF at 1000 Hz and LPF at 19 kHz. Which one under your definition has more "soundstage"?

The next thing I don't understand is this "higher and lower" thing. Stereo effects only occur on a horizontal plane when recording (unless you're recording with one mic higher than the other, which no one does). But there is no way to tell if music is coming from a higher or lower source simply with a stereo recording (without visual aids). This actually also happens with real-life sounds (ask someone to play a sound behind your back with your eyes closed; you're not going to be able to tell if the sound is coming from above your head or below it). It's one of the downsides of having ears horizontally matched. Owls can actually hear sounds up and down since their ears are vertically offset. We mostly use visual and other cues (like time delay) to approximate where a sound is. So when you say messing with EQ makes the sound go up and down from a positional standpoint, that's not really possible in playback without other auditory cues.

As for mono + reverb not contributing to soundstage: I assumed soundstage was more than simply the distance of a perceived sound left and right of your ear. I would have thought distance, even in a cone directly away from your face, also contributes to this. You bring up an example about live players arranged in front of you and mono not being able to maintain that imaging. I agree, but now you've implicated imaging in this whole ordeal, which I assumed (and, as before, pleaded) you would keep separate from soundstage, as with the other established concepts. This only makes understanding it more complex.

The last question that you didn't understand: I meant a recording made in two different locations, with both recordings listened to on the same headphones in the same home, for instance.

You replied that with a band playing in a church versus a booth, the soundstage would be different. But that doesn't fit the original claim you made (you said soundstage is stereo separation, and that something like headphones inherently provides more of it than speakers, which I agreed with). But then how is it possible that two otherwise identical recordings, with members in the same positions, suddenly change soundstage, if no other factors changed?

With the understanding we had about mono recordings, how are my conclusions about soundstage not exactly the same as yours? I said originally that soundstage can best be summed up by the combination of recording type (binaural, mono, etc.) and quality, obviously, along with pinna activation levels, but most importantly post-processing effects like echo, reverb, and channel panning.

You're slowly describing everything I initially said? Aside from the unintuitive claim that echo and reverb don't add soundstage because they simply move the recording further away? Or the weird "lower or higher" positioning from simply changing frequency response (though I argue up-and-down positioning can't be discerned unless you have one ear higher than the other on a horizontal plane of listening).

Okay, so do we agree at least on this: that DACs and amps definitionally have no appreciable relation to soundstage? And that headphones can have an effect as they present music in a different manner than it was originally mastered (more stereo-separated due to the isolation of the left and right channels, which speakers can't do)? But more importantly, that most soundstage differences people describe come down to recording type and recording setting, along with post-processing techniques?
 

Kane1972

Active Member
Joined
Dec 11, 2018
Messages
298
Likes
103
Now I'm more confused. So frequency response also has something to do with soundstage in your view? But I thought you said it was stereo separation? And I agreed with that when you said headphones (because you're listening to music mastered for speakers, it would sound wider if you didn't apply crossfeed). Okay, how about this: same track, HPF at 500 Hz and LPF at 7 kHz VERSUS HPF at 1000 Hz and LPF at 19 kHz. Which one under your definition has more "soundstage"?

The next thing I don't understand is this "higher and lower" thing. Stereo effects only occur on a horizontal plane when recording (unless you're recording with one mic higher than the other, which no one does). But there is no way to tell if music is coming from a higher or lower source simply with a stereo recording (without visual aids). This actually also happens with real-life sounds (ask someone to play a sound behind your back with your eyes closed; you're not going to be able to tell if the sound is coming from above your head or below it). It's one of the downsides of having ears horizontally matched. Owls can actually hear sounds up and down since their ears are vertically offset. We mostly use visual and other cues (like time delay) to approximate where a sound is. So when you say messing with EQ makes the sound go up and down from a positional standpoint, that's not really possible in playback without other auditory cues.

As for mono + reverb not contributing to soundstage: I assumed soundstage was more than simply the distance of a perceived sound left and right of your ear. I would have thought distance, even in a cone directly away from your face, also contributes to this. You bring up an example about live players arranged in front of you and mono not being able to maintain that imaging. I agree, but now you've implicated imaging in this whole ordeal, which I assumed (and, as before, pleaded) you would keep separate from soundstage, as with the other established concepts. This only makes understanding it more complex.

The last question that you didn't understand: I meant a recording made in two different locations, with both recordings listened to on the same headphones in the same home, for instance.

You replied that with a band playing in a church versus a booth, the soundstage would be different. But that doesn't fit the original claim you made (you said soundstage is stereo separation, and that something like headphones inherently provides more of it than speakers, which I agreed with). But then how is it possible that two otherwise identical recordings, with members in the same positions, suddenly change soundstage, if no other factors changed?

With the understanding we had about mono recordings, how are my conclusions about soundstage not exactly the same as yours? I said originally that soundstage can best be summed up by the combination of recording type (binaural, mono, etc.) and quality, obviously, along with pinna activation levels, but most importantly post-processing effects like echo, reverb, and channel panning.

You're slowly describing everything I initially said? Aside from the unintuitive claim that echo and reverb don't add soundstage because they simply move the recording further away? Or the weird "lower or higher" positioning from simply changing frequency response (though I argue up-and-down positioning can't be discerned unless you have one ear higher than the other on a horizontal plane of listening).

Okay, so do we agree at least on this: that DACs and amps definitionally have no appreciable relation to soundstage? And that headphones can have an effect as they present music in a different manner than it was originally mastered (more stereo-separated due to the isolation of the left and right channels, which speakers can't do)? But more importantly, that most soundstage differences people describe come down to recording type and recording setting, along with post-processing techniques?


haha. I think we are confusing each other! lol

Look, if you are using EQ to help position sounds in the stereo field of a mix, then applying a filter at playback COULD remove some of those EQ cues you are using to place those sounds, so their position in the stereo field will not be as obvious. This does not alter the equipment's capability to depict an accurate soundstage; it's just that applying any EQ at playback could undermine the EQ decisions made when mixing to present a certain soundstage.

Different frequencies can appear to be higher or lower. This is how Dolby Atmos works, exploiting that. When you get into mixing tutorials, engineers talk about how to process sounds to give them height etc.

If you record the same band in two venues, the recording will change massively: in a church there will be reflections (reverb) clouding the positions of the players, whereas in a sound booth the recording will be dry, so the positions of the players will be much more obvious.

My initial assertion was that different headphones present a wider stereo image than others. I think less distortion in any component will improve the ear's ability to pick out the instruments more clearly, and thus hear the positions of the instruments more acutely, so that could be described as a more accurate soundstage. But DACs probably reached a level better than our ears need with regards to distortion a long time ago, so they probably don't contribute to a better or worse soundstage.
 

dogtagkz

Member
Joined
Jun 19, 2020
Messages
26
Likes
16
Newbie here - would a balanced DAC require a balanced amp?
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,496
haha. I think we are confusing each other! lol
Definitely did confuse me, but let's see what we've got here.

Look, if you are using EQ to help position sounds in the stereo field of a mix, then applying a filter at playback COULD remove some of those EQ cues you are using to place those sounds, so their position in the stereo field will not be as obvious. This does not alter the equipment's capability to depict an accurate soundstage; it's just that applying any EQ at playback could undermine the EQ decisions made when mixing to present a certain soundstage.

Yeah, I'm not sure how this is informative.

Different frequencies can appear to be higher or lower. This is how Dolby Atmos works, exploiting that. When you get into mixing tutorials, engineers talk about how to process sounds to give them height etc.

Dolby puts out nonsense all the time. I don't see how it's even possible to have "height" in stereo; that simply doesn't make any sense even in principle (I told you, run the test yourself in reality: have someone play something behind your back with your eyes closed and see if you can tell whether the sound is higher or lower than your head, keeping the sound source centered).

If you record the same band in two venues, the recording will change massively: in a church there will be reflections (reverb) clouding the positions of the players, whereas in a sound booth the recording will be dry, so the positions of the players will be much more obvious.

But that doesn't answer the question? I specifically controlled for aspects like imaging and how it's distinct from soundstage, according to your first formulation of what you took soundstage to be (which I hope you address).

My initial assertion was that different headphones present a wider stereo image than others. I think less distortion in any component will improve the ear's ability to pick out the instruments more clearly, and thus hear the positions of the instruments more acutely, so that could be described as a more accurate soundstage. But DACs probably reached a level better than our ears need with regards to distortion a long time ago, so they probably don't contribute to a better or worse soundstage.

I don't take "stereo width" to be a primary component of soundstage, otherwise mono recordings coming from speakers for example would be as flat as you imagine they would be on headphones, but yet, that's simply not the case in reality, otherwise all mono recordings would be virtually the same soundstage.

The reason stereo width isn't a primary component is that it only deals with lateral distance (how far apart things are between the left and right ear presentation) and not Z-axis depth (distance to and from the listener), and also because stereo width is already an established term, so it doesn't make sense for it to be synonymous with soundstage. And also because stereo width falls under post-processing effects that can be increased or decreased in software.
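And because it's a post-processing effect, stereo width literally is a knob. A minimal mid/side sketch (nothing vendor-specific, just the standard sum/difference trick):

```python
import numpy as np

def set_stereo_width(stereo, width):
    """width = 0 collapses to mono, 1 leaves the signal unchanged, >1 exaggerates the sides.
    `stereo` is a (2, N) NumPy array of left/right samples."""
    left, right = stereo[0], stereo[1]
    mid = (left + right) / 2
    side = (left - right) / 2
    return np.stack([mid + width * side, mid - width * side])

# set_stereo_width(x, 0.0)  -> both channels become the mono sum (no width at all)
# set_stereo_width(x, 1.5)  -> a "wider" presentation than the original master
```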

I say soundstage is mainly a psychological phenomenon. And any basis it has in reality is due to recording type/setting, post-processing/DSP, and pinna activation levels.

Okay, set aside the height aspect, and the blurring of lines you're presenting between "soundstage" and "imaging" and "stereo width" while somehow not commenting on "distance" (like the church vs booth case, where the reflections are widely different).

I just need to know: what portions of my summation of soundstage do you disagree with? Or more importantly, what is the argument against my summation? The reason I say headphones' stereo presentation isn't soundstage is simply that I can throw a mono recording at you, and you would then have to say all mono recordings have identical soundstage. And when you say "EQ", that's simply an agreement with what I said about post-processing, which already envelops EQ.
 