Monoprice Monolith THX 887 Balanced Headphone Amp: New Champ?

jruser

Active Member
Joined
Aug 19, 2019
Messages
114
Likes
62
Would the volume control differences between this and the 789 be due to sample variation? Or do we think this has a worse pot?
 
OP
amirm
Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,384
Location
Seattle Area
Would the volume control differences between this and the 789 be due to sample variation? Or do we think this has a worse pot?
My guess is the former.
 
Joined
Aug 25, 2019
Messages
94
Likes
202
You've got that wrong.
You can't hear the difference in everything that can be measured.
But you can measure the difference in everything you can hear.

Yeah, I recognized people were going to interpret what I said the way you did and respond that way, due to an ambiguity in what I wrote (my bad :). It is true that, within a particular measurement, you can demarcate what can actually be heard by the human ear and what can only be detected by a super-sensitive machine. For example, looking at the master SINAD chart for DACs: the human ear might be able to hear the difference in SINAD (whatever that actually means for irl listening) between the Matrix Audio X-Sabre Pro and the NAD T758 AVR, but it almost certainly cannot discern the SINAD difference between the Matrix Audio X-Sabre Pro and the Okto Research DAC8. The machine, however, clearly can tell the Okto and the Matrix apart. That illustrates the point you made, and I don't disagree with it.
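For anyone wondering what the SINAD number on those charts actually summarizes, here is a minimal Python sketch of the idea: feed the device a sine, then compare total power against everything that isn't the fundamental (noise plus harmonics). The real review measurements come from a dedicated analyzer and are far more careful about windowing and leakage; the tone below is synthetic and the bin bookkeeping is simplified.

import numpy as np

def sinad_db(x, fs, f0=1000.0):
    # Rough SINAD estimate for a captured sine at f0 Hz:
    # 10*log10(total power / (noise + distortion power)),
    # where a few bins around the fundamental count as "signal".
    n = len(x)
    spectrum = np.fft.rfft(x * np.hanning(n))
    power = np.abs(spectrum) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    fund_bin = int(np.argmin(np.abs(freqs - f0)))
    guard = 3                                   # bins on each side of the fundamental
    signal_mask = np.zeros(power.shape, dtype=bool)
    signal_mask[max(fund_bin - guard, 0):fund_bin + guard + 1] = True

    total = power[1:].sum()                     # skip the DC bin
    noise_dist = power[1:][~signal_mask[1:]].sum()
    return 10 * np.log10(total / noise_dist)

# Synthetic example: 1 kHz sine with a tiny 2nd harmonic and some noise.
fs = 48000
t = np.arange(fs) / fs
x = (np.sin(2 * np.pi * 1000 * t)
     + 1e-4 * np.sin(2 * np.pi * 2000 * t)
     + 1e-5 * np.random.randn(fs))
print(f"approx SINAD: {sinad_db(x, fs):.1f} dB")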

In a different vein, my point was that most of the differences in sound that actually matter to people in irl listening do not have a specific measurement attached to them, and most likely never will be measurable. For instance, the dynamics, sound stage, and imaging of a particular headphone cannot have a measurement ascribed to them. To ever obtain what might look like a "measurement" of those things, you are getting into hard-AI territory, and at that point any "measurements" that might be possible will by nature be colored by subjectivity. The biggest factor in a headphone-based system that makes an actual audible difference during listening is the headphones themselves. You can't take a measurement, or any group of measurements, that will tell you whether the Focal Utopia is "better" than the Meze Empyrean, and make a chart out of it. Thus most of the differences between equipment that are actually discernible are immeasurable.

That is not to say measurements are useless. Measurements are what keep people from spending $20K on a DAC and thinking it sounds better than something like the RME ADI-2. The endeavor to measure the measurable stats of the most popular consumer amps and DACs is venerable and protective of the consumer. However, thinking that every fact about an audio system can be reduced to measurable facts is just flat-out wrong, and I don't think many people believe in a pure reductionist model when talking about headphone systems (or any audio system).
 

scott wurcer

Major Contributor
Audio Luminary
Technical Expert
Joined
Apr 24, 2019
Messages
1,501
Likes
2,821
Joined
Aug 25, 2019
Messages
94
Likes
202
It's an interesting article. Setting aside the primary purpose of the investigation, we can apply some of the concepts that relate. It elucidates some of the limitations of current algorithms for modeling human hearing; I think those algorithms will get better and better as software and AI tech advance. It also highlights that the physical structure of the human ear itself is tough to model.
Here is some food for thought: even if we were to build a form of AI that replicates human hearing perfectly, and the machine has replicated human intelligence, and it is hard AI to the point where the machine actually experiences what it is like to hear a sound or an entire song, would we be able to use software to determine whether the AI "enjoys" one song over another, or one pair of headphones over another? Would we be able to measure that enjoyment? Maybe. We would certainly have access to substantially more physical facts about what makes one mind's processing of a sound more enjoyable than another's. I don't think we would have a complete description, though. We would still be left with non-physical phenomena that are irreducible to physical facts: what the subjective experience is actually like when you listen to one pair of headphones vs. another. You might be able to replicate the heuristics the mind uses to draw subjective conclusions, but that still isn't enough to actually have access to another person's first-hand subjective experience. To bring things back to present reality, at this point we still only have elementary algorithms for modeling the brain's auditory processing, and most of that tech hasn't even been applied to mainstream audio analysis machines, which are limited to a fairly small number of measurements relating to distortion and other things that people probably don't actually take into account when they judge one piece of equipment vs. another.
 
Joined
Aug 25, 2019
Messages
94
Likes
202
An example: software has gotten really good at the machine learning used for taking pictures on your smartphone. The phone will take 100 pictures and use its AI algorithm to pick which image the user will actually like the most. Even so, there is still going to be disagreement between users about which camera, images, or AI algorithm yields the pictures they actually like best, and that is only counting candid photos, never mind preferences for which abstract, arty photo is better than another. We probably already have the tech to apply similar models to sound analysis, if there were really a want for it and enough money to throw at it. But it seems to me there will always be limitations to this sort of thing, although perhaps I am not optimistic enough about tech advancement.
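Just to make that "take 100 shots, keep the best one" step concrete, here is a toy sketch. Everything in it is made up: the features, the "trained" weights, and the photos are stand-ins, not any real camera pipeline.

import numpy as np

# Toy "take N shots, keep the one the model likes best" sketch.
rng = np.random.default_rng(0)

def extract_features(shot):
    # Stand-in features: sharpness proxy, brightness, contrast.
    return np.array([np.abs(np.diff(shot)).mean(), shot.mean(), shot.std()])

learned_weights = np.array([5.0, 0.5, 1.0])     # pretend these were trained on user preferences

shots = [rng.random(10_000) for _ in range(100)]        # 100 fake "photos"
scores = [extract_features(s) @ learned_weights for s in shots]
best = int(np.argmax(scores))
print(f"keeping shot {best} with score {scores[best]:.3f}")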
 

scott wurcer

Major Contributor
Audio Luminary
Technical Expert
Joined
Apr 24, 2019
Messages
1,501
Likes
2,821
Here is some food for thought: even if we were to build a form of AI that replicates human hearing perfectly, and the machine has replicated human intelligence, and it is hard AI to the point where the machine actually experiences what it is like to hear a sound or an entire song, would we be able to use software to determine whether the AI "enjoys" one song over another, or one pair of headphones over another? Would we be able to measure that enjoyment? Maybe.

With all due respect I place AI at the top of my list of fake science. I was there near the beginning and was always skeptical of Minsky and his posse.
 
Joined
Aug 25, 2019
Messages
94
Likes
202
With all due respect I place AI at the top of my list of fake science. I was there near the beginning and was always skeptical of Minsky and his posse.

Sure, I don't have a problem with that belief. But that was meant to be a thought experiment: suppose we actually could know every physical fact relating to sound perception, would we have all of the facts about sound perception, or would there be information missing? It was also meant to point out that we are nowhere close to being able to measure everything there is to know about sound perception.

Personally, I have no idea whether hard AI is actually "scientifically possible," if that's what you are referring to, or whether the AI we have is actually replicating human intelligence or doing something else.
 

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,629
Likes
10,203
Location
North-East
With all due respect I place AI at the top of my list of fake science. I was there near the beginning and was always skeptical of Minsky and his posse.

Maybe because AI is not really science but a set of algorithms?

That article, though, is a bit different. All it's saying is that linear transforms like the Fourier transform can't beat the 'uncertainty' trade-off between time and frequency resolution. In effect, it says that human hearing can't be using a purely linear process for sound detection and identification, assuming the experimental results are correct. Not a big surprise, as the non-linear nature of human hearing has been studied before and since.
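To put a number on that time/frequency 'uncertainty', here is a small numpy sketch of the Gabor limit: for any signal, the product of its RMS spread in time and in frequency is at least 1/(4*pi), and a Gaussian pulse essentially hits that bound. This is just the textbook relation, not anything from the paper's experiments.

import numpy as np

# Gabor/Fourier uncertainty sketch: dt * df >= 1/(4*pi) for any signal;
# a Gaussian pulse sits essentially at that bound.
fs = 48000
t = np.arange(-0.5, 0.5, 1.0 / fs)
sigma = 0.005                                  # 5 ms Gaussian envelope
x = np.exp(-t**2 / (2 * sigma**2))

def rms_spread(axis, weights):
    p = weights / weights.sum()
    mean = (axis * p).sum()
    return np.sqrt(((axis - mean) ** 2 * p).sum())

dt = rms_spread(t, np.abs(x) ** 2)             # time spread from |x(t)|^2
X = np.fft.fft(x)
f = np.fft.fftfreq(len(x), d=1.0 / fs)
df = rms_spread(f, np.abs(X) ** 2)             # frequency spread from |X(f)|^2

print(f"dt*df = {dt * df:.5f}  vs  1/(4*pi) = {1 / (4 * np.pi):.5f}")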
 
Joined
Aug 25, 2019
Messages
94
Likes
202
Maybe because AI is not really science but a set of algorithms?

That article, though, is a bit different. All it's saying is that linear transforms like the Fourier transform can't beat the 'uncertainty' trade-off between time and frequency resolution. In effect, it says that human hearing can't be using a purely linear process for sound detection and identification, assuming the experimental results are correct. Not a big surprise, as the non-linear nature of human hearing has been studied before and since.

Yeah, the article is only loosely, or probably even tangentially, related. But it is still a good example of mathematical models not accurately replicating human hearing. AI is tech, not science, but to be fair I don't think he was trying to relate his beliefs about AI to the article.
 

scott wurcer

Major Contributor
Audio Luminary
Technical Expert
Joined
Apr 24, 2019
Messages
1,501
Likes
2,821
I'm a Searle guy myself.

Do you mean Campbell Searle? He taught me the principle of selective neglect, "you chose to neglect the optional midterm, too bad for you." I remember covering his office door with fliers for our screening of "I am Curious Blue" in the middle of the night.
 
Joined
Aug 25, 2019
Messages
94
Likes
202
"For instance, the dynamics, sound stage, and imaging of a particular headphone"

*measurements bro walks in* "let me stop you right there hoss"
 

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,629
Likes
10,203
Location
North-East
Yeah, the article is only loosely, or probably even tangentially, related. But it is still a good example of mathematical models not accurately replicating human hearing. AI is tech, not science, but to be fair I don't think he was trying to relate his beliefs about AI to the article.

The paper (rather than the article) is actually fairly detailed on the findings. All it's really saying is that the accuracy of timing detection between two signals, as perceived by a human being, is better than might be possible using a Fourier transform or another linear analysis. It doesn't say it's not measurable, and in fact, says that non-linear methods must be used to better simulate human hearing in mathematical models.
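Not the method from the paper, but as a generic illustration of what "non-linear" tends to mean in auditory modeling, here is a minimal sketch: a linear bandpass filterbank followed by rectification and compression. The band layout and parameters below are made up.

import numpy as np
from scipy.signal import butter, sosfilt

# A very simplified non-linear auditory front end:
# bandpass filterbank -> half-wave rectification -> power-law compression.
fs = 48000
t = np.arange(0, 0.1, 1.0 / fs)
x = np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)

center_freqs = [500, 1000, 2000, 4000]          # a handful of bands for the sketch

def band_envelope(x, fc, fs, q=4.0):
    lo, hi = fc / (1 + 1 / (2 * q)), fc * (1 + 1 / (2 * q))
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    y = sosfilt(sos, x)                         # linear stage
    y = np.maximum(y, 0.0)                      # half-wave rectification (non-linear)
    return y ** 0.3                             # compressive non-linearity (non-linear)

envelopes = np.stack([band_envelope(x, fc, fs) for fc in center_freqs])
print(envelopes.shape)                          # (bands, samples)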
 
Joined
Aug 25, 2019
Messages
94
Likes
202
The paper (rather than the article) is actually fairly detailed on the findings. All it's really saying is that the accuracy of timing detection between two signals, as perceived by a human being, is better than might be possible using a Fourier transform or another linear analysis. It doesn't say it's not measurable, and in fact, says that non-linear methods must be used to better simulate human hearing in mathematical models.

Oh okay. That's good to know. Can you show us the non-linear method?
 
Joined
Aug 25, 2019
Messages
94
Likes
202
Boring. I want a model that can tell me whether I should buy some Focal Utopias or Meze Empyreans. Anything else is TMI :)
 