
Can measurements see what we hear?

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,319
Location
Albany Western Australia
I was asked elsewhere whether you can see individual orchestral instruments in a measurement. I appreciate that many of you may have seen this sort of thing before, but there are also many out there who haven't, including the person who asked the question. He thought it was not possible, so I posted the Musiscope recording shown below to demonstrate that we very much can measure individual instruments, their harmonics, and their interactions.

 

Erik

Active Member
Joined
Jul 1, 2018
Messages
137
Likes
271
But I can't see individual instruments; I see them all mixed together. I mean, if individual instruments had individual colors on a spectrum analyzer then it would work, but as it is I can see only individual frequencies, not individual instruments. If we can see an individual instrument, then it should be possible to easily extract it from the recording. Is that possible?
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,600
Location
Seattle Area
Harman demonstrated prototype technology that could separate streams in music. It was uncanny how it could do that with vocals for example. You could then place the instruments at will in different speakers in a multi-channel system. It was called QuantumLogic.


Sadly, it was never shipped outside of the audio system in the Lexus LFA.
 

Sancus

Major Contributor
Forum Donor
Joined
Nov 30, 2018
Messages
2,926
Likes
7,636
Location
Canada
OP
March Audio
But I can't see individual instruments; I see them all mixed together. I mean, if individual instruments had individual colors on a spectrum analyzer then it would work, but as it is I can see only individual frequencies, not individual instruments. If we can see an individual instrument, then it should be possible to easily extract it from the recording. Is that possible?
Well, that's down to interpretation; I can plainly see the signals the different instruments generate when they play.

If you know the precise harmonic structure of the instrument at all volume levels, then yes, you could extract it (I'm not saying that is easy; in fact it would be mind-bogglingly complex). Also, if a signal is sufficiently obscured by another (in amplitude and frequency proximity), you equally won't hear it. This masking effect is how and why MP3 works: it discards content the ear can't resolve anyway.

The point is that there is a view that we can hear things we can't measure. This isn't true.
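As a toy illustration of the point (a synthetic sketch, not the actual Musiscope capture posted above): two hypothetical "instruments" with harmonic series at 220 Hz and 330 Hz are mixed, and each instrument's harmonics still show up as distinct, measurable peaks in the spectrum of the mix.

```python
import numpy as np

fs = 48_000                      # sample rate, Hz
t = np.arange(fs) / fs           # one second of signal

def instrument(f0, n_harmonics=4):
    """Crude 'instrument': a fundamental plus decaying harmonics."""
    return sum(0.5 ** k * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k in range(n_harmonics))

mix = instrument(220.0) + instrument(330.0)

# Windowed magnitude spectrum of the combined signal.
mag = np.abs(np.fft.rfft(mix * np.hanning(len(mix))))
mag /= mag.max()
freqs = np.fft.rfftfreq(len(mix), 1 / fs)

# Every harmonic of both series appears as a distinct peak;
# note 660 Hz is shared (3rd harmonic of 220, 2nd of 330).
peaks = [int(round(freqs[i])) for i in range(1, len(mag) - 1)
         if mag[i] > 0.05 and mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1]]
print(peaks)
```

The peak list contains the harmonics of both instruments, which is exactly what a spectrogram view shows over time.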
 
OP
March Audio
For sure we can measure what we hear. But how we perceive what we hear is another thing. :)
There are many that think that is not the case.

How do we perceive? That implies we all perceive things fundamentally differently, yet under controlled conditions we very quickly reach a situation where people can't hear the differences they claim to hear in uncontrolled conditions. Also, scientific work such as Toole et al. demonstrates that people clearly and predictably prefer a similar sound!
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,067
Location
Zg, Cro
Also, scientific work such as Toole et al. demonstrates that people clearly and predictably prefer a similar sound!

I'm really not an expert, but IMHO the situation with sound is similar to some other things, like fashion, cars, houses, etc.: most people could reach an agreement that something is nice, but personal taste always comes into play as well.
 
OP
March Audio
I'm really not an expert, but IMHO the situation with sound is similar to some other things, like fashion, cars, houses, etc.: most people could reach an agreement that something is nice, but personal taste always comes into play as well.
...which invariably is influenced by so many factors other than the actual sound ;) However, yes, you are correct: some people like a certain sound, maybe bass-heavy as an example. Put the question differently, however: which do you think is the most balanced and accurate sound? ;)
 

Krunok
...which invariably is influenced by so many factors other than the actual sound ;)

So true. :)

Put the question differently, however: which do you think is the most balanced and accurate sound? ;)

When I was doing room EQ, this curve worked best for me, and I still like how it sounds (both speakers' responses shown with 1/12 smoothing):



So basically a curve sloping down 10 dB across the frequency range my speakers can deliver (33 Hz - 18 kHz).
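For illustration only (this is a reconstruction of the curve described above, not the actual EQ data): a target that falls linearly in dB versus log-frequency, dropping 10 dB between 33 Hz and 18 kHz, can be written as a one-line formula.

```python
import math

# Band limits and total slope taken from the description above.
F_LO, F_HI, DROP_DB = 33.0, 18_000.0, 10.0

def target_db(freq_hz):
    """Relative target level in dB: 0 dB at 33 Hz, -10 dB at 18 kHz,
    linear in log-frequency between the two."""
    frac = math.log(freq_hz / F_LO) / math.log(F_HI / F_LO)
    return -DROP_DB * frac

for f in (33, 100, 1_000, 10_000, 18_000):
    print(f"{f:>6} Hz: {target_db(f):6.2f} dB")
```

Linear-in-log-frequency is the usual way such "house curves" are specified, since the ear judges tilt on a logarithmic frequency axis.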

I'm not sure though if I got your question well.
 
OP
March Audio
From memory, that is the in-room measured curve that Toole found was generally preferred in the research :)

This should be flat in an anechoic measurement.

Btw, it's also my preference :)

My question was alluding to the fact that whilst some may like, for example, a very bass-heavy sound, they may also be aware it's not in any way balanced or accurate.
 

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,448
Likes
4,812
Yes and no.

Yes, we can always measure. Is that what we hear? Not necessarily.

Neurological: we don't know how we listen. While we are at a point where we can recognize with decent accuracy what people are listening to through fMRI (https://www.the-scientist.com/the-s...als-which-songs-people-are-listening-to-30321), we still don't understand how our brain processes sound. My inner violin does not match someone else's violin. Some progress is being made, some of it very, very cool (https://www.ncbi.nlm.nih.gov/pubmed/30696881), but that remains very far from a deep understanding. The fact that we all hear differently is well established (for example https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4757893/).

A lot of what we hear is actually not heard but reconstructed by our brain from a sparse signal: this is why we can recognize a violin from an amazingly poor recording and reproduction device, or why we are so good at isolating human voices in a sea of noise. Anecdotally, I spent some time last year listening to vocal performances with a professional opera singer and teacher: while the signal that reached our ears was definitely the same, our perceptions of it could not have been more different. Where I perceived some guys singing together, she perceived and classified five singers, each with their own characteristics.

One important thing to note is that this extremely high level of subjectivity, which is often exploited by "golden ears" reviewers or marketing, does not prove that cables can make a difference :) It simply means that, confronted with the same signal, different brains hear different things.

As far as recovering individual instruments from a symphonic piece (or even much simpler ones), there are many obstacles in terms of

information theory: even if we could perfectly define identifying characteristics for each instrument, we quickly reach a point where the information simply isn't there in sufficient quantity in a recording, either in real time or even in the totality of the recording. When a conductor isolates an issue with a musician/instrument in his orchestra, he dynamically re-allocates his brain's bandwidth, first to the problematic area, then to the musician. We can't do that on recorded music as it stands. That could, by the way, be an argument for extremely high data-rate recordings in the future: listening to a symphonic orchestra and being able to zoom in on specific parts would be very cool, in a computational-photography kind of way.

computational complexity: even if we could perfectly define identifying characteristics, computing the interactions of instruments and the resulting sound waves may be intractable (asymptotic complexity, unsolved P=NP kind of stuff, etc.). Note that it is computing the combined result of 100 instruments playing together that is in that class of difficulty; doing the reverse operation can be proven impossible in some cases (see the information-theory caveat above), and even where it is doable (say, a couple of instruments in a simple setup) it is still much more complex than the direct problem, so we fall back into computational complexity. Also worth noting: when separation can be achieved (at least perceptually), there is a whole lot of reconstruction going on in the background. If a violin and a piano play together and are perfectly identified, there will still be overlapping information that can't be assigned to one or the other; a model is used to fill the holes, which is, at least from a result point of view, similar to how our brain works.
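A toy example of that assignment problem (hypothetical frequencies, not taken from any real recording): when two harmonic series overlap, the energy in the shared bins cannot be split from the mixture alone; only a model of each instrument can fill that hole.

```python
# Harmonic series (in Hz) of two hypothetical instruments.
h_a = {220 * k for k in range(1, 5)}   # {220, 440, 660, 880}
h_b = {330 * k for k in range(1, 5)}   # {330, 660, 990, 1320}

only_a = h_a - h_b     # energy attributable to instrument A alone
only_b = h_b - h_a     # energy attributable to instrument B alone
shared = h_a & h_b     # summed energy; no unique decomposition exists here

print(sorted(only_a))  # [220, 440, 880]
print(sorted(only_b))  # [330, 990, 1320]
print(sorted(shared))  # [660]
```

Real instruments overlap in far more than one bin, which is why the "fill the holes with a model" step dominates practical source separation.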

One of the fun aspects of the field is that researchers are currently trying tons of trendy "AI" algorithms to advance our understanding of how the brain works. Where one could naively expect something like "our brain is doing that, let's try it in software", the reality is "hey, my clever algorithm delivers decent results, so that could be how our brain works"...

One easily available document is "Automatic musical instrument recognition from polyphonic music audio signals"
(http://mtg.upf.edu/system/files/publications/ffuhrmann_PhDthesis.pdf) - a good resource with plenty of references.

Ultimately, while I am 100% in the camp of people who say that a cable (or anything else) with no measurable impact on the signal can't change the signal that reaches our ears, and is therefore snake oil, they are overextending themselves by generalizing radically.

A system can be 100% deterministic but remain essentially non predictable.
 

amirm

Sancus
Ah, I am behind the times, I guess. The original technology was going into a Lexicon processor that was killed in development. Good to see it come out in another manifestation.

I'm really curious how this compares for stereo upmixing to multi-channel versus the other technologies people seem to like for it, like Auro3D. In terms of cost, according to the internet it's ~$7,000, which is not that much compared to some of the high-end processors out there, like the JBL SDP-75 or other Trinnov stuff.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,696
Likes
37,432
Hey, that QLI 32 is only $200 per channel. Cheap considering what it does. Let me see: now add $200 more per channel for speakers in all 32 channels (you'd have to use small speakers; you wouldn't have room for 32 channels otherwise). So would one be better off with 32 JBL LSR 306 speakers and the QLI, or with the M2 for stereo? Cost is about the same.
 

amirm
I'm really curious how this compares for stereo upmixing to multi-channel versus the other technologies people seem to like for it, like Auro3D.
It is a much more advanced technology. It analyzes the source content and attempts to losslessly extract the various objects in it. Upmixing technologies, by contrast, are much simpler, relying on phase differentials, what is common between channels, etc.
 

amirm
Hey, that QLI 32 is only $200 per channel. Cheap considering what it does. Let me see: now add $200 more per channel for speakers in all 32 channels (you'd have to use small speakers; you wouldn't have room for 32 channels otherwise). So would one be better off with 32 JBL LSR 306 speakers and the QLI, or with the M2 for stereo? Cost is about the same.
The M2 will play at hugely louder dynamics.
 

Sancus
Hey, that QLI 32 is only $200 per channel. Cheap considering what it does. Let me see: now add $200 more per channel for speakers in all 32 channels (you'd have to use small speakers; you wouldn't have room for 32 channels otherwise). So would one be better off with 32 JBL LSR 306 speakers and the QLI, or with the M2 for stereo? Cost is about the same.

Looks like you would also need one of these breakout boxes and then 4 DB25-to-XLR (or whatever) cables. It seems that 32 speakers are not required, but 7 front-stage speakers (including 3 heights?) are, plus a certain number of surrounds, heights, and rears, which sums to 17 if I'm reading the manual right.

So presumably you would want your front-stage speakers to be a bit more powerful than that, at least. Since this whole thing is designed for cinemas, it's not really ideal for home use.

It is a much more advanced technology. It analyzes the source content and attempts to losslessly extract the various objects in it. Upmixing technologies, by contrast, are much simpler, relying on phase differentials, what is common between channels, etc.

It's really unfortunate that such an interesting and powerful algorithm is gated behind hardware with a singular special purpose that makes it less than ideal for in-home use, regardless of the price.
 