this has never troubled @amirm ...I always wanted an oscilloscope to play with, absolutely no idea how to use one, but it's such a KOOL toy!
Thanks Tom for carrying the torch. I'll add only one point about the "loudness war" issue. That isn't about normalizing music so the loudest peak hits 0 dBFS (or close to it). Rather, it's about using compression and other processing to make the average level louder. Soft parts are raised in volume until everything sits at the same level, which makes listening tedious and tiring.
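To make that concrete, here's a small numpy sketch of the idea (the test signal and the crude clipper are invented for illustration, not taken from any real master): squashing the dynamics leaves the peak alone but raises the average level, which shows up as a smaller peak-to-RMS "crest factor".

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs

# A "dynamic" test signal: a 1 kHz sine that is quiet for the first
# half and loud for the second (values invented for illustration).
x = np.sin(2 * np.pi * 1000 * t)
x[: fs // 2] *= 0.1

def crest_db(sig):
    """Peak-to-RMS ratio in dB: higher means more dynamic range."""
    peak = np.max(np.abs(sig))
    rms = np.sqrt(np.mean(sig ** 2))
    return 20 * np.log10(peak / rms)

# Crude "loudness-war" treatment: boost, then hard-clip back to the
# original peak level. The peak is unchanged, but the average is louder.
squashed = np.clip(4.0 * x, -1.0, 1.0)

print(f"crest factor before: {crest_db(x):.1f} dB")
print(f"crest factor after:  {crest_db(squashed):.1f} dB")
```

The peaks of both signals sit at the same level, so peak normalization sees no difference; the crest factor drops by several dB, which is the "everything the same loudness" effect.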
Could you post any examples of glitches and distortion? I just saw a big table of numerical results.

A useful source of information I wasn't aware of until a couple of days ago was a Gearslutz thread about evaluating DA/AD loops using DiffMaker, https://www.gearslutz.com/board/gea...ing-ad-da-loops-means-audio-diffmaker-50.html. The latter is not really useful because of its limitations, but all the uploaded conversion captures are. I've looked at a couple, and clear glitching and distortion are present in some; conversely, the "good ones" are very impressive in their level of accuracy, purely from visual inspection.
So, "pro" gear is certainly not transparent by default - there are levels of competence, and the better units should be closer to audibly invisible. I aim to try to make sense of what's there - see if there's a pattern, and what may be learned in terms of sound signatures correlating with actual flaws in the working of the circuits.
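For readers who haven't used DiffMaker: the underlying idea is a null test. Here's a minimal numpy sketch of that idea (not DiffMaker's actual algorithm - the simulated gain error and distortion values are made up for illustration): gain-match the capture to the reference, subtract, and express the residual in dB.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
reference = np.sin(2 * np.pi * 440 * t)

# Simulate a DA/AD loop capture: a slight gain error plus a touch of
# 3rd-harmonic distortion (numbers invented for illustration).
capture = 0.995 * reference + 1e-4 * np.sin(2 * np.pi * 3 * 440 * t)

# Null test: least-squares gain match, subtract, report the residual
# power relative to the reference.
gain = np.dot(capture, reference) / np.dot(reference, reference)
residual = capture - gain * reference
depth_db = 10 * np.log10(np.mean(residual ** 2) / np.mean(reference ** 2))
print(f"null depth: {depth_db:.1f} dB")   # -> null depth: -80.0 dB
```

The gain match removes the level error entirely, so only the -80 dB harmonic survives in the residual; a real tool also has to align timing and correct drift, which is where most of the difficulty lies.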
Could you post any examples of glitches and distortion? I just saw a big table of numerical results.

Will do. I'm also "adjusting" to the new version of Audacity, which has a couple of peculiarities that interfere with the way I do things - so give me a day or so.
How does this test cope with small, benign(?) phase shifts and attenuation at the bottom end that may be caused by AC coupling? Or with things going on at the outer reaches of the top-end filtering (minimum phase versus linear phase)?
Whether or not harmonic distortion is audible depends on several things. First, the perception of distortion is highly dependent on the frequency of the fundamental: humans are virtually deaf to harmonic distortion in the bass frequencies, for example, but highly sensitive to it as frequency goes up. Second, higher harmonics are more audible than lower ones, due to masking by the fundamental: the lower harmonics have to be much stronger than the higher ones before they are perceived. Third, another aspect of masking is temporal: there is a pre- and post-masking period during which harmonics are not audible, and this varies with frequency and intensity, as I recall.
So you cannot state with certainty whether or not a particular distortion is audible without considering these other factors. For example, an amplifier that produces substantial 2nd, 3rd and 4th harmonics in descending intensity, and no higher harmonics, will sound as if it has no distortion if those harmonics are below a certain threshold (the exact dB value escapes me now, but I think the 2nd generally only needs to be down 60-70 dB to be inaudible - though I could be way off about that).
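The "it depends on the order" argument can be sketched as a toy audibility check. To be clear, the threshold numbers below are placeholders I made up to illustrate the shape of the idea - they are not psychoacoustic data; the only property they encode is that low-order harmonics, being better masked, get a more relaxed threshold than high-order ones.

```python
def toy_threshold_db(order):
    """Level below the fundamental (dB) at which harmonic `order` is
    assumed to become inaudible. Lower order -> more masking -> a more
    relaxed threshold. Purely illustrative numbers, not research data."""
    return 60 + 10 * (order - 2)   # 2nd: 60 dB down, 3rd: 70 dB, ...

def audible_harmonics(levels_db_below):
    """levels_db_below: {harmonic order: dB below the fundamental}.
    Returns the orders that clear their (toy) masking threshold."""
    return [n for n, db in sorted(levels_db_below.items())
            if db < toy_threshold_db(n)]

# An amp with 2nd/3rd/4th harmonics in descending level, all masked:
print(audible_harmonics({2: 65, 3: 75, 4: 85}))   # -> []
# The same -85 dB level on a 7th harmonic is (in this toy model) audible:
print(audible_harmonics({7: 85}))                  # -> [7]
```

The point of the sketch is only that a single THD number can't answer the audibility question: the same dB figure passes or fails depending on which harmonic carries it.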
As a musician, listening for difference tones is a way to stay in tune, so I may be more sensitive to them.
You thereby highlight a major problem with 'scientific' listening tests - especially the "preference"-based ones.
And was the dizi used during the development of lossy codecs? And a dizi ensemble? And a dizi-guitar-drums combo? And a dizi-ocarina duet? etc. With Western listeners? And Asian? etc. etc. If listening tests really are key to the development of lossy codecs, then this is the minefield the developers are entering. If, on the other hand, these codecs are based on something more fundamental than listen-tweak-listen-tweak, etc., then listening tests are just confirmation that they more-or-less work.
But the listening test itself is a mechanical/programmed metric; the selections of extracts of music to be played, etc. are pre-programmed.

Mechanical/programmed metrics really don't mean much, INCLUDING those that claim to be good at measuring audio quality.
If an instrument plays a tone, it will typically include a series of overtones (distortion products of a sine); some may even be louder than the fundamental (> 100% "distortion").
The job of the playback system becomes one of accurately (or pleasingly, for the subjectivists) reproducing those distortions without adding any of its own.
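The "> 100%" remark is easy to verify with arithmetic. The spectrum below is invented for illustration (real instrument spectra vary wildly), with the 2nd partial louder than the fundamental; treating everything above the fundamental as "distortion products" and applying the THD formula gives a figure far above 100%.

```python
import math

# A toy "instrument" spectrum: partial number -> amplitude, with the
# 2nd partial louder than the fundamental (numbers are invented).
partials = {1: 0.3, 2: 0.8, 3: 0.4, 4: 0.2}

# THD-style figure: RMS of the "distortion products" (partials 2+)
# relative to the fundamental.
fund_power = partials[1] ** 2
overtone_power = sum(a ** 2 for n, a in partials.items() if n > 1)
thd_percent = 100 * math.sqrt(overtone_power / fund_power)
print(f"'THD' of the raw tone: {thd_percent:.0f}%")
```

Which is exactly why nobody measures an instrument's "THD": the overtones are the timbre, and the playback system's job is to pass them through unchanged.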
The aim of this scheme is not much different from self-driving car development. You can test until you're blue in the face but you'll never cover more than the tiniest fraction of possible 'patterns' in the information space. If you test in the USA, can you be sure that it will work sensibly in India, say?
It's 100 years later, and we are still finding that a listening-test-free approach is the best way to make an audio system - e.g. the D&D 8C speakers are not 'voiced'; they meet a simple, objective specification.
JJ, I'm a bit at a loss there, could you explain the 2/8s statement?

At best with stereo, you can reproduce 2/8ths of the actual soundfield at your two ears, even if you forget that heads move around.
Sorry but I'm still lost, what are the variables you speak of?

The soundfield at any point in a room has 4 variables. If you have to capture 2 points in a room, that means 8 variables.
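For anyone else following along, here is my reading of the arithmetic behind "2/8ths" - note that identifying the 4 per-point variables as pressure plus a 3-D particle-velocity vector is my assumption, not something stated above:

```python
# Counting the variables in the "2/8ths" statement. The pressure +
# velocity interpretation of "4 variables per point" is an assumption.
per_point = 1 + 3            # pressure + particle velocity (vx, vy, vz)
points = 2                   # one measurement point per ear
total = per_point * points   # 8 soundfield variables in all
channels = 2                 # what two-channel stereo delivers
print(f"{channels}/{total}")   # -> 2/8
```

Two stereo channels can at best pin down 2 of those 8 quantities, hence "2/8ths of the actual soundfield" - before even worrying about head movement changing the two points.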