I'm of the view it's a combination of measurements, one's physiology, gear and environment. And that measurements vs. perceivability is an entire subject that deserves to be treated separately until it is fully understood
A lot of this has already been researched. Perception is a bitch. It changes depending on many factors.
1. I'm fairly certain I can hear a difference between lossy and lossless, ripped from the same source of course (but only some songs)
- I'd think that since most if not all lossy codecs employ some psychoacoustic modelling and considerations, it could depend on the recording, i.e. on how much of a track's content the model deems safe to keep or discard
It depends on the codec, bitrate and quality settings used, and is very dependent on listener training and on the recordings themselves.
One can't generalize lossy codecs and circumstances.
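Whether someone can really pick the lossy version is usually settled with a blind ABX test, and the statistics behind one are plain binomial arithmetic. As an aside (not from the thread), a minimal sketch in Python:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the chance of scoring at least
    `correct` out of `trials` in an ABX test by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. 12 correct answers out of 16 trials:
print(round(abx_p_value(12, 16), 3))  # → 0.038
```

So roughly 12 out of 16 is the usual bar for "probably not guessing" at the 5% level.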
2. I'm thoroughly annoyed when it's summer here (35 °C+) and I just feel all music sounds like trash in that kind of environment
- Yes, I know about temperature vs. air density etc. I'm not talking about the measurement; I just don't enjoy any music in that kind of heat, no matter how good the system or recording is, even if the entire system is re-tuned to replicate what it would do at 21 °C... just nah.
The mood changes so perception changes.
3. Ear structure?
- Pushing the top of the Helix forwards from behind yields a stronger midrange
- Pushing the Pinna forwards from behind yields a stronger midbass
- Pushing the Earlobe forwards from behind yields a snappier, fun sound
- I'd think that since we all have different ears, like our fingerprints, the end perception of sound could be skewed greatly
Yes, pinnae and ear canals differ. The rest of the auditory system is also likely to differ in certain aspects.
Here's the thing though.
Two people, one with big, outward-protruding ears and another with very small pinnae lying almost flat against the head, are both listening to a girl with a guitar in the middle of the street.
We measure the response at the ear drum. Not surprisingly the response will be different.
The question is what both listeners actually hear.
The answer is simple: they hear a girl with a guitar, and see and hear the 'truth'.
This is what the brain (the hearing process) 'calibrates' to: everyday sounds are regarded as the truth.
Put some filtering in place, like a dense cloth (or push your pinna and change the filtering that way), and you will immediately hear differences.
A slow change in hearing ability (aging) won't register as such. At 60 we still hear the world the same as when we were young. Yet we don't.
Hearing is complex.
- FaceID my ears and inner canals with your infrared voodoo if you need to, Apple; at least let me know the difference between what Amir and I are hearing based on our physiology! So that when Amir says bright, I might know this as snappy, and I'd know that on the graphs he's referring to that 3 dB bump from 6-9 kHz, which to me becomes 5-7.75 kHz, goddammit
If it really worked, this would not matter, as long as both individuals used their own 'correction'.
The bright vs. snappy bit is a vocabulary thing and a matter of mutual agreement on how to describe sounds.
4. Past conditioning?
- I've been brought up in a pretty silent environment, all audio sources were rather warm sounding (aha, low quality), and all my family members have low, smexy voices, lol. So I can't stand anything in the vicinity of 'shrill, sweet, tingy' etc. Basically: no triangles, no bagpipes, and some chihuahuas
Preferences. Fortunately we have tone control for that.
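For what a "tone control" amounts to in code, here's a hypothetical minimal one (the function name, cutoff and gain defaults are all my own illustration, not anything from the thread): scaling a one-pole low-pass and mixing it back into the dry signal behaves like a first-order low-shelf boost.

```python
import numpy as np

def bass_boost(x, fs, cutoff=200.0, gain_db=4.0):
    """Crude tone control: one-pole low-pass, scaled and mixed back in,
    which acts as a first-order low-shelf boost below `cutoff`."""
    a = np.exp(-2 * np.pi * cutoff / fs)  # one-pole feedback coefficient
    lp = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1 - a) * s + a * acc       # low-pass the input
        lp[i] = acc
    g = 10 ** (gain_db / 20) - 1          # extra gain to mix back in
    return x + g * lp
```

With the defaults, a 30 Hz tone comes out roughly 4 dB hotter while a 5 kHz tone is almost untouched.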
5. Stuff sounds different to me right after waking up, after exercise in the evening, or after some alcohol and the other good stuff.
Everyone is experiencing this. Some are more aware of it than others. Some think they can 'hear past all this'.
It is one of the hurdles in perception, especially long-term perception and 'remembering' sound.
6. I think measurements would be EVERYTHING if we could make full sense of what each measurement is trying to tell us.
Yes, the snag is in fully understanding all the relevant measurements. Only very few people, often those that actually do the measuring (including live measuring), fully understand them. It is the main reason why many folks say measurements mean nothing: they do not understand the implications, or do not combine all the relevant measurement results. Then there is the snag of visualizing measurement results and understanding test conditions.
All those quantization tests and reviews comparing lossy formats, what do they mean? Do they mean that MP3 discards 5% of the information between -30 and -22 dBFS at 256 Hz, for a set time per each 441 samples? And then does it 10 dBFS lower at a bigger delta at the next harmonic? I mean, that's probably not true. Yes, it's not bit-perfect... but how? Exactly where and exactly what changes?
Nulling will show the differences, but the audibility of those measured differences is a totally different matter.
What proper codecs should do is remove whatever is supposed to be masked by other signals, so that signal does not have to be encoded as well. The lower the available data rate, the more needs to be removed. At some point, depending on the recorded material, this becomes audible.
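The masking-then-nulling idea above can be illustrated with a deliberately crude sketch: keep only spectral bins within some distance of a nearby strong bin, resynthesize, and null against the original. The 30 dB threshold and the 200 Hz neighbourhood are arbitrary stand-ins for a real psychoacoustic model, and the tone levels are made up; this is not how MP3/AAC actually decide.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs   # one second of audio
# A loud 1 kHz tone plus a quiet 1.1 kHz tone 40 dB down (made-up levels)
x = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.sin(2 * np.pi * 1100 * t)

X = np.fft.rfft(x)
mag_db = 20 * np.log10(np.abs(X) + 1e-12)

# Crude "masking" rule: drop any bin more than 30 dB below the strongest
# bin within +/-200 Hz of it
bw = int(200 * len(x) / fs)          # 200 Hz expressed in bins
keep = np.zeros_like(X)
for i in range(len(X)):
    lo, hi = max(0, i - bw), min(len(X), i + bw + 1)
    if mag_db[i] >= mag_db[lo:hi].max() - 30:
        keep[i] = X[i]
y = np.fft.irfft(keep, n=len(x))

# A null test exposes exactly what was thrown away: here, the quiet tone
null = x - y
print(f"null RMS: {np.sqrt(np.mean(null ** 2)):.4f}")  # → 0.0071
```

The null is exactly the -40 dB tone that the crude model decided was masked; whether discarding it is audible is the part the threshold rule has to get right.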
- After all, if you close your eyes and have someone snap a finger dead-center in front of your nose, you could pretty surely identify the center. Move the finger 2 cm to the right and snap again, and you could possibly tell the difference. A difference of some 0.09 ms in arrival time at the other ear (plus the whole bunch of room reflections) has let you spot it
- 44100 Hz = 44.1 samples per ms, right? I want to know what's going on in every 90,000 ns at least. Which is how many samples? lol
- I'd like to think we all have golden ears, all good up to a point, thereafter we just can't tie it to the data we are seeing and this is the missing link
- Or perhaps the data we collect is not nearly enough, or not the data we need, or we haven't discovered it yet?
- And instructions are unclear. When they say we can't detect a 0.3 dB difference in volume, does it mean 0.3 dB at a single frequency? Or does it mean: go ahead, chop 0.3 dB off everything between 250 Hz and 8 kHz on an EDM track, and no, you won't hear a change at all?
- If we can have a high-speed camera filming the speed of light at 1 trillion FPS for research purposes (à la YouTube), or a vinyl record breaking apart at 10,000 fps, I'd think we'd kinda need analyzers with a sample rate of 50,000 kHz to fully understand what we are studying, no? You read that right: show me what a fifty-million-samples-per-second RTA is seeing for a 2-second excerpt of a track, stretched out to fill 10 minutes of analysis
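As a side note, some back-of-envelope arithmetic for the bullets above. The 343 m/s figure and the worst-case geometry (the full 2 cm becoming extra path to the far ear) are my own assumptions, so the result is the same order as, not identical to, the 0.09 ms quoted:

```python
c = 343.0    # speed of sound in air at ~20 °C, m/s
fs = 44100   # CD sample rate, Hz

# Naive geometry: moving the finger 2 cm adds 2 cm of path to one ear
delta_t = 0.02 / c
print(f"arrival-time difference: {delta_t * 1e6:.0f} us")  # → 58 us

# Samples per millisecond at 44.1 kHz: 44.1, not 441
print(f"samples per ms: {fs / 1000}")             # → 44.1
print(f"samples in 90,000 ns: {fs * 90e-6:.2f}")  # → 3.97

# And a 0.3 dB level change as a raw amplitude ratio (about 3.5%)
print(f"0.3 dB as a ratio: {10 ** (0.3 / 20):.4f}")  # → 1.0351
```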
Record the finger snap binaurally at 44.1/16 and listen to it with good headphones.
In both cases (reality and recording) close your eyes.
You will find that 44.1/16 is perfectly capable of capturing the timing differences.
As Blumlein already mentioned... it seems like you are not getting the sampling theorem. Few people do.
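This is easy to demonstrate numerically: sample the same band-limited click on two channels 0.09 ms apart (a non-integer number of sample periods), quantize to 16 bits, and the delay is still recoverable to a fraction of a microsecond from the cross-correlation. A sketch assuming nothing beyond NumPy; the click shape and buffer sizes are arbitrary:

```python
import numpy as np

fs = 44100
true_delay = 90e-6   # 0.09 ms: about 3.97 sample periods, deliberately non-integer

def click(t):
    """Band-limited test click: a 10 ms Hann-windowed 3 kHz burst."""
    env = np.where((t >= 0) & (t < 0.01),
                   0.5 - 0.5 * np.cos(2 * np.pi * t / 0.01), 0.0)
    return env * np.sin(2 * np.pi * 3000 * t)

t = np.arange(4096) / fs
left = click(t - 0.02)                 # click 20 ms into the buffer
right = click(t - 0.02 - true_delay)   # same click, 90 us later

# Quantize both channels to 16 bits, as a CD recording would
left_q = np.round(left * 32767) / 32767
right_q = np.round(right * 32767) / 32767

# Circular cross-correlation via FFT, evaluated on a 64x finer grid so the
# peak can be located well below one 44.1 kHz sample period
U = 64
xc = np.fft.irfft(np.fft.rfft(left_q) * np.conj(np.fft.rfft(right_q)),
                  n=len(left_q) * U)
lag = np.argmax(xc) / U                # lag in original sample periods
if lag > len(left_q) / 2:
    lag -= len(left_q)                 # unwrap the circular lag
print(f"recovered delay: {-lag / fs * 1e6:.2f} us")
```

The recovered value lands within a fraction of a microsecond of 90 µs: the sampling theorem preserves sub-sample timing in band-limited signals, so no 50 MHz analyzer is needed for this.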
7. Measurements are NOTHING as well, because of environment, factors that are not taken into account (or thought to make no perceptible difference), physiology, etc. I'm just a casual audiophile. I use measurements to tell if something is 'good enough' or 'within the ballpark of what I'm looking for'. Then I use my ears to tell if 'this goes well with that'
Conversions from acoustic to electric and from electric to acoustic are problematic for tons of reasons. Even so, the measurements still have meaning. They don't mean nothing; they just aren't as conclusive as electrical measurements.