You said it opened up dimensionally. Width and depth specifically.
Because I focus on what the ears pick up in comparison to the other songs. Nothing else.
You said it opened up dimensionally. Width and depth specifically.
I did notice that. Unfortunately the postings so far are mostly individual anecdotes.
I posted some links to research regarding depth perception - but there have been no comments on those.
I missed those. Please remind me, I will also try and find them. I am very curious.
I posted some stuff on HRTF/HRIR - see these posts:
https://www.audiosciencereview.com/forum/index.php?threads/how-do-we-perceive-“soundstage”-and-“imaging”.10623/post-294650
https://www.audiosciencereview.com/forum/index.php?threads/how-do-we-perceive-“soundstage”-and-“imaging”.10623/post-294742
There's a great set of vids from Prof. Land at Cornell talking about localization... it's a college-level course for FPGA development, but it has some great info on how humans are thought to perceive aural location cues...
As to modern pop-type music, it's so artificial to begin with - you're simultaneously inside the piano, in the middle of the drum kit, in the middle of the guitar amp, horn, etc... while also being right in front of the mouth of the singer. The art form of pop recording that pioneers like Sir George Martin, Quincy Jones, Roger Nichols and others created is, by design, an artificial aural landscape.
I think the only way that we will ever get to a method of TRULY recreating any acoustic environment is through a holographic method of controlling the listening environment's air molecules. I doubt that any conventional multidriver transducer-type system will ever work.
I have not yet read the replies to your initial post, but I question whether any headphones or in-ear phones can actually create or, rather, re-create a sense of space.
Proximity/early reflections... (David Griesinger)
Interesting concept, tho I still find it lacking in representation of a concert hall. ...
The concept of Ambiophonics + domestic concert hall is about creating concert hall sound with existing recordings. It can be 2.0, 5.1, 7.1 or Atmos.
https://www.ambiophonics.org/the-home-concert-hall
It would be interesting to test the impulse responses on that page; however, I suggest not using convolution of sampled IRs but rather "algorithmic" reverb. Apart from anything else, the sample is "static"--even e.g. small air currents in the "real world" mean that real reverb is constantly slightly changing.
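The static-vs-time-varying distinction above can be sketched in a few lines. This is an illustrative toy (the delay and modulation values are assumptions, and a single comb filter is only one element of a real algorithmic reverb): linear convolution with a sampled IR is time-invariant, while a comb filter with a slowly modulated delay line responds differently depending on *when* a sound arrives.

```python
import numpy as np

def convolve_ir(dry, ir):
    # Static convolution "reverb": a sampled IR applied by linear
    # convolution. Linear and time-invariant, so the response never
    # varies from moment to moment.
    return np.convolve(dry, ir)

def modulated_comb(dry, delay=441, depth=3.0, rate_hz=0.5, fs=44100, g=0.6):
    # One feedback comb filter whose delay length is slowly modulated,
    # a minimal stand-in for the "constantly slightly changing" reverb
    # of a real room. Sketch only, not a full algorithmic reverb.
    size = delay + int(depth) + 2          # circular buffer length
    buf = np.zeros(size)
    out = np.zeros(len(dry))
    for n in range(len(dry)):
        d = delay + depth * np.sin(2 * np.pi * rate_hz * n / fs)
        i0 = int(d)                        # integer part of the delay
        frac = d - i0                      # fractional part
        # linearly interpolated read of the delayed output
        y = (1 - frac) * buf[(n - i0) % size] + frac * buf[(n - i0 - 1) % size]
        out[n] = dry[n] + g * y
        buf[n % size] = out[n]
    return out
```

An impulse fed through `convolve_ir` produces the same shaped response no matter when it occurs; through `modulated_comb`, an impulse arriving half a second later meets slightly different delay lengths and comes out with slightly different echoes, which is the point of the "algorithmic" suggestion.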
The concept of Ambiophonics + domestic concert hall is about creating concert hall sound with existing recordings. It can be 2.0, 5.1, 7.1 or Atmos.
https://www.ambiophonics.org/the-home-concert-hall
But it would only be in the sweet spot of the speakers.
In a real hall, as you walk around, the sound from the stage changes. Recall that the unit of sound absorption is the sabin, originally calibrated with seat cushions:
http://waywiser.fas.harvard.edu/objects/11606/sabines
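For reference, Sabine's formula ties hall volume and total absorption (in sabins) to reverberation time. A minimal sketch, with made-up example numbers:

```python
def sabine_rt60(volume_m3, surfaces):
    # Sabine's reverberation-time formula: RT60 = 0.161 * V / A,
    # where A (in metric sabins) is the sum of area * absorption
    # coefficient over all surfaces -- the unit Wallace Sabine
    # calibrated with those seat cushions.
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption
```

For instance, a hypothetical 12,000 m³ hall with 966 sabins of total absorption comes out around RT60 ≈ 2.0 s, the figure quoted for a good concert hall later in the thread.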
Methinks that the way the sound interacts inside a place like that hall cannot be replicated for a moving listener, even with a zillion-channel system of drivers in a relatively small room.
Same with a concert using a PA system, tho that'd be a bit easier since the original acoustic environs would also be thru an x-channel system of drivers/cabs. But even then, the instantaneous volume of air that something like a large flown line array with multiple 18" subs or ServoDrives can move would dwarf any home system.
For example, go get a DVD and play it in the best "pinky-pinky" rich person's home theater you can find, then go see/listen to the same flick in a real theater.
For me the difference in my mid-fields with 12" three-ways (I use them as far-fields) is striking compared to near-fields or some 5.1 home thing with tiny drivers. The effect larger far-fields have on the perceived transients of things like a timpani or a kick drum and low toms dwarfs anything I've heard come out of small-driver systems.
IME with "generic" synthetic HRTF-based processing, it is very difficult to position sounds frontally (as in e.g. the "centre channel" location,) and generally difficult to create much sense of distance. For example, sounds can be heard from behind--but it's as if someone's "breathing down your neck" (complete with "tingling" sensations!)
IIRC, this is exactly what has been found in (some of) the literature.
Another aspect is that the localisation sounds spatially "warped," although with subsequent experiences (after listening to synthetic "binaural" material on headphones) of the "real world," it also becomes (more) obvious that our aural perception is spatially warped and error-prone, anyway.
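One concrete reason for the poor frontal localisation and the "warped" imaging is the cone of confusion: the coarse binaural cues (ITD/ILD) are symmetric front-to-back, so when a generic HRTF gets the fine pinna spectral cues wrong, the front/back decision is left ambiguous. A toy ITD model shows the symmetry (the sine-path model and head radius are illustrative assumptions, not a measured HRTF):

```python
import math

def itd_sine_model(azimuth_deg, head_radius_m=0.0875, c=343.0):
    # Simplest interaural-time-difference model: an inter-ear path
    # difference of r*sin(azimuth). Returns ITD in microseconds.
    # 0 deg = straight ahead, 90 deg = directly to one side.
    return 1e6 * head_radius_m * math.sin(math.radians(azimuth_deg)) / c
```

Because sin(30°) = sin(150°), a source 30° forward of the ear axis and one 30° behind it produce the same ITD; only individual spectral filtering can break the tie, which is exactly what "generic" processing tends to miss.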
Creative Labs' "Super X-Fi" system works by generating custom HRTFs based on the analysis of photos (of your ears.) I have not tried it...
Stereo is antisocial. The best sound is always at the sweet spot, meaning it is confined to one person. In a concert hall, the best sound is usually confined to the first few rows, though some may find other spots to their liking; that's subjective.
Ambiophonics is intended to recreate realism with existing recordings. That means any format that has stereo as part of it.
In a good concert hall the RT would be around 2 s. Achieving an even, sustained decay requires a complex design, and the final result may not be what was intended.
With artificial ambiance, the user decides what RT60 is best for the recording. We need not be concerned about how a long RT affects intelligibility, as we are in control of the level of the reverberation. That is not possible in a real hall, where the RT is determined by the size and materials used. I can choose anything from 0.3 s to 3 s, or even more, like the Meyer system. Even now I cannot utilize the full ambiance, as the i9-9900K CPU is already touching 82%.
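The "user decides the RT60" knob has a simple classical form: in a Schroeder-style feedback comb filter, the loop gain is derived from the desired RT60. A sketch (the delay length and sample rate here are arbitrary assumptions):

```python
def feedback_gain(rt60_s, delay_samples, fs=48000):
    # Schroeder's relation for a feedback comb filter: choose the loop
    # gain so the recirculating signal decays by 60 dB in rt60_s
    # seconds. This single parameter is what lets an artificial-
    # ambiance system dial in any RT60 (0.3 s, 2.2 s, 3 s, ...).
    return 10 ** (-3.0 * delay_samples / (fs * rt60_s))
```

A longer target RT60 simply means a gain closer to 1, so the same filter sustains the tail longer; nothing about the room's size or materials has to change.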
But mere words are not going to convince audiophiles, or even audio engineers, who seem to overlook the importance of psychoacoustics in making sound be perceived as real. This is the cheapest and best solution for an existing audiophile system.
Of course you won’t find a review of it in Stereophile, because there is no product to sell, and it is hard to convince the readers that amplifiers and cables cannot make the sound as 3D as IACTX can. The institute is open to the public.
It gives you better sound than a multichannel system. For example, a violin can be made to sound like this with my 30 ambiance speakers.
Just because I can generate all of them doesn’t mean I am without a choice. From the numerous visitors I have had, I can safely say that the right RT is very much down to individual taste, and there seems to be no exact number. Some like classical with a 2.2 s RT and some with 1.7 s, but not one listener preferred the natural RT of the room over the generated ambiance.
In a real hall, as you walk around, the sound from stage changes. […]
BACCH4Mac and similar personal-HRTF systems do include head-tracking, which suggests it's probably rather important. This ideally requires support from the convolution engine; you can probably do it in a mixer stage that accepts a large number of individual convolution outputs, but that would probably be computationally more expensive.
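That mixer-stage idea can be sketched as crossfading between pre-convolved outputs on an azimuth grid as the head yaw changes. (The names, grid spacing, and dict layout are all assumptions; real head-tracked renderers also interpolate ITD and smooth across blocks.)

```python
import numpy as np

def mix_tracked(convolved, yaw_deg, grid_deg=30):
    # convolved: dict mapping azimuth (deg, on a fixed grid) -> a
    # binaural block of shape (N, 2) already rendered through that
    # azimuth's HRIR pair. Crossfade the two grid points bracketing
    # the current head yaw -- the "mixer over many convolution
    # outputs" approach, traded against running one convolver whose
    # filters update with the head.
    lo = (int(np.floor(yaw_deg / grid_deg)) * grid_deg) % 360
    hi = (lo + grid_deg) % 360
    w = (yaw_deg % grid_deg) / grid_deg
    return (1 - w) * convolved[lo] + w * convolved[hi]
```

The cost trade-off mentioned above is visible here: every grid point needs its own convolution running all the time, while the mixer itself is cheap.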