
Holographic depth soundstage and 3d impression 2025

Yes, what you're saying is 100% true, but the speaker may be the bottleneck here: it won't reproduce the depth in the recording if it isn't capable of it. I place different pairs of speakers side by side in different configurations. The same recordings have a great holographic effect on some speakers, and it disappears completely on others.
 
Holography, assuming it's in the recording, will depend on the lowest possible distortion from the speaker, the quantity and quality of dispersion, and excellent interfacing with the listening environment. It's always about quality.
The more a system focuses, the more we get a holographic reproduction that is faithful in tone.
 
One of the speakers that surprised me with its great holography overdrives and distorts very easily, so I don't know if distortion matters that much.
 
Over the decades what I have noticed is that recordings vary more than the equipment I listen on.
The position of large pieces of furniture, the speaker position, and the listening position all influence the impression of space, particularly imaging.
I have not owned a vast number of speakers over the last 57 years (8 pairs I can quickly bring to mind since I built my first ones), and my favourites have been the same for almost 30 years now, but moving the speakers and listening position has made the most difference IME, along with the position of larger bits of furniture.
 
My most striking experience with 'holography' (deep apparent stage depth), which I remember to this day, occurred with a nearfield 2.1 setup: NHT SuperOne speakers circa 1998-9, placed well out from the front wall (~1/3 into the room) in a largish but damped living room, toed in, with me sitting at the apex of a Cardas 'golden ratio' triangle (remember that?).

In my recollection (no doubt rose-tinted so many years later), virtually everything from pan-potted rock music to minimal or multi-miked orchestral had almost palpable front-back and left-right placement of elements, aka '3D imaging', aka 'holography'.

Because this only happened with this particular situation (among many other setups with the same speakers, in different spaces), I have to conclude it's not mainly a function of the speakers, or recording, but of fortuitous acoustic geometry -- a certain placement in a certain space.
Yeah, and the simplest way to get there is shrinking the listening triangle, a real-time process if the listener moves :)

As one gets closer to the speakers, early reflections attenuate and arrive with more delay relative to the direct sound. I speculate you can get this perception in almost any room with almost any speakers: just make the listening triangle small enough by moving yourself closer to and further from the speakers, and take note of when the "room sound", the loud early reflections, gets somewhat suppressed by your own auditory system. When you get there, experiment further with toe-in and positioning, and try to get some envelopment. The prerequisite is to be close enough that your auditory system switches state.
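A rough back-of-the-envelope for why shrinking the triangle helps, assuming a simple mirror-image model of a single sidewall reflection, 1/r level falloff, and a perfectly reflective wall (the helper name `reflection_vs_direct` is just for illustration): moving the seat closer makes the first reflection arrive both later relative to the direct sound and quieter.

```python
import math

def reflection_vs_direct(listen_dist_m, half_room_width_m, speed_of_sound=343.0):
    """Compare the direct path with a single sidewall first reflection
    (mirror-image model) for a speaker straight ahead of the listener.
    Returns (extra delay in ms, reflection level relative to direct in dB),
    assuming 1/r spreading and a perfectly reflective wall."""
    direct = listen_dist_m
    # Mirror-image source: reflect the speaker across the sidewall.
    reflected = math.hypot(listen_dist_m, 2.0 * half_room_width_m)
    delay_ms = (reflected - direct) / speed_of_sound * 1000.0
    level_db = 20.0 * math.log10(direct / reflected)
    return delay_ms, level_db

# Shrinking the listening triangle: same room, closer seat.
for d in (3.0, 1.0):
    delay, level = reflection_vs_direct(d, half_room_width_m=1.5)
    print(f"{d} m seat: reflection arrives {delay:.2f} ms late, {level:.1f} dB down")
```

With a sidewall 1.5 m away, moving from a 3 m to a 1 m seat roughly doubles the reflection's relative delay and drops it from about -3 dB to about -10 dB relative to the direct sound, which is the direction that favors the precedence effect.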
 
I have noticed that the subject of holographic sound, the depth of the soundstage, is often ridiculed here. However, having several pairs of speakers in one room and presenting them in different places, one thing stands out very clearly: a physical, tangible impression of the presence of the voice/instrument. And this, apart from the acoustics of the room, is down to certain speakers; let's not talk about amplifiers. Let's not talk about a large stage to the sides either, because when we move the speakers apart, most people will say wow, what a large stage. We are talking about holography and front-to-back depth that is tangible: you can tell that something is 2 meters in front of you, and in 50 cm steps. Let those who have had contact with this experience and are delving into it speak up. Recommend some speakers that can do this. Thank you and I love you.
It’s funny how we have countless forum threads about “imaging,” “holographic soundstage,” and “3D depth,” yet the answers never seem to satisfy, even though the laws of physics, together with measurements (like REW), already tell us what the conditions are.

IMO imaging and depth aren’t mysterious audiophile magic; they’re the result of how the source was recorded (microphone choice, room acoustics, reverberation, phase alignment, etc.) and how faithfully our playback setup preserves those conditions.

In professional studios, this is well understood. The investment ratio is often something like 30% gear and 70% acoustic treatment/control. They’re literally shaping the air before touching the EQ. And of course, they can sculpt the soundstage exactly as they (or, mostly, the customer/record company) want with DSP/DAWs like Pro Tools.

Meanwhile, despite quality speakers/gear, most of us listen in reflective living rooms with hard walls, standing waves, and a glass table reflecting more sound than light. :facepalm:

So chasing perfect imaging through, for instance, endless gear swaps is a bit like tuning a race car on a gravel driveway. The fundamentals in most average listening rooms just aren’t there.

The irony is that our source material, whether LP, CD, or hi-res file, was created in those carefully treated, time-aligned, phase-coherent spaces. Expecting the same result at home without addressing acoustic treatment and/or DSP is wishful thinking.

In short, before we hunt for “holography” and better speakers, we might want to fix our listening room first, because that’s where most of the illusion lives, and only if those illusions were professionally recorded or deliberately added as intended.
 
So, in your opinion, the speaker doesn't matter. Well, I'm waiting for you to show me the amazing 3D holographic effects from Creative computer speakers from the supermarket. I've been a sound engineer since 1996; please don't teach me how to achieve depth using reverb with a Bricasti M7 or a Sony C800G. The speaker matters. Depth is created in recording studios that use both the hated Yamaha NS-10s, which lack depth, and Bowers & Wilkins, which have it. But you're moving furniture.
 
I think we’re actually on the same page when it comes to speaker quality: I agree that speakers, and their placement for instance, do matter a lot.

What I’m really focusing on is the pursuit of imaging, holographic soundstage, and 3D depth, and how room acoustics play a decisive role in achieving that.

I never said speakers don’t matter; I’m simply emphasizing that without good room acoustics, all the speaker quality in the world can only do so much.
 
Of course, speaker placement is crucial. What I'm trying to say is that poor placement kills the imaging of holographic speakers. But good placement won't make weak non-holographic speakers holographic.
 
That's what Erin writes about Ex Machina speakers.
And that's the point. There are speakers that generate a holographic image. And there are those that create a super flat soundstage without depth or holography. I've experienced this myself. And speakers that measure worse can also be holographic. And I'm not saying it's magic, but this specific characteristic brings a lot of pleasure and a sense of physicality to vocalists and musicians. It's simply nice.
1000007274.jpg
 
For farfield speakers, I stuck with my Vandersteens. If placed correctly and decoupled from the floor, to me they still sound the best in terms of staging and imaging, probably because they are built to be phase-coherent and time-aligned from the start.

What I find a bit puzzling is that both my Vandersteens and other comparable speakers have been corrected with the same DSP software, including phase and time alignment correction. Yet, the Vandersteens still seem to have an edge, likely because their phase coherence and time alignment are designed into the speaker itself.

This makes me wonder: how effective is phase and time-alignment correction through DSP, really? Can it fully replace a speaker that’s inherently designed for these properties?
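For context on what that DSP is trying to undo, here is a minimal sketch (assumptions: a generic 4th-order Linkwitz-Riley lowpass at 2 kHz as a stand-in crossover, not Vandersteen's actual network) showing the frequency-dependent group delay a conventional crossover introduces, which phase/time-alignment correction attempts to flatten:

```python
import numpy as np
from scipy import signal

fs = 48_000
# 4th-order Linkwitz-Riley lowpass at 2 kHz: two cascaded 2nd-order
# Butterworth sections, a textbook crossover topology.
b, a = signal.butter(2, 2_000, btype="low", fs=fs)
b_lr, a_lr = np.convolve(b, b), np.convolve(a, a)

# Group delay in samples across 2048 frequency points up to fs/2.
w, gd = signal.group_delay((b_lr, a_lr), w=2048, fs=fs)
gd_ms = gd / fs * 1000.0  # samples -> milliseconds

# The delay is frequency-dependent: the band near and below the
# crossover is held back relative to the top octaves.
print(f"~100 Hz:  {gd_ms[np.argmin(np.abs(w - 100))]:.3f} ms")
print(f"~10 kHz:  {gd_ms[np.argmin(np.abs(w - 10_000))]:.3f} ms")
```

A linear-phase FIR correction can in principle flatten this in the electrical domain; whether that fully equals a speaker whose acoustic output is coherent by design (drivers physically aligned, first-order slopes) is exactly the open question above.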

Industrial rubber of the kind used for damping elevators, with the spikes placed on a copper coin: quite a difference in bass tightness.
1000007761.jpg
 
Good question, and I'll stop here.
 
I corrected each pair of speakers using Dirac and other room correction software. Each speaker sounded better in many aspects after software phase and EQ correction. But if a speaker lacked that specific depth or other characteristics, it didn't achieve them after correction. It sounded better, more accurate, clearer, and more transparent, but nothing magical happened. I should add that I positioned each speaker in the room using many well-known and lesser-known methods, including measurements, a tape measure, laser, REW, and by ear.
 
In rooms, things get complicated very quickly. The room will mess up your FR and timing, so the result depends on both the loudspeakers and how they interact with it. Some things we hear, some we reject. In terms of auditory processing, we can get better resolution either in frequency or in timing, but not both at the same time.
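That time-frequency tradeoff can be sketched with a bare-bones complex Morlet CWT (a simplified Python illustration, not the implementation behind the plots below; the `n_cycles` parameter is the knob that trades frequency resolution against timing resolution):

```python
import numpy as np

def morlet_cwt(x, fs, freqs, n_cycles=6.0):
    """Magnitude of a continuous wavelet transform with complex Morlet
    wavelets. More n_cycles -> sharper frequency resolution but blurrier
    timing; fewer cycles -> the reverse. You cannot have both at once."""
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)      # Gaussian width in seconds
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sum(np.abs(wavelet))          # unit-gain normalization
        out[i] = np.convolve(x, wavelet, mode="same")
    return np.abs(out)

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)                    # steady 440 Hz tone
freqs = np.array([220.0, 440.0, 880.0])
mag = morlet_cwt(x, fs, freqs)
print(mag.mean(axis=1))   # energy concentrates in the 440 Hz row
```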

Here are some "pretty graphs": a Morlet CWT (1/24), where you can see most of the ugly room effects (modes, SBIR, mains noise, fridge, desktop PC with 10 fans), you name it.

First, a sort of heat map with no Z-axis perspective:

Morlet CWT no Z.jpg



Some perspective to hopefully highlight what is potentially heard, and what is usually out of scope of immediate attention:


Morlet CWT.jpg


Equal perspective of 1 on all axes:

Morlet CWT Z.jpg


Some wishful thinking about what I actually hear if everything's cleared up a bit, as in clarity potential of signals that are transient in nature, if you will:

Morlet CWT vector average.jpg


IMO/IME, it is neither this nor that, and it is still highly complex to make sense of when not only the ears but the brain in between is involved. The above is from a system with some horizontal and vertical directivity control, in a time-intensity trading setup, and a stereo bass setup with some modal control (mono low-frequency transients are perceived as a virtual, phantom source).

I must say that if you can create some favorable conditions (loudspeakers and rooms combined), then yes, the perception is still recording dependent, and that's about your best guess...
 
Recording in stereo is why we get specific image placement. IME, distance from the front wall most affects soundstage depth, and when a loudspeaker manufacturer takes steps to match speaker pairs, the tighter the 'stage and image placement is. Room acoustics and speaker placement within the room can't be ignored.
 

This talk by Francis Rumsey is, I think, highly relevant to the topic; it may introduce you to terms such as "decoloration", "virtual focused source", "uncanny valley", etc.

After the introduction, about 15 minutes into the talk, he goes into virtual-source localization and perceived coloration (or a lack thereof) in two-channel stereo, and later compares it with advanced systems and the sorts of problems that have been identified as more and more channels are introduced. Being spatially superior, of course, but still subject to timbral problems and aliasing artifacts, there are some "dirty little secrets" that may influence preference in such systems.

The key concepts here are the various applications and the perception of timbral coloration with regard to the number of channels. That two-channel stereo sounds better than it has any right to in comparison may come as a surprise.

It's a lengthy but worthwhile presentation. You all have a nice day. :)
 
Yes, the sound stage is certainly an element in the recording.

AI slop summary:

That is a crucial distinction in audio engineering. The short answer is: No, the standard pan control in most DAWs does not apply any automatic phase (time) adjustment.
Standard panning in DAWs works almost exclusively on amplitude (volume), not time.
How Standard Panning Works
The classic pan pot (the knob you turn left/right) relies on the Inter-aural Level Difference (ILD), which is one of the primary ways humans localize high-frequency sounds.
* Amplitude Control: When you pan a mono source, say, 75% to the right, the panning algorithm (governed by the pan law) does two things:
* It reduces the amplitude of the signal sent to the Left channel.
* It keeps the amplitude of the signal sent to the Right channel higher.
* Pan Law: To prevent the sound from getting louder when it's panned to the center (where the signals from both speakers acoustically sum), a Pan Law is applied. This is a subtle reduction in volume to both channels when the sound is centered.
* The formula dictates how the gain ratio changes between the Left and Right channels as a function of the pan position.

* This entire process is purely an amplitude calculation. Time (phase) is not involved.
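A minimal sketch of one common pan law, the constant-power (-3 dB center) curve (the helper name `pan_gains` is just for illustration; DAWs offer several laws, e.g. -3, -4.5, or -6 dB center):

```python
import math

def pan_gains(pan):
    """Constant-power (-3 dB center) pan law.
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    The gains always satisfy L^2 + R^2 = 1, so the summed acoustic
    power stays constant as the source moves across the stereo arc."""
    theta = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

for p in (-1.0, 0.0, 0.75):
    left, right = pan_gains(p)
    print(f"pan {p:+.2f}: L={left:.3f} R={right:.3f}")
```

Note that the function touches only amplitudes; there is no delay or phase term anywhere, which is exactly the point the summary makes.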
Phase, Time, and Psychoacoustics
The reason standard panning is not done with phase (time delay) is that the effect would be frequency-dependent and would cause severe cancellation issues when summed to mono (a common compatibility check).
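The cancellation is easy to demonstrate numerically (a sketch with assumed values: a 0.5 ms inter-channel delay at 48 kHz; the first mono-sum null then lands at 1 kHz and the first reinforcement at 2 kHz):

```python
import numpy as np

fs = 48_000
delay_samples = 24                    # 0.5 ms at 48 kHz
f_null = fs / (2 * delay_samples)     # first comb null: 1 kHz
f_peak = fs / delay_samples           # first reinforcement: 2 kHz

def mono_sum_level(freq):
    """RMS of (x + delayed copy of x) summed to mono, for a sine at freq."""
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * freq * t)
    mono = x[delay_samples:] + x[:-delay_samples]  # sum with 0.5 ms offset
    return np.sqrt(np.mean(mono**2))

print(f"RMS at {f_null:.0f} Hz (null): {mono_sum_level(f_null):.4f}")
print(f"RMS at {f_peak:.0f} Hz (peak): {mono_sum_level(f_peak):.4f}")
```

At the null frequency the delayed copy is exactly half a cycle out of phase and the mono sum collapses toward zero, while at the peak frequency it doubles: the classic comb filter that makes delay-based panning hostile to mono playback.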
The Role of the Engineer (Phase/Time)
The engineer must calculate and implement phase/time adjustments themselves, but only when using more advanced or specialized techniques.
* Inter-aural Time Difference (ITD): To create a more realistic, "holographic" placement (especially for lower frequencies, which humans localize using ITD), an engineer must manually apply a tiny delay (e.g., 10 to 700 microseconds) to one of the channels.
* This is not the standard pan control, but rather an effect like a Haas Effect (or Precedence Effect) delay or a specialized binaural panning plugin. These effects explicitly manipulate the time/phase relationship.
* Phase Alignment (Mono Compatibility): The most common reason an engineer manipulates phase is to correct issues that already exist, usually from multiple microphone recordings (e.g., a kick drum mic and an overhead mic). This is done using:
* Polarity Inversion: Flipping the polarity (a 180° phase flip) using a button.
* Time Shifting: Nudging a track forward or backward by a few samples/milliseconds to align the waveforms.
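The manual ITD technique from the list above can be sketched as follows (an illustrative helper, `itd_pan`, using a whole-sample delay for simplicity; at 48 kHz one sample is about 20.8 µs, within the microsecond range the summary quotes):

```python
import numpy as np

def itd_pan(x, fs, itd_us):
    """Pan a mono signal by interaural time difference alone: delay one
    channel by itd_us microseconds. Positive itd_us delays the right
    channel, pulling the image toward the left. Integer-sample delay
    only; production tools would use fractional-delay interpolation."""
    d = int(round(abs(itd_us) * 1e-6 * fs))
    delayed = np.concatenate([np.zeros(d), x])[: len(x)]
    return (x, delayed) if itd_us > 0 else (delayed, x)

fs = 48_000
x = np.sin(2 * np.pi * 300 * np.arange(fs) / fs)
left, right = itd_pan(x, fs, itd_us=300)   # ~300 us, well inside natural ITD range
```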
Advanced Panning Methods
Some specialized panning plugins (often called binaural or spatialization tools) go beyond simple amplitude panning. These tools use complex DSP—like Head-Related Transfer Functions (HRTFs)—which do involve both level (ILD) and frequency-dependent time differences (ITD) to simulate how your head, ears, and torso physically change a sound wave before it hits your eardrums.
 
Yes. As I mentioned before, having owned a number of time/phase-coherent Thiel speakers, as well as many others, there has always been a level of precision, focus, and density to the sonic images on the Thiels that makes listening to other speakers seem a little more vague and swimmy in their imaging. It’s not something usually obvious from just listening to a different speaker by itself, because many speakers seem to image quite precisely. It’s only when I hear them compared to a Thiel speaker that the other speaker’s imaging is less focused and less corporeal.
It’s like the Thiel speaker is “lining up” all the sonic information in the recording more precisely, putting a finer lens on the focus.

I still don’t know whether this has anything to do with the time/phase coherence, or perhaps more with the coax mid/tweeter arrangement.

Having said that, having listened to speakers like KEF quite a number of times, which use the coax arrangement, they didn’t seem to have the focus and density of the Thiel speakers, and the first time I noticed the particular precision of Thiel versus others was in a showroom during the 90s, hearing the Thiel 3.6, which was time/phase coherent but didn’t use a coax arrangement.
I have a pair of Q7 Meta and they are perfectly able to produce a holographic soundstage.

Also, I'm not sure everyone in the thread is on the same page. Some speak of the physical feeling of presence, others of imaging. I'd imagine the effect is created by good imaging plus great bass drivers. A solid, clean reproduction of the lows and general tonality are half of the story, IMHO. But that's not holography in itself; it's impact. The classic Chesky Records Ultimate Demonstration Disc does a good job of explaining these terms with audio examples, and holography/imaging is a different thing from impact.

Anyway, I was able to get a holographic soundstage with crosstalk cancellation on a variety of speakers. I DSP'ed a B&W Zeppelin to produce a room-filling soundstage, go figure... It was almost uncanny hearing these sounds where there was nothing. If you want to improve it without resorting to DSP, you need both channels matched not just in FR but in time, with early reflections at least 15 dB down, and generally high IACC scores.
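For anyone unfamiliar with the IACC figure mentioned here, it is conventionally the maximum of the normalized interaural cross-correlation within about ±1 ms of lag; a minimal sketch (the helper name `iacc` is just for illustration):

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: peak of the normalized
    cross-correlation between channels within +/- max_lag_ms.
    1.0 = channels identical (up to a small shift); lower = decorrelated."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = np.sqrt(np.sum(left**2) * np.sum(right**2))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.sum(left[lag:] * right[: len(right) - lag])
        else:
            c = np.sum(left[:lag] * right[-lag:])
        best = max(best, abs(c) / norm)
    return best

fs = 48_000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 500 * t)
noise = np.random.default_rng(0).standard_normal(fs)
print(iacc(sig, sig, fs))     # identical channels -> 1.0
print(iacc(sig, noise, fs))   # decorrelated -> much lower
```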

The rest is speaker placement. For my Q7s I'm doing basically the opposite of what Erin suggests for KEFs: not right against the wall, and I'm absorbing their first reflections. I listen in a fairly big room with a high ceiling, 11.8 x 4.85 x 3.15 m (180 m³ volume), with the left side having three big 2.8 x 2.8 m windows. I have two big gobos (120 x 180 cm) at the side reflection points. I can hear scale, depth, width, and generally stuff dancing around between the speakers and me.
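Given those dimensions, the first axial room modes can be estimated with the standard f = nc/2L formula (a rigid-parallel-wall approximation; tangential and oblique modes are ignored, and real windows and gobos will shift things):

```python
# Axial room-mode frequencies f = n * c / (2 * L) for the room
# quoted above (11.8 x 4.85 x 3.15 m); c = 343 m/s.
C = 343.0
DIMS = {"length": 11.8, "width": 4.85, "height": 3.15}

def axial_modes(dim_m, n_modes=3):
    """First n_modes axial mode frequencies (Hz) for one dimension."""
    return [n * C / (2.0 * dim_m) for n in range(1, n_modes + 1)]

for name, d in DIMS.items():
    modes = ", ".join(f"{f:.1f} Hz" for f in axial_modes(d))
    print(f"{name} ({d} m): {modes}")
```

The long dimension puts the first length mode down around 14-15 Hz, below the audible range, which is one reason a room this large tends to behave better in the bass than a typical living room.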

I use DSP room correction with Audiolense and Home Audio Fidelity Room Shaper. I would say the KEFs lack a bit of the impact needed for the convincing effect, but as far as the 3D-ness of the scene goes, they sound like Ambiophonics is on even when it's off.
 
I like how my big beamy electrostatic hybrids place a "deep" image in front of me. And, if it is in the recording, I get all 180 degrees.

Once I listened through* them nearfield, about a foot from each speaker, it was scary. Need to try that again someday.

---

* Another time, closed my eyes and spun around and tried to locate a speaker by sound.

My nose bumped into it (they're taller than me) but it still sounded like it was two or three feet away.

---

The medium-big Magicos in the Big Room at the Big Show with the Big Peripherals perked my ears. Don't have the dollars or the space for them, though.
I always sit as close to the speakers as possible. It reduces the audibility of adverse room effects. In my own system, my ears are 170cm (67 inches) from the baffle of my very large DIY quasi-dipolar speakers (see profile photo.) At this distance, they "just" begin to form a cohesive image with a few inches to spare.
 
Funny, 8 pages of postings and no one has mentioned headphones. As if 3D/holography does not apply there.
 