Pointing out what are not the right conditions.
More from Griesinger's paper:
If the listener is in the sweet spot, most “main microphone” techniques reproduce horizontal localization well. In fact, they can often give a more evenly spaced image than a spaced technique, as in figure 7.
But the major advantage for most users is that these techniques produce a more natural sense of the depth of the image. Sound sources appear to be behind the loudspeaker basis, and not in the loudspeakers themselves.
But the impression of distance or depth is not uniform across the image. Careful listeners will notice that instruments to the far left or far right of the microphone array often seem closer to the listener than instruments in the center, even though the instruments in the center are closer to the microphone.
The perception of the depth of a sound image is not a mystery. The perceived distance of a sound source depends on early lateral reflections, which means reflections that arrive at the listener from directions other than the direction of the source. A main microphone array can provide these reflections – but it does so only for some of the sources, particularly those near the center.
The major point of this paper is that you can achieve the same or better results through careful addition of early reflections generated electronically, and thus achieve both a natural sense of depth, and good horizontal localization over a wide listening area.
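The idea of electronically generated early reflections can be sketched as mixing a few delayed, attenuated copies of a dry signal into a channel. This is only an illustration of the principle: the delay times, gains, and the function name `add_early_reflections` are my own assumptions, not values or methods from the paper.

```python
# Sketch: synthesize early reflections by summing delayed, attenuated
# copies of a dry signal. All parameter values here are illustrative.

def add_early_reflections(dry, reflections, sr=48000):
    """Mix delayed, attenuated copies of `dry` into a new signal.

    `dry` is a list of samples; `reflections` is a list of
    (delay_seconds, gain) pairs, one pair per early reflection;
    `sr` is the sample rate in Hz.
    """
    max_delay = max(d for d, _ in reflections)
    out = [0.0] * (len(dry) + int(max_delay * sr))
    for delay_s, gain in reflections:
        offset = int(delay_s * sr)
        for i, sample in enumerate(dry):
            out[i + offset] += gain * sample
    return out

# A unit impulse as the dry source, at a toy sample rate of 1 kHz:
dry = [1.0] + [0.0] * 10
# Two hypothetical lateral reflections at 10 ms and 20 ms:
left_wet = add_early_reflections(dry, [(0.010, 0.5), (0.020, 0.25)], sr=1000)
```

In a real mix these reflection taps would be routed to speakers lateral to the source direction, since it is the lateral arrivals that carry the distance cue.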
....
If we want good localization off axis, we must use amplitude panning, not time-delay panning, for all sources in the front. The panning should be from center to left and from center to right – not from left to right. We want to use the center speaker – which means that a sound image that comes from the center should be at least 6 dB louder in the center speaker than it is in the left and the right. These two requirements eliminate most single-point microphone arrays. Even for the front channels alone it is difficult to meet these criteria with pressure-gradient microphones located on a single stand, and it is impossible with omnis.
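One common way to meet these two requirements is pairwise constant-power amplitude panning between the center speaker and one edge speaker. The sketch below assumes that scheme; the function `pan_front` and its angle convention are my own, not taken from the paper. A centered source then feeds only the center speaker, which satisfies the 6 dB criterion with room to spare.

```python
import math

def pan_front(angle):
    """Constant-power amplitude pan across a (L, C, R) front triple.

    `angle` runs from -1 (full left) through 0 (center) to +1 (full
    right). Panning is pairwise: center-to-left for negative angles,
    center-to-right for positive ones, so a centered source is
    reproduced entirely by the center speaker.
    """
    theta = abs(angle) * math.pi / 2   # 0 .. pi/2 within the speaker pair
    near = math.cos(theta)             # gain toward the center speaker
    far = math.sin(theta)              # gain toward the edge speaker
    if angle < 0:
        return (far, near, 0.0)        # gains for (L, C, R)
    return (0.0, near, far)

# A centered source lands only in the center channel:
L, C, R = pan_front(0.0)               # (0.0, 1.0, 0.0)
```

Constant-power panning keeps the summed power of the pair at unity for any angle, so the image does not get louder or softer as it moves.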
There is a further requirement on the microphone technique used for the two front channels and the two rear channels. The reverberation they pick up should be decorrelated. This means the left and right main microphones must either use one of the combinations of patterns and angles given in Figure 20, or be separated by at least the reverberation radius of the room.
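Decorrelation can be checked numerically: the normalized correlation coefficient of the reverberant portions of the two microphone signals should be near zero. The sketch below uses the standard Pearson coefficient; the function name and the test signals are illustrative, not from the paper.

```python
import math

def correlation(x, y):
    """Normalized (Pearson) correlation of two equal-length signals.

    Returns a value in [-1, 1]; values near 0 indicate the two
    signals are decorrelated.
    """
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Identical signals correlate near +1; a sign-inverted copy near -1.
rho_same = correlation([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
rho_flip = correlation([1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0])
```

In practice one would run this over windowed blocks of the recorded reverberation tail rather than over short toy sequences.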
The reverberation picked up by the rear microphones should also be decorrelated – and should be decorrelated with the front channels. In practice this means that the rear microphones must be separated from the front microphones by a distance of at least the reverberation radius.
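The reverberation radius (also called the critical distance) can be estimated from the standard approximation r ≈ 0.057·√(V/T60), with the room volume V in cubic meters and the reverberation time T60 in seconds. The hall dimensions below are assumed purely for illustration.

```python
import math

def reverberation_radius(volume_m3, rt60_s):
    """Reverberation radius (critical distance) for an omnidirectional
    source: the distance at which direct and reverberant sound energy
    are equal. Standard approximation: r = 0.057 * sqrt(V / T60).
    """
    return 0.057 * math.sqrt(volume_m3 / rt60_s)

# A mid-sized hall (assumed values): 10,000 m^3, T60 = 2.0 s
r = reverberation_radius(10000, 2.0)   # roughly 4 m
```

For such a hall the front and rear microphones would need to be spaced on the order of 4 m apart for their reverberant pickup to be decorrelated.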
If we eliminate all stereo main microphone techniques, and all closely spaced arrays, what is left? The situation is not as bleak as it looks. Most practicing engineers already space their rear microphones away from each other and from the front microphones. They are also already expert in the careful use of multi-microphone technique. They use this technique for a simple reason – it works well in practice. I am only suggesting that it also works well in theory. Very few of the major recording engineers try to record surround (or stereo) from a single stand. This method seems to be reserved for schools and broadcast stations. They will simply have to catch up with the rest of us.