
What is audio meant to do?

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,741
Likes
38,992
Location
Gold Coast, Queensland, Australia
I always felt that the opposite would be the ideal: that is, that the listening room should be as close to anechoic as possible.

The first Yamaha DSP-1 processor was designed to be used with a 'dead' room, six speakers placed strategically, six channels of amplification and up to 88 early reflections. The story I was told was that it was created so Yamaha didn't have to fly pianos (at great cost) around the world for performers to hear in their chosen venues when deciding whether to use Yamaha as their piano of choice. Apparently Elton was one of the performers they worked with to get it right, or so I was told. Could be BS, but there's probably some truth in it; he does play Yamaha...

With recordings that were close-miked and contain no venue reflections/reverberation, it is extremely convincing. I have some photos of the sophisticated mic array and gear used to record the impulse responses in the various venues around the world. It was a very expensive exercise for Yamaha. The conversion to DSP and embedding the acoustics in silicon cost many millions.
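The stored-venue-acoustics idea is essentially what we now call convolution reverb: record a hall's impulse response once, then convolve any dry (close-miked) signal with it on playback. A minimal FFT-based sketch (my own illustration, not Yamaha's algorithm; NumPy assumed):

```python
import numpy as np

def convolve_reverb(dry, ir):
    """Apply a stored venue impulse response to a dry signal.
    FFT-based convolution; output length is len(dry) + len(ir) - 1."""
    n = len(dry) + len(ir) - 1
    size = 1 << (n - 1).bit_length()  # next power of two for the FFT
    wet = np.fft.irfft(np.fft.rfft(dry, size) * np.fft.rfft(ir, size), size)
    return wet[:n]

# Toy example: a unit impulse through a 3-tap "room" just returns the taps.
ir = np.array([1.0, 0.5, 0.25])      # direct sound plus two reflections
dry = np.array([1.0, 0.0, 0.0, 0.0])
wet = convolve_reverb(dry, ir)
```

A real venue IR runs to seconds of audio, which is why burning this into mid-1980s silicon cost so much.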

I remember hearing it first in 1985, in a treated small ballroom at a hotel convention, run with six NS1000M monitors and three big M80 power amps. They ran the whole lot through the MVS-1 (IIRC) master volume unit (very rare to see one of those) to control all six channels at once. This was, after all, 1985.

It was truly amazing, apart from the noise floor of the DSP-1, which was the one real issue.

I still have a DSP-1 in my storeroom, along with a few DSP-100s. They were incredible pieces of gear for their day. The concept of anechoic (or the like) recordings along with recorded and stored venue acoustics that could be recalled at the touch of a button, is still a valid concept IMO.

We used to have a ball making giant Münster cathedrals with reverb times of 60 seconds, or tweaking the various Jazz Clubs to sound like the roof was going up or down and the walls were disappearing.
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,250
Likes
17,201
Location
Riverview FL
=> IS THERE A DIFFERENCE BETWEEN SOUND AND AUDIO?

Sound comes from any source.

There's rain and nearby lightning creating quite a bit of sound right now.

Audio comes from little boxes usually playing back a recording of sound.

Currently, the audio here is the HDRadio.
 
OP
andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,408
=> IS THERE A DIFFERENCE BETWEEN SOUND AND AUDIO?

Great question. IMO they are both technically the same thing, but audio is the word we specifically tend to use in the context of recording and reproduction.

But, seriously, you seem to be implying that a mic channel cannot walk and chew gum at the same time; that it cannot pick up multiple sounds at the same time, be they direct on axis or off axis, real or reflected sound, but only one sound source at a time. I don't think this squares with reality at all. Mics pick up all sounds within their pickup pattern. They pick up a sound field, not individual instruments, not individual spatial or reverberant signatures. And, in the case of Mch, the diffuse nature of reflected hall sounds is consistent with the less precise nature of phantom images acquired by the mics in multiple channels from different perspectives. It all comes together on playback from a congruent Mch speaker array. It is a complex system, with a complex sum greater than the sum of the individual components.

From my discussions with Mch classical recording engineers, I can assure you that in most cases every attempt is made, regardless of mic channel count, to provide a proper balance between direct and reflected sound in order to maintain a satisfactory spatial image of the front soundstage and the reflected sound field heard by the audience. They do mix and master using the same angular speaker array as specified for home playback. You only assume that their mic technique is detrimental to proper spatial capture, but without evidence. And, again, there are examples far too numerous to name of recordings successfully demonstrating this, from minimalist to extensively multi-miked. I think your comment on the inadequacies of the engineering indicates that you may not have actually heard those numerous examples on a proper system. I have, and, personally, the effect does not sound artificial at all, certainly not compared to the limitations of stereo.

Ok, we are definitely getting closer to each other's POVs here I think :)

I'm definitely not implying that a mic "cannot pick up multiple sounds at the same time, be they direct on axis or off axis, real or reflected sound". Quite the opposite, in fact. I'm saying that a mic always picks up both direct and reflected sound, and that this is the heart of the problem, because the mic receives these sounds from a multitude of different angles/points in the performance space, whereas the speaker reproducing the recording plays back all these sounds from only one point in space. Or to put it another way, the sound field in a performance space comes from every direction, but when reproduced on a speaker comes from only one direction.

This is the essence of the problem for me, or at least, the reason why loudspeaker reproduction can never hope to recreate the original acoustic event.

I haven't discussed this with classical recording engineers much but what you say about your discussions with them is consistent with what I would expect. When you say that at its best "the effect does not sound artificial at all" I 100% agree with you. I think the effect can be extremely enjoyable and quite convincing. My point was not that it sounds artificial, but that it is artificial, i.e. the illusion of a natural-sounding soundfield is achieved via artificial means. I believe this must be the case given the problem I outlined in the OP and rephrased in the first paragraph of this post.
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
Sound comes from any source.

There's rain and nearby lightning creating quite a bit of sound right now.

Audio comes from little boxes usually playing back a recording of sound.

Currently, the audio here is the HDRadio.

Maybe the pipe is the sound and audio is the picture of the pipe?

 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
@maverickronin said something yesterday that seems to have passed by unnoticed. He said
Perfect beam forming to your ears will remove most of your HRTF as well as the acoustic crosstalk which stereo mixes rely on for proper spatial localization
I believe this to be the case: proper Blumlein-style stereo needs there to be crossfeed from each speaker to the 'wrong' ear.

The Blumlein pair microphone arrangement converts direction into volume difference in the recording; the speakers & crossfeed convert volume difference to an effective time-of-arrival difference at the ears upon playback. Panpot stereo is actually proper Blumlein-encoded stereo.
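To make "direction into volume difference" concrete: a Blumlein pair is two figure-of-eight mics crossed at ±45°, each with the standard cosine pickup pattern. A small sketch (the sign convention, positive angle = left, is my assumption):

```python
import math

def blumlein_gains(theta_deg):
    """Channel gains of a Blumlein pair (two figure-of-eight mics at +/-45 deg)
    for a source at theta_deg from centre (positive = toward the left mic)."""
    t = math.radians(theta_deg)
    left = math.cos(t - math.pi / 4)   # left mic's axis points at +45 deg
    right = math.cos(t + math.pi / 4)  # right mic's axis points at -45 deg
    return left, right

# Direction is encoded purely as a level difference between the channels:
l, r = blumlein_gains(0)        # centred source -> equal levels
l45, r45 = blumlein_gains(45)   # source on the left mic's axis -> left only
```

Nothing here involves time of arrival: a coincident pair records every source simultaneously in both channels, which is exactly the point of the discussion that follows.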

If this is correct, what does it mean for listening to the same recording in headphones versus speakers? What does it mean for the validity of BACCH-style crosstalk cancellation for some types of recording?

When do recording engineers use Blumlein pairs, and when do they use spaced microphones? Presumably this matters for whether you play back over speakers or headphones, or compromise for both. Can a (consistently-made) recording be processed electronically for proper reproduction on both systems?
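On the "processed electronically for both systems" question: the usual headphone-side answer is crossfeed, which reintroduces a little of the acoustic crosstalk that speakers provide for free. A naive sketch (the delay and gain values are illustrative choices, not canonical; real implementations also low-pass filter the crossfed signal):

```python
def crossfeed(left, right, fs=48000, delay_ms=0.3, gain_db=-6.0):
    """Naive headphone crossfeed: mix each channel into the opposite one,
    attenuated and delayed, roughly mimicking speaker-to-far-ear crosstalk."""
    d = int(round(fs * delay_ms / 1000.0))  # interaural delay in samples
    g = 10 ** (gain_db / 20.0)              # crossfeed level as linear gain
    n = len(left)
    out_l = [left[i] + g * (right[i - d] if i >= d else 0.0) for i in range(n)]
    out_r = [right[i] + g * (left[i - d] if i >= d else 0.0) for i in range(n)]
    return out_l, out_r

# A click in the left channel only: after crossfeed, an attenuated, delayed
# copy appears in the right channel, as it would acoustically from a speaker.
left = [1.0] + [0.0] * 19
right = [0.0] * 20
out_l, out_r = crossfeed(left, right)
```

Going the other way (removing crosstalk for speaker playback of binaural material) is what the BACCH-style processors mentioned above attempt, and is considerably harder.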

How does the crosstalk aspect work with a centre channel? Can the Blumlein-style idea simply be scaled to N discrete channels or, technically, does it only work with two?

Finally, is this a 'duh!' observation, or a subtlety that isn't all that well known..? (Or plain wrong?). It's not something that I really ever worried about - simply wanting to build two 'straight wires with gain'. But all this talk of stereo, crosstalk, headphones, speakers, multi-channel etc. makes me curious...
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
LAWRENCE JACOBY (b. 1936)

May I introduce you to Lawrence Jacoby? You probably know him already, don’t you?

Well, it appears the character is based on Terence Kemp McKenna. From Wikipedia:

«...an American ethnobotanist, mystic, psychonaut, lecturer, author, and an advocate for the responsible use of naturally occurring psychedelic plants».

The reason I started to think about Dr. Jacoby is because it seems like he knew exactly how to bridge the gap between the pipe and the picture of the pipe, the sound and the audio.

Good sound need not be a pipe dream! So if you really want your high-end system to bring the event to you, I suggest you start growing some psychedelic plants.

But don’t forget to be responsible!

o_O:Do_O


Background: http://twinpeaks.wikia.com/wiki/Lawrence_Jacoby

https://www.thevintagenews.com/2017...erence-mckenna-the-american-psychedelic-guru/

https://en.m.wikipedia.org/wiki/Terence_McKenna

 

tomelex

Addicted to Fun and Learning
Forum Donor
Joined
Feb 29, 2016
Messages
990
Likes
572
Location
So called Midwest, USA
Great question. IMO they are both technically the same thing, but audio is the word we specifically tend to use in the context of recording and reproduction.



Ok, we are definitely getting closer to each other's POVs here I think :)

I'm definitely not implying that a mic "cannot pick up multiple sounds at the same time, be they direct on axis or off axis, real or reflected sound". Quite the opposite, in fact. I'm saying that a mic always picks up both direct and reflected sound, and that this is the heart of the problem, because the mic receives these sounds from a multitude of different angles/points in the performance space, whereas the speaker reproducing the recording plays back all these sounds from only one point in space. Or to put it another way, the sound field in a performance space comes from every direction, but when reproduced on a speaker comes from only one direction.

This is the essence of the problem for me, or at least, the reason why loudspeaker reproduction can never hope to aim for recreating the original acoustic event.

I haven't discussed this with classical recording engineers much but what you say about your discussions with them is consistent with what I would expect. When you say that at its best "the effect does not sound artificial at all" I 100% agree with you. I think the effect can be extremely enjoyable and quite convincing. My point was not that it sounds artificial, but that it is artificial, i.e. the illusion of a natural-sounding soundfield is achieved via artificial means. I believe this must be the case given the problem I outlined in the OP and rephrased in the first paragraph of this post.


You said: "I'm saying that a mic always picks up both direct and reflected sound, and that this is the heart of the problem, because the mic receives these sounds from a multitude of different angles/points in the performance space, whereas the speaker reproducing the recording plays back all these sounds from only one point in space. Or to put it another way, the sound field in a performance space comes from every direction, but when reproduced on a speaker comes from only one direction"

I agree! This is really the key to our two-channel or even multichannel reproduction. Having heard fine multichannel systems, I agree that when you get them balanced they sound much better than two-channel stereo. But because we are using point sources, and experiencing these sounds in our own rooms, even with great multichannel systems we are still missing, at a hand-waving guess, 85% of the actual sound field you would experience at the concert. The road to audio hell is when folks keep searching for more than what multichannel can do as far as "getting there". Pick your own value; I will give the benefit of the doubt and say we are getting only 15% of the entire sound field. If you serve me 15%-milkfat milk, it will never taste like full-fat milk. And I don't care how much you shine your glass, change its color, add a green rim to it, set it on wood blocks, isolate it from the ground, spike it to the ground, cluster it with diamonds, use crystal instead of glass, use old glasses vs new ones, on and on: trying to get more than that 15% is just a waste of time. You simply can't get there from here.

However, you can prefer vinyl or use your favorite speakers or whatever to try to make that 15% more satisfying, and I think folks naturally do that when they say, hey, I like that stereo system's sound over that one's, I like those speakers better, I like digital because it sounds cleaner to me, etc.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,793
Likes
37,701
In principle, it seems directional microphones, or controlled directional arrays of microphones, could run DSP on the signals they record and process out reflections to hear exactly what human ears/brain hear. The human hearing system is like a pair of omni pressure mics, each hidden in a resonant tube behind a directional mini-horn, separated by 7 inches, that uses the two-channel input along with positional data to process the sound we hear. I see no reason this couldn't be attacked this way and result in a system that hears exactly what a person would hear. Compensate playback with IEMs and you'll get perfect results. Genuine sound of you being there, wherever there is. Binaural tries this because it is easy. Doing the whole thing is complex, but seems eminently doable if work were to focus on solving each problem instead of taking good-enough, enjoyable-enough shortcuts.
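The two-omni-mics-7-inches-apart picture already gives you one localization cue for free: the interaural time difference. A minimal sketch of recovering source azimuth from that delay, using the simple straight-path model (my assumption; real heads add diffraction, which the Woodworth model accounts for):

```python
import math

C = 343.0   # speed of sound in air, m/s
D = 0.178   # mic (ear) spacing, ~7 inches in metres

def arrival_angle(itd_s):
    """Recover source azimuth (degrees off centre) from an interaural time
    difference, using the straight-path model itd = D * sin(theta) / C."""
    return math.degrees(math.asin(max(-1.0, min(1.0, itd_s * C / D))))

# A source dead ahead arrives simultaneously at both mics; one fully to the
# side arrives D/C seconds (~0.5 ms) earlier at the near mic.
front = arrival_angle(0.0)
side = arrival_angle(D / C)
```

A two-channel "hear what a person hears" system would combine this with level and spectral (pinna/horn) cues, which is where the hard DSP lives.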
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,915
Likes
16,748
Location
Monument, CO
Back when I was recording I would typically use an X-Y pair mounted high near the group and an omni some feet back as an "ambience" mic. It actually worked pretty well; the X-Y pair provided the "up front" directionality, and the back mic mixed in at a lower level helped provide a bit of "spaciousness" to the recording.
 

Guermantes

Senior Member
Joined
Feb 19, 2018
Messages
486
Likes
562
Location
Brisbane, Australia
It's interesting that we associate "audio" with modern sound technologies, whereas the etymology of the word ties it more to perception. I've always considered "sound" to be the overarching term and the one most closely connected with physics, while "audio" relates to our use of sound -- its entanglement with techne.
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
Could we say that audio is supposed to perform according to specifications, targets, with the highest possible precision?

And those targets are set according to theory based on physics and psychoacoustics.

The target is a transparent goal; no secret sauce or magic there.

The implementation, however, i.e. the precision with which the manufacturer attains his transparent targets, that’s where the skill is, what we may call the magic or the secret sauce.

This may sound like an egghead speaking, empty sophistry. I think it’s not, because people tend to mix up where to put the magic and where to be fully transparent.

Just recently, I read this statement from a headphone manufacturer:

«Headphone measurement is an art as much as a science».
Source: https://www.audeze.com/headphone-measurements

Are @amirm’s measurements as much art as science? No wonder the Schiit people complain.

So what’s my point? If you don’t have a clear vocabulary, if you don’t know where to be transparent (be transparent in your targets!) and where to have secrets (how you managed to minimize error and maximize precision in attaining those targets), you will not be able to have a meaningful conversation on audio. Instead of discipline (which is hard, because it demands a certain form of brain activity, cf. Kahneman), people like to talk freely (an easier, more joyful form of brain activity, cf. Kahneman) about the sound, which is, to be fair, the final judge. But the way to the best sound goes via a good grip on audio science and how to apply that science to a product.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
An article quite relevant to my previous (possibly baffling?) post regarding microphone techniques and audio reproduction.
http://www.regonaudio.com/MICROPHONE THEORY word.htm

R.E. Greene talks about a recording by Boyk that tests various stereo microphone techniques for their absolute accuracy in reproducing source direction when listened to over loudspeakers.
...As it happens, one of the test set-ups reproduces reality so well as to constitute a reference for all the rest. This is the Blumlein ribbon-microphone set-up, listened to with the playback speakers positioned according to Blumlein theory, i.e., speakers pointed directly at the listener, exactly equidistant from the listener, and each speaker 45 degrees from the central axis.
So it seems that the original Blumlein patent really does correspond with theoretical perfection - in one important sense at least.
An interesting theoretical point arises here. In Blumlein stereo, a source is always picked up exactly simultaneously in the two channels; this is indeed the only possibility in one-point miking... Thus... the initial arrivals of the "clicks" in the two ears are exactly simultaneous... So the time-of-arrival localization mechanism would say that the source was exactly centered, no matter where it actually was. It is well known that nonsimultaneous arrivals pull the apparent position of a source off-axis, and pull so strongly as to override contradictory amplitude information to a surprising extent.

One might thus a priori expect that simultaneous arrivals would be strongly centerpulling. If this were so, then the Blumlein "click" tests on the recording would tend to collapse toward the center, yielding a distorted pattern. No such collapsing does happen, in my experience and, as verified by Boyk, in the experience of other listeners so far. ...the Blumlein listening results suggest that exactly simultaneous arrivals are interpreted as non-information, rather than as centering information, if other directional information is present. Here is clearly an interesting psychoacoustic phenomenon to be investigated further.
The question is: has he compared the overall composite signals that arrive at each ear, rather than just the theoretical timing of absolute first arrival? I believe that the brilliance of the Blumlein technique could be that the crosstalk combined with the direct sound at each ear creates a composite signal that - if you were to cross-correlate the signals like a human brain seems to - would give a stable time-of-arrival difference. It may even be better than that: it might even be substantially stable as the listener turns their head or even moves around.
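The cross-correlation idea above is easy to make concrete: slide one ear signal against the other and find the lag where they best line up, which is roughly what interaural time-difference models of hearing assume. A brute-force toy (my illustration, not Greene's or Boyk's analysis):

```python
def xcorr_lag(a, b):
    """Brute-force cross-correlation: returns the shift (in samples) by which
    b lags a, i.e. the lag maximising sum(a[i] * b[i + lag])."""
    n = len(a)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-(n - 1), n):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += a[i] * b[j]   # overlap score at this relative shift
        if s > best_score:
            best_score, best_lag = s, lag
    return best_lag

# b is a copy of a delayed by 3 samples, so the correlation peaks at lag = 3.
a = [0.0, 1.0, 0.5, -0.2, 0.0, 0.0, 0.0, 0.0]
b = [0.0, 0.0, 0.0, 0.0, 1.0, 0.5, -0.2, 0.0]
lag = xcorr_lag(a, b)
```

Applied to the composite (direct + crosstalk) signals at each ear, a stable peak in this function would be exactly the "stable time-of-arrival difference" conjectured above.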

If so, I have no idea how someone could be so brilliant as to work that out in one shot like Blumlein seemingly did. And yet he didn't get a mention in a documentary on the history of audio that was linked to a few weeks back...

Mr. Greene finishes with:
The failings of the spaced-omni technique shown here must not be taken to mean that fine recordings cannot be made with this and other spaced-microphone techniques... However, the test does show first why a third, center-fill microphone is often needed, and it shows also that a certain restriction in our expectations of image focus and accuracy is appropriate when we listen to spaced-mike recordings. This restriction is especially relevant in terms of equipment evaluation; we cannot expect equipment to reproduce what is not on the record.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,793
Likes
37,701
An article quite relevant to my previous (possibly baffling?) post regarding microphone techniques and audio reproduction.
http://www.regonaudio.com/MICROPHONE THEORY word.htm

R.E. Greene talks about a recording by Boyk that tests various stereo microphone techniques for their absolute accuracy in reproducing source direction when listened to over loudspeakers.

So it seems that the original Blumlein patent really does correspond with theoretical perfection - in one important sense at least.

The question is: has he compared the overall composite signals that arrive at each ear, rather than just the theoretical timing of absolute first arrival? I believe that the brilliance of the Blumlein technique could be that the crosstalk combined with the direct sound at each ear creates a composite signal that - if you were to cross-correlate the signals like a human brain seems to - would give a stable time-of-arrival difference. It may even be better than that: it might even be substantially stable as the listener turns their head or even moves around.

If so, I have no idea how someone could be so brilliant as to work that out in one shot like Blumlein seemingly did. And yet he didn't get a mention in a documentary on the history of audio that was linked to a few weeks back...

Mr. Greene finishes with:

Yes, various microphone techniques have been experimentally tested with quite some rigor several different times. Blumlein is the most accurate in a high-fidelity sense of direction. The fly in the ointment is that it calls for your speakers to be at an angle of 90 degrees. In at least one test Blumlein was still best with speakers at 60 degrees, though other techniques were judged to be very close. Basically, taking into account how human directional hearing works, Alan Blumlein developed a simple technique that is rather accurate for directionality.

I find it interesting that no one has a theoretical basis upon which to base 5-channel surround recording. Various tests have been done on the multitude of surround recording methods, with results not being consistent as to how one gets the best fidelity in multi-channel. Stereo recording was initially based upon how human hearing works to create the illusion. Multi-channel, not so much; it is just an interesting carry-over from the video world.

Finding Blumlein recordings is difficult, as they are rarely done. For every one of those there are several that employ a Blumlein pair flanked by spaced omnis, and even those are extremely uncommon. Until a few years ago, Chesky recordings were done with just a Blumlein pair. They are done quasi-binaurally now.

At one time I had tall panel speakers at 90 degrees, with room for the sound leaving the rear side to travel 5 feet and bounce at an angle off the rear wall, with 6 feet of width for that reflection to travel before hitting a side wall. So, all things considered, at least the horizontal directional reflections would require 24 milliseconds to reach your ear from the rear panel. I don't know if this use of bidirectional speakers playing recordings made with bidirectional microphones would have enhanced fidelity vs cone speakers. I do know it sounded pretty wonderful with Chesky recordings.
 

maverickronin

Major Contributor
Forum Donor
Joined
Jul 19, 2018
Messages
2,527
Likes
3,311
Location
Midwest, USA
I find it interesting that no one has a theoretical basis upon which to base 5 channel surround recording. Various tests have been done on the multitude of surround recording methods with results not being consistent as to how one gets best fidelity in multi-channel. Stereo recording initially was based upon how human hearing works to create the illusion. Multi-channel not so much. Just an interesting effect from the video world.

I've always assumed it's because stereo setups as described above have smaller "sweet spots". You can't have a whole theater, or even a family sitting on a couch, be exactly in between two speakers angled at exactly 45°. Adding extra speakers in the directions sounds are supposed to come from makes your exact placement in between them less important.

The fact that this isn't as psychoacoustically exact may be indicated by the fact that Dolby keeps adding new channels every few years...
 

Kal Rubinson

Master Contributor
Industry Insider
Forum Donor
Joined
Mar 23, 2016
Messages
5,305
Likes
9,875
Location
NYC
I've always assumed it's because stereo setups as described above have smaller "sweet spots". You can't have a whole theater, or even a family sitting on a couch, be exactly in between two speakers angled at exactly 45°. Adding extra speakers in the directions sounds are supposed to come from makes your exact placement in between them less important.

The fact that this isn't as psychoacoustically exact may be indicated by the fact that Dolby keeps adding new channels every few years...
Perhaps but, in my experience, no one else in my entourage cares about it. I sit in the money spot and we are all happy.
 
OP
andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,408
An article quite relevant to my previous (possibly baffling?) post regarding microphone techniques and audio reproduction.
http://www.regonaudio.com/MICROPHONE THEORY word.htm

R.E. Greene talks about a recording by Boyk that tests various stereo microphone techniques for their absolute accuracy in reproducing source direction when listened to over loudspeakers.

So it seems that the original Blumlein patent really does correspond with theoretical perfection - in one important sense at least.

The question is: has he compared the overall composite signals that arrive at each ear, rather than just the theoretical timing of absolute first arrival? I believe that the brilliance of the Blumlein technique could be that the crosstalk combined with the direct sound at each ear creates a composite signal that - if you were to cross-correlate the signals like a human brain seems to - would give a stable time-of-arrival difference. It may even be better than that: it might even be substantially stable as the listener turns their head or even moves around.

If so, I have no idea how someone could be so brilliant as to work that out in one shot like Blumlein seemingly did. And yet he didn't get a mention in a documentary on the history of audio that was linked to a few weeks back...

Mr. Greene finishes with:
The Blumlein pair microphone arrangement converts direction into volume difference in the recording; the speakers & crossfeed convert volume difference to an effective time-of-arrival difference at the ears upon playback. Panpot stereo is actually proper Blumlein-encoded stereo.

This is a truly interesting direction to take the discussion. The first question I would love answered about the test procedure is: what was the frequency response of the click? It’s established that humans are very sensitive to interaural amplitude differences for source location in the high frequencies, while being more sensitive to interaural phase differences when locating sources in the lower frequencies (the transition range is thought to be around 1000Hz IIRC).
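The ~1000 Hz transition mentioned above falls out of simple geometry: once half a period of the tone is shorter than the largest possible interaural delay, interaural phase becomes ambiguous. A back-of-envelope sketch (straight-path assumption with ~7 in ear spacing; real heads add diffraction, shifting the numbers somewhat):

```python
d = 0.178            # ear spacing in metres (~7 inches)
c = 343.0            # speed of sound in air, m/s

itd_max = d / c      # largest possible interaural time difference, ~0.52 ms

# Above this frequency, half a period fits inside itd_max, so interaural
# phase can no longer identify the source side unambiguously.
f_ambiguous = 1.0 / (2.0 * itd_max)   # ~960 Hz
```

That this lands right around 1 kHz is consistent with the duplex theory figure quoted in the post.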

Further (again IIRC - I’m just on a train at present and have no time to check all this), due to the slower cycling of sinusoids as frequency decreases, our brain’s recognition of a low tone is slower than for a higher tone. And to distinguish an interaural phase difference at low frequencies requires similar degrees of additional time.

So the outcomes of this test may be simply explicable by the nature of the test signal chosen, i.e. a primarily high-frequency transient would favour Blumlein, since Blumlein creates the greatest interchannel amplitude difference. A held tone or a sample of a voice or instrument may have resulted in a rather different outcome.

That’s one possible take anyway...

(I’m no enemy of Blumlein btw - just speculating on what’s happening here.)
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,793
Likes
37,701
This is a truly interesting direction to take the discussion. The first question I would love answered about the test procedure is: what was the frequency response of the click? It’s established that humans are very sensitive to interaural amplitude differences for source location in the high frequencies, while being more sensitive to interaural phase differences when locating sources in the lower frequencies (the transition range is thought to be around 1000Hz IIRC).

Further (again IIRC - I’m just on a train at present and have no time to check all this), due to the slower cycling of sinusoids as frequency decreases, our brain’s recognition of a low tone is slower than for a higher tone. And to distinguish an interaural phase difference at low frequencies requires similar degrees of additional time.

So the outcomes of this test may be simply explicable by the nature of the test signal chosen, ie a primarily high frequency transient would favour Blumlein since Blumlein creates the greatest interchannel amplitude difference. A held tone or sample of a voice or instrument may have resulted in a rather different outcome.

That’s one possible take anyway...

(I’m no enemy of Blumlein btw - just speculating on what’s happening here.)


If done properly, and I would assume Boyk and Caltech students know how, the click would be an impulse containing all frequencies, or all that the speaker could reproduce. Some room-correction devices use clicks to measure speaker frequency response. They sound like short little high-frequency clicks or pops, but contain all frequencies.
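The "all frequencies" claim is easy to verify: the magnitude spectrum of a single-sample unit impulse is perfectly flat, which is exactly why clicks serve as broadband test signals. A quick check with NumPy:

```python
import numpy as np

# A unit impulse (one nonzero sample) excites every frequency equally:
# its magnitude spectrum is flat at 1.0 across all bins.
impulse = np.zeros(512)
impulse[0] = 1.0
spectrum = np.abs(np.fft.rfft(impulse))
```

In practice, the speaker's own response then shapes what of that flat spectrum actually reaches the room, which is the caveat in the post above.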

As to our hearing getting directional info more slowly at low frequencies, I don't think it works that way. At frequencies below 1500 Hz or so, the signal from the ear recreates the waveform to some extent; above that, it does something like a band analysis rather than sending the brain the waveform itself. Our acuity for timing differences is better below 1500 Hz. At higher frequencies we don't hear them for the most part.

Other tests using musicians have been done and similar results are obtained in positional accuracy of various microphone techniques.
 

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
634
However, going back to my OP, my point is not about what is desirable. My point was actually far more abstract, seeking to spark a discussion about the goals of audio recording and reproduction, which are often stated to be something along the lines of: "recreation of the original performance/acoustic event."
...

I was trying to demonstrate why, on a philosophical level, this goal is not achievable when, in recordings, we have sounds radiating from multiple sources within a reverberant space captured at a single point in space (the mic), and then reproduced out of a single source (or sources) in a listening space.

If you need applied evidence of this, think of the way an orchestra is typically mic'd for concert hall recordings. Normally, we don't simply use a Blumlein pair placed a head's width apart in the best seat in the house, even though it is the acoustic at precisely this point that we are trying to create in the recording. Most engineers agree that this is not the best way to "capture" the desired spatial cues.

Instead, we place mics in all sorts of very different locations (depending on the engineer). So we are not maintaining the hall's spatial and acoustic cues, and there is little acoustically natural about the placement of our mics. Quite the opposite: we are using creative microphone placement in an attempt to artificially generate these cues in a way that merely seems to reproduce the effect of being in the best seat in the house when played back on typical loudspeakers.

I totally agree with this approach by the way. It's a way of compromising for the inadequacies inherent in actually using the true spatial cues a listener in the hall would experience at the live performance (i.e. the spatial cues present in the best seat in the house).

So I think that at best, a fairly convincing and enjoyable, but very much artificial, effect is achieved.
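As an aside on the Blumlein pair mentioned above: a Blumlein pair is two coincident figure-8 mics with axes 90 degrees apart, and a figure-8's sensitivity falls off as the cosine of the angle off its axis. A minimal sketch (function name and sign convention are my own) of how such a pair encodes direction purely as a channel level difference:

```python
import math

def blumlein_gains(source_azimuth_deg):
    """Level picked up by each mic of a coincident Blumlein pair:
    two figure-8 mics at the same point, axes angled +/-45 degrees.
    Sensitivity of a figure-8 is cos(angle off its axis), so direction
    is encoded purely as a level difference between the two channels.
    Azimuth is measured from straight ahead, positive to the right."""
    phi = math.radians(source_azimuth_deg)
    left = math.cos(phi + math.pi / 4)   # left mic aimed 45 deg to the left
    right = math.cos(phi - math.pi / 4)  # right mic aimed 45 deg to the right
    return left, right

center = blumlein_gains(0.0)      # equal gains: phantom image dead center
hard_right = blumlein_gains(45.0) # on the right mic's axis: (~0, 1)
```

Note that this coincident arrangement captures only level differences, not the interaural time differences a listener's head would experience, which is one reason engineers reach for spaced or mixed arrays instead.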

I have several issues. First, in attempting to "recreate the original performance/event", I think we agree that mics don't just hear the performers or the reflected energy in isolation. Rather, they sample the sound field those sources create in the space, just as we do in the audience. But, yes, mics have directional pickup patterns that may not accurately match the pickup patterns of our ears. Omni mics, for example, are more omnidirectional than either of our ears is, and directional mics may reject sounds our ears would still detect. Binaural doesn't really solve this either, since it imposes a fixed dummy-head HRTF on the sound.

Are these big problems? We know recording engineers are well aware of these and other mic characteristics, like non-flat frequency response. So they artfully try to select the right mics for the purpose based on experience, testing and seat-of-the-pants judgement. Some may succeed to a greater extent than others, but we cannot really know for sure. We are lost in Toole's circle of confusion, able only to make a subjective judgement about the recording based on whatever vague criteria are locked in our heads. Did it leave some sonic cues out? Did it include sonic stimuli that would not have been heard live? Only in rare, egregious cases that somehow were not edited out can we tell for sure.

So, we are not off to a good start in terms of certainty. But, let's assume we can tell and be satisfied when we think the engineer has done a good job.

The other issue I tried to answer earlier is that I believe you are over-focused on the single points (the mic, the speaker) and may be ignoring the value of the arrays of mics and speakers used in recording and playback. We rarely listen any more to a single speaker conveying what a mono mic picked up. We listen to the sound field that an array of mics picked up from multiple points simultaneously (subject to placement time delays), played back through a corresponding array of two or more speakers.

Phantom imaging between the L and R speakers seems to me to do a pretty decent job of conveying whether a single performer was hard L, hard R or at any point in between. If it's a string quartet, we get a pretty good idea of where the 1st and 2nd violins, viola and cello sit in the soundstage while they play simultaneously. Things may get a bit more congested and complicated with a large orchestra, especially massed strings, but solo passages always seem to put the performers in the right places. And all this with two stereo speakers reproducing the output of two, but usually more, mics. If it is a good recording, the phantom imaging also conveys a sense of depth behind the plane of the front speakers.

So the whole is more than the sum of the two speakers, because that array can convey much more than just L or R: it can place many simultaneously playing instruments at points in between, as well as convey depth cues from the delays, reflections, etc. captured across the two channels. And this all works because we record and play back a fairly complete sample of the sound field created by the event, rather than the individual performers. It is not reproducing the event; it is reproducing a sample of the sound field created by the event, relying extensively on phantom imaging.
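The "points in between" behaviour is what a mixing engineer exploits with a pan pot. A minimal sketch (names are my own) of the standard constant-power pan law, which places a phantom image between two speakers purely by level difference while keeping total power constant:

```python
import math

def constant_power_pan(position):
    """Constant-power (sin/cos, -3 dB center) pan law.
    position: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left_gain, right_gain). L^2 + R^2 == 1 at every position,
    so perceived loudness stays roughly constant as the image moves."""
    angle = (position + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    return math.cos(angle), math.sin(angle)

center = constant_power_pan(0.0)     # both channels at ~0.707 (-3 dB)
hard_left = constant_power_pan(-1.0) # (1.0, 0.0)
```

Real recordings add time and spectral differences between the channels on top of this, which is where the depth impression largely comes from, but the level-difference mechanism alone already steers the phantom image convincingly.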

Does it sound artificial? Well, yes, stereo now sounds somewhat artificial to me, because I have heard a lot of discrete Mch reproduction that includes much more of the reflected hall acoustic that is an inseparable part of what I hear naturally live. I no longer wonder why recordings don't sound more like the real thing live; I now know why. And I don't think my Mch music sounds artificial at all. Stereo's is the more benign kind of artificiality: something is omitted, rather than something noticeable or unnatural being imposed.

Yes, I know this stuff seems almost trivially basic, and it's nothing really new to you or anyone else here. But I am going to this elementary level because I don't think I truly understand what you are seeking or your logic.
 