
How much audio processing does the brain do?

HemiRick

Active Member
Joined
Jan 2, 2020
Messages
133
Likes
150
It's well proven that our brains do a huge amount of processing on the visual images we perceive, i.e. what your eyes see is NOT what you see. For example, the image our eyes produce is inverted and the brain flips it. (Wear glasses that do this flip for a couple of days and your brain will flip the image again!)

My question is how much audio processing does the brain do? Is what our ears perceive what we actually think we hear?

Rick
 

audiophile

Active Member
Joined
Oct 7, 2019
Messages
177
Likes
140
Record your audio system by placing a microphone at the listening position. Listen to the recording, and compare to what you actually hear from the same position. There will be a huge difference, since the mike will capture absolutely everything, including all the wall reflections, but our brain filters that information out, and focuses on what’s important.
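A toy sketch of that experiment in code (my own illustrative numbers, not anything measured in a real room): model the direct sound plus one delayed, attenuated wall reflection, and the mic's capture comes out measurably different from the direct sound.

```python
# Toy simulation: the mic at the listening position sums the direct sound
# with every reflection; the numbers below are illustrative, not a real room.
import numpy as np

fs = 48_000                                  # sample rate, Hz
t = np.arange(fs) / fs                       # 1 second of time
direct = np.sin(2 * np.pi * 440 * t)         # direct sound: a 440 Hz tone

# Hypothetical first wall reflection: arrives 10 ms later, 6 dB down.
delay = int(0.010 * fs)
reflection = np.zeros_like(direct)
reflection[delay:] = 0.5 * direct[:-delay]

# The mic captures the sum, comb filtering and all; the brain largely
# suppresses the reflection (the precedence effect) and hears "through" it.
mic_capture = direct + reflection
```

Playing `direct` and `mic_capture` back to back makes the comb-filter coloration obvious in a way it rarely is when you are sitting in the room itself.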

Another test: listen to any song and notice that you are unable to pay attention to hi-hats, vocals and bass in any detail at the same time. That's because the bandwidth of our conscious perception is very limited; it can't analyze all the information captured by the ears. You may also notice that it is very difficult to keep your focus only on the hi-hats, for example, for the whole duration of a song, because our brain finds repeating information boring; it wants to hear new stuff all the time.

Ears are the EQ and brain is the most powerful DSP that works non-stop.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
There was an experiment in audio similar to the inverted-glasses one. Researchers were investigating directional perception, especially in the vertical plane, and how it related to the shape of the pinna. After testing volunteers' directional perception, they fitted the volunteers with molded ear inserts. On retesting, directional perception was greatly altered and very poor. The volunteers then wore the inserts for a few weeks.

Upon retesting, they found directional perception was just as good as in the initial testing, even while wearing the inserts: the brain had learned to process the new ear shape correctly. The surprising part was that after removal of the inserts, directional perception was disrupted for only a short period of time (I seem to recall a few hours). Directional acuity returned very quickly, as if a template of the previous ear shape had been stored somewhere.
 

Kal Rubinson

Master Contributor
Industry Insider
Forum Donor
Joined
Mar 23, 2016
Messages
5,273
Likes
9,790
Location
NYC
It's well proven that our brains do a huge amount of processing on the visual images we perceive, i.e. what your eyes see is NOT what you see. For example, the image our eyes produce is inverted and the brain flips it. (Wear glasses that do this flip for a couple of days and your brain will flip the image again!)

My question is how much audio processing does the brain do? Is what our ears perceive what we actually think we hear?

Rick
In principle, it is similar. Both systems analyze the input and process selected features/parameters, which are then recombined, in the context of prior experience, into personally meaningful percepts. As with vision and the other senses, this process is managed through a hierarchy of brain locations, which also distribute the information more widely.
 

daftcombo

Major Contributor
Forum Donor
Joined
Feb 5, 2019
Messages
3,687
Likes
4,068
A big difference is that your speakers won't stop playing a song because they don't like it. You and your brain can decide to push "stop".
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,494
Odd question, as if to imply that the perception of reality, or sensations, could possibly occur without a brain...
 

Wes

Major Contributor
Forum Donor
Joined
Dec 5, 2019
Messages
3,843
Likes
3,788
The OP's first question is not well defined.

The answer to the second question is no.
 

Doodski

Grand Contributor
Forum Donor
Joined
Dec 9, 2019
Messages
20,752
Likes
20,766
Location
Canada
Record your audio system by placing a microphone at the listening position. Listen to the recording, and compare to what you actually hear from the same position. There will be a huge difference, since the mike will capture absolutely everything, including all the wall reflections, but our brain filters that information out, and focuses on what’s important.

Another test: listen to any song and notice that you are unable to pay attention to hi-hats, vocals and bass in any detail at the same time. That's because the bandwidth of our conscious perception is very limited; it can't analyze all the information captured by the ears. You may also notice that it is very difficult to keep your focus only on the hi-hats, for example, for the whole duration of a song, because our brain finds repeating information boring; it wants to hear new stuff all the time.

Ears are the EQ and brain is the most powerful DSP that works non-stop.
Funny you mentioned that, because I'm interested in the Sennheiser claims about this. Is this related?
QUOTE>
"The enhanced sound reproduction of the HD 800 S is achieved through the addition of the innovative absorber technology that was pioneered in the Sennheiser IE 800 – a breakthrough that preserved the audibility of very high frequency sounds by eliminating a phenomenon known as the “masking effect”, where the human ear struggles to hear frequencies of sound when lower frequencies of a higher volume occur at the same time. By absorbing the energy of the resonance, Sennheiser’s patented absorber technology prevents any unwanted peaks and allows all frequency components – even the finest nuances – in the music material to become audible."
https://en-ca.sennheiser.com/high-resolution-headphones-3d-audio-hd-800-s
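The masking effect the marketing copy refers to is easy to demonstrate with a toy model. The sketch below is my own crude simplification (the constants are illustrative, not from Sennheiser or from any calibrated psychoacoustic model): a loud tone raises the audibility threshold for nearby frequencies, more strongly above the masker than below it.

```python
# Crude simultaneous-masking sketch; all constants are illustrative only.
import math

def masked_threshold_db(masker_hz, masker_db, probe_hz):
    """Level a probe tone must exceed to be heard next to the masker.
    Falls off linearly in dB per octave of separation; masking spreads
    more readily upward in frequency than downward."""
    octaves = math.log2(probe_hz / masker_hz)
    slope = 12.0 if octaves >= 0 else 27.0   # dB per octave
    return masker_db - 10.0 - slope * abs(octaves)

# An 80 dB masker at 1 kHz: a 55 dB tone at 1.5 kHz sits below the
# roughly 63 dB masked threshold, so it goes unheard.
threshold = masked_threshold_db(1000, 80, 1500)
audible = 55 > threshold
```

Real psychoacoustic models (the kind used in lossy codecs) are far more elaborate, but the qualitative point stands: a resonance peak at one frequency can hide quieter content nearby.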
 
OP
HemiRick

HemiRick

Active Member
Joined
Jan 2, 2020
Messages
133
Likes
150
Odd question, as if to imply that the perception of reality, or sensations, could possibly occur without a brain...

Not implying that at all. A brain is required; the question is how much it modifies the signal presented to it by the ears. Then a secondary question follows: what are the results and effects of that modification? It seems this has been studied much more in the visual realm than in audio.
 

cjfrbw

Senior Member
Joined
Mar 5, 2018
Messages
410
Likes
472
I read some waggish comment from a scientist who stated that the organs of perception in humans aren't that great, but the processing power of the brain is awesome. I think he attributed 10 percent to information gathering and 90 percent to processing, although I have no idea how one would arrive at such a ratio.
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
15,891
Likes
35,912
Location
The Neitherlands
Funny you mentioned that, because I'm interested in the Sennheiser claims about this. Is this related?
QUOTE>
"The enhanced sound reproduction of the HD 800 S is achieved through the addition of the innovative absorber technology that was pioneered in the Sennheiser IE 800 – a breakthrough that preserved the audibility of very high frequency sounds by eliminating a phenomenon known as the “masking effect”, where the human ear struggles to hear frequencies of sound when lower frequencies of a higher volume occur at the same time. By absorbing the energy of the resonance, Sennheiser’s patented absorber technology prevents any unwanted peaks and allows all frequency components – even the finest nuances – in the music material to become audible."
https://en-ca.sennheiser.com/high-resolution-headphones-3d-audio-hd-800-s

Sennheiser had a problem with the 6kHz peak and created a solution. It is always nice to have a BS commercial story at hand.
It is just a fix for the effects of the hole in the ring driver.
 

Hipper

Addicted to Fun and Learning
Joined
Jun 16, 2019
Messages
753
Likes
625
Location
Herts., England
Funny you mentioned that, because I'm interested in the Sennheiser claims about this. Is this related?
QUOTE>
"The enhanced sound reproduction of the HD 800 S is achieved through the addition of the innovative absorber technology that was pioneered in the Sennheiser IE 800 – a breakthrough that preserved the audibility of very high frequency sounds by eliminating a phenomenon known as the “masking effect”, where the human ear struggles to hear frequencies of sound when lower frequencies of a higher volume occur at the same time. By absorbing the energy of the resonance, Sennheiser’s patented absorber technology prevents any unwanted peaks and allows all frequency components – even the finest nuances – in the music material to become audible."
https://en-ca.sennheiser.com/high-resolution-headphones-3d-audio-hd-800-s

Sounds similar to employing bass traps in a room. Good use of bass traps not only improves bass but allows the mids and highs to be 'revealed'.

The brain is plastic in that it adapts to new situations: it learns. I've always assumed this phenomenon is behind the concept of 'burn-in', where it is said that new gear takes time to adapt to being played, whereas in fact it is our brain taking time to adapt to the new gear. The brain is also not consistent or infallible (think of optical illusions), and that is why we need double-blind tests etc.

Take the phantom image. It is not a real physical image of the sound in the room but a creation of the brain, arising from the two sources of sound. If you place some physical object where the sound appears to be, it won't affect what you hear. Actually, it might, in the sense that you see the object and that affects your overall perception; that would explain why watching a video of a band playing can sound pretty good to us even when the television's speakers are not that capable.
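The phantom image can be sketched numerically. This is a generic constant-power pan-law sketch of my own, not anything specific from this thread: amplitude panning puts the same signal into both real speakers at different levels, and the brain places a single source between them.

```python
# Constant-power amplitude panning: the 'image' is nothing but a level
# difference between two real sources; the brain does the rest.
import numpy as np

def pan(mono, position):
    """position in [-1, 1]: -1 hard left, 0 phantom center, +1 hard right."""
    angle = (position + 1) * np.pi / 4       # map [-1, 1] onto [0, pi/2]
    left = np.cos(angle) * mono
    right = np.sin(angle) * mono
    return np.stack([left, right], axis=-1)  # (samples, 2) stereo

tone = np.sin(2 * np.pi * 440 * np.arange(4800) / 48_000)
center = pan(tone, 0.0)   # equal level both sides -> image between the speakers
```

With position 0 both channels carry identical signals, yet nothing physical sits between the speakers; mute one channel and the 'center' source snaps to the remaining speaker.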

Stereo images are similar: when two almost identical images are lined up correctly, a 3D image suddenly appears in view. It's quite remarkable looking through a stereoscope at WW2 warship photographs; the masts come up and poke you in the eye!

There was an interesting link posted on another thread by Pozz that you might like:


You probably need to look at a book on Psychoacoustics. I have this:

Acoustics and Psychoacoustics - David Howard

It's tough but insightful. The third edition, which I have, came with a CD full of interesting examples. I'm not sure if one accompanies the later editions.
 

Frank Dernie

Master Contributor
Forum Donor
Joined
Mar 24, 2016
Messages
6,445
Likes
15,781
Location
Oxfordshire
Record your audio system by placing a microphone at the listening position. Listen to the recording, and compare to what you actually hear from the same position. There will be a huge difference, since the mike will capture absolutely everything, including all the wall reflections, but our brain filters that information out, and focuses on what’s important.

Another test: listen to any song and notice that you are unable to pay attention to hi-hats, vocals and bass in any detail at the same time. That's because the bandwidth of our conscious perception is very limited; it can't analyze all the information captured by the ears. You may also notice that it is very difficult to keep your focus only on the hi-hats, for example, for the whole duration of a song, because our brain finds repeating information boring; it wants to hear new stuff all the time.

Ears are the EQ and brain is the most powerful DSP that works non-stop.
Quite so.
It shocked me how much moving a microphone changed the recording when I first started making recordings in the 1960s.
Moving my head to where the microphone pickup sounded different didn't change the sound I was perceiving by anywhere near the amount of the recording change.
In fact my favourites (sounding most realistic to me) are ones where all the adjustments to balance were done using microphone position with no subsequent dicking at all.
I suspect that, in the same way our brain incorporates the effect of our actual pinnae, it quickly maps the room we have just entered and compensates for most of its shortcomings.
Yes moving a microphone to different positions changes the output we hear on a recording or measure, but no we don't notice it anywhere near as much.
We hear through the room automatically IME to a very large extent.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
One suggestion for choosing a microphone position is to plug one ear with a finger and listen with only the other. At least some of the brain's processing of the room is reduced that way, giving you a better idea of what the mic position is going to sound like.
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,494
Not implying that at all. A brain is required; the question is how much it modifies the signal presented to it by the ears. Then a secondary question follows: what are the results and effects of that modification? It seems this has been studied much more in the visual realm than in audio.

I'm not understanding what metrics we're supposed to be providing simply when you say: "how much does it modify the signal presented to it by your ears".

The signal doesn't seem to be modified, as there isn't a signal. It's not like you're implanting some sort of diodes directly to your brain and sending electrical impulses in some algorithmic state where you're now hearing something that isn't presenting in soundwaves to the external world.
 

Kal Rubinson

Master Contributor
Industry Insider
Forum Donor
Joined
Mar 23, 2016
Messages
5,273
Likes
9,790
Location
NYC
Not implying that at all. A brain is required; the question is how much it modifies the signal presented to it by the ears.
The ears modify the signal before it even gets to the brain. The process is based on feature detection, followed by association and perception.
Then a secondary question follows: what are the results and effects of that modification?
Different signalling patterns in different parts of the brain.
It seems this has been studied much more in the visual realm than in audio.
Perhaps, and there are many possible reasons for this. However, the perception of all sensory modalities, including auditory and visual, seems to follow parallel processes, and research on each informs the others.
 

STUDIO51

Member
Joined
Jun 19, 2019
Messages
93
Likes
256
Location
Seoul Republic of KOREA
The brain's processing is as important for the information coming into the ear as it is for the eye: for example, the HRTF (head-related transfer function).

Simply explained: each person has different factors that affect the sound coming into the ear, such as the shape of the ears, ear-to-ear distance, head size, shoulders, etc. These make the same sound arrive differently at each person's ears, yet the individual's brain compensates for this effect by counteracting it.

People can also determine the approximate location of a sound with just a single ear. The brain detects direction from the changes in high-frequency content that occur as the direction changes.
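One of those geometric factors, ear-to-ear distance, can be put into rough numbers with the classic Woodworth spherical-head approximation for the interaural time difference (ITD). The head radius below is a textbook average, not a measured value.

```python
# Woodworth spherical-head approximation for interaural time difference (ITD).
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """ITD for a source at azimuth_deg (0 = straight ahead, 90 = full side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source at 90 degrees reaches the near ear roughly 0.66 ms earlier,
# one of the binaural cues the brain combines with the pinna's spectral cues.
side_itd = itd_seconds(90)
```

Because head sizes differ, the same physical ITD maps to slightly different perceived angles per person, which is part of why the brain has to calibrate localization individually.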
 

Cahudson42

Major Contributor
Joined
Sep 21, 2019
Messages
1,083
Likes
1,556
I assume there have been experiments that have 'mapped' relatively low-bandwidth auditory input (say, Beethoven's 7th) to higher-bandwidth visual perception?
 

Ron Texas

Master Contributor
Joined
Jun 10, 2018
Messages
6,078
Likes
8,914
None whatsoever. Garbage in, garbage out...LOL
 