
Psychoacoustics is not the science of hearing.

It is important to understand that psychoacoustics and audiology are only sciences related to the one science of hearing proper - neurophysiology.

Physics/acoustics is the only science that describes the physical stimuli at the eardrum.
Audiology is the only science that describes the sensory response to those physical stimuli.
The brain processes the input signals sent from the ear's sensors and creates the different forms of hearing.
Neurophysiology is the only science that describes the process of hearing.
Neuropsychology is one of the sciences that describe the responses to hearing.

Unfortunately, research studies rarely make this clear differentiation between the four sciences and their unique characteristics. Over the decades, this has created unfortunate confusion about how we hear.
Thank you for a very interesting post.

If we say that ASR's domain is sound reproduction technology, could we postulate that within that domain it is enough to use the stimuli at the eardrum as the measure of the outcome/performance of the evaluated technology, and argue that the effects related to audiology, and to how the brain processes the input signals, can be set aside in this context? If not, what should the criteria be for "optimal performance of the sound reproduction system"?

Just to be clear: I'm definitely not saying that we should restrict the discussion to the physical stimuli alone. I'm just curious: if we consider the holistic picture of how we hear, including neuropsychology, how would that change (if at all) our evaluation of the technology we are interested in?

Edit: I just noticed that Amir sort of answered this just before I posted my message...
 
The brain is the hub where almost all afferent (incoming) impulses from the body's sensors are converted on the fly into responses through efferent (outgoing) nerves, mostly as unconscious, automatic responses, with only a few conscious ones.
The brain is the master that creates new response algorithms based on previous experiences and new stimuli in their context. Physical sound stimulates the ear, which stimulates the brain. The brain notes every sound stimulus and usually chooses to do nothing. Certain sound stimuli trigger some algorithm and usually create an unconscious efferent response. In a few cases, a conscious response is created: we consciously hear a sound that we can describe in a psychological response.
 
Psychoacoustics primarily deals with the "what," not the "how" or "why." Experiments are designed and conducted to determine the audibility thresholds of various stimuli. What is observed is then used to design systems such as lossy audio codecs. In that regard, it is not important to know why something is happening. The famous Fletcher-Munson curves were created using this type of testing.
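For illustration, here is a minimal sketch of the kind of adaptive 2-down/1-up staircase used in such threshold experiments. It is written in Python against a simulated listener; the listener model, step size, and starting level are illustrative assumptions, not any particular published protocol:

```python
import random

def simulated_listener(level_db, true_threshold_db=20.0):
    """Toy listener: probability of detecting the tone rises with level."""
    p_detect = 1.0 / (1.0 + 10 ** ((true_threshold_db - level_db) / 4.0))
    return random.random() < p_detect

def staircase(start_db=40.0, step_db=4.0, reversals_needed=8):
    level = start_db
    streak = 0        # consecutive correct detections
    last_dir = 0      # +1 = last step was up, -1 = down, 0 = no step yet
    reversals = []
    while len(reversals) < reversals_needed:
        if simulated_listener(level):
            streak += 1
            if streak < 2:
                continue              # 2-down rule: two hits before stepping down
            streak, new_dir = 0, -1
        else:
            streak, new_dir = 0, +1   # 1-up rule: any miss steps the level up
        if last_dir != 0 and new_dir != last_dir:
            reversals.append(level)   # record levels where direction reverses
        last_dir = new_dir
        level += new_dir * step_db
    tail = reversals[-6:]             # average late reversals (~70.7% point)
    return sum(tail) / len(tail)

print(f"Estimated detection threshold: {staircase():.1f} dB")
```

The staircase converges on the stimulus level the listener detects about 70.7% of the time, which is exactly the "what" being measured; nothing in the procedure needs to know why the listener detects it.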

There is some extension of the field into the ear and how it responds to acoustic excitation. In that regard, it has a bit of overlap with neurology, and there are some guesses as to the cortex being involved (e.g., the cocktail party effect or auditory stream analysis). But again, this is not deep analysis.

Put succinctly, psychoacoustics gives us the knowledge to develop audio systems. It is a field of science, but one distinct from the medical and broader science professions.
While the above is not completely incorrect, it gives only a partial representation of the discipline of psychoacoustics. More broadly, psychoacoustics is the discipline that investigates the relationship between the acoustic properties of physical sound waves and the perceptual (sometimes also called "sensory") properties of what we hear. The discipline therefore deals not only with detection, but also with discrimination and categorization of the perceptual characteristics of auditory objects/events.

A review of the contents of books by Brian Moore (An Introduction to the Psychology of Hearing), Chris Plack (The Sense of Hearing), or William Yost (Fundamentals of Hearing: An Introduction), to name just a few, would give interested individuals a good way to appreciate the scope of the discipline. A review of these, and similar, books would also show that researchers who study psychoacoustics do use models and explanations, and do often (though not necessarily) use physiological knowledge to explain how one goes from physical sound waves to perceived sounds. (Note that people often use the word "sound" to refer to either physical or perceptual phenomena; the ambiguous use of this word can be confusing.)
As also mentioned by Amir, the field has connections to, and applications in, other disciplines that investigate hearing, such as auditory neurophysiology, audiology, sound engineering and reproduction, and communication, to name some key related disciplines.
 
I asked ChatGPT.

The science of hearing is?

ChatGPT
The science of hearing encompasses several interdisciplinary fields that explore how sound is perceived, processed, and interpreted by the auditory system. Here’s a breakdown of the key areas involved in the science of hearing:

1. Physics of Sound
Sound Waves: Sound is a mechanical wave that travels through a medium (air, water, or solids) as vibrations. Key properties include frequency (pitch), amplitude (loudness), and wavelength.
Acoustics: The study of sound in terms of how it travels, reflects, and interacts with different environments. This includes understanding sound propagation, absorption, reflection, and diffraction.

2. Neurophysiology of Hearing
Auditory Transduction: The process by which sound waves are converted into neural signals. This occurs in the cochlea of the inner ear, where hair cells transduce mechanical vibrations into electrical signals.
Auditory Pathways: The neural pathways that carry auditory information from the cochlea to the brain, including the auditory nerve, brainstem nuclei, and auditory cortex.
Signal Processing: How the brain processes and interprets auditory information, including the encoding of sound frequency, intensity, and timing.

3. Neuropsychology of Hearing
Auditory Perception: How the brain interprets and makes sense of auditory information, including the perception of pitch, loudness, timbre, and spatial location of sounds.
Cognitive Processing: How auditory information is integrated with other sensory inputs and how it affects cognitive functions such as attention, memory, and language.
Hearing Disorders: The impact of hearing impairments on cognitive and emotional functions, and how the brain adapts to changes in auditory input.

4. Auditory Psychophysics
Perceptual Thresholds: Measurement of the smallest detectable sound level (absolute threshold) and the smallest detectable difference between sounds (difference threshold).
Loudness and Pitch: Studies on how variations in sound intensity and frequency affect our perception of loudness and pitch.

5. Audiology
Hearing Assessment: Methods and tools used to evaluate hearing ability, including audiometry, tympanometry, and otoacoustic emissions.
Hearing Aids and Cochlear Implants: Design, fitting, and use of devices that assist with hearing loss, including technology and its impact on sound quality and perception.

6. Acoustics and Room Acoustics
Environmental Acoustics: Study of how sound behaves in different environments, including the effects of room size, shape, and materials on sound quality.
Acoustic Treatment: Methods for controlling sound reflections, absorption, and diffusion in rooms to optimize listening conditions.

7. Psychoacoustics
Sound Perception: How psychological processes influence the perception of sound, including how we interpret complex auditory scenes and the effects of context and experience on auditory perception.

8. Ethology and Evolutionary Biology
Animal Hearing: Comparative studies on how different species perceive and use sound, providing insights into the evolutionary development of auditory systems.

Summary
The science of hearing is a multidisciplinary field that integrates aspects of physics, neurophysiology, neuropsychology, audiology, psychophysics, and acoustics. It involves studying how sound waves are generated, transmitted, and perceived, and how these processes are encoded and interpreted by the auditory system and brain. Understanding hearing from these various perspectives helps in developing technologies to aid those with hearing impairments, improving environmental acoustics, and advancing knowledge of auditory processing and perception.


The science of hearing is highly interdisciplinary. ChatGPT's response was vague and illogical: rather a listing of findings from various texts without logical analysis.

Logically, there are three main sciences when it comes to hearing, namely physics, neurophysiology and neuropsychology. The other sciences are subsets of these three overall sciences.

ChatGPT is often criticized for lacking analytical capability, which is confirmed here.
 
Something came to my mind regarding 4) and 7) and the subject of blind testing.
Can blind people hear more than the sighted? A famous example in HiFi was Angus McKenzie, who was an expert on tape recording. I remember his many reviews of cassette tapes and cassette decks, his excellent article on the Decca Ring recording, and his review of the ESL63. He was an early advocate of the superiority of digital recording, as he was greatly irritated by the modulation noise of magnetic tape. I wonder if he heard differences in equipment of the kind that would generally be denied on ASR?
 
I wonder if he heard differences in equipment of the kind that would generally be denied on ASR?

If he didn't use controls, of course there would be doubts. No one is immune to bias.
 
If he didn't use controls, of course there would be doubts. No one is immune to bias.

The question of whether blind people can "hear more" than sighted individuals is complex and involves several factors, including sensory compensation and individual differences in perception.
  1. Sensory Compensation: Some studies suggest that, because blind individuals rely more on their other senses (particularly hearing), they may develop enhanced abilities in those senses. This phenomenon is often attributed to neuroplasticity, where the brain adapts to the loss of one sense by enhancing the others. For example, blind individuals may have better auditory skills, including the ability to detect slight variations in sound, localize sounds, or even interpret auditory information more effectively.
  2. Training and Experience: Blind individuals may also have training and experience that allow them to use their auditory senses more effectively. Techniques developed in orientation and mobility training often rely heavily on sound, which can lead to improved listening skills.
  3. Perception of Sound: Blind individuals may perceive sound differently, often focusing on nuances and details that sighted individuals may overlook. This does not necessarily mean they hear more sounds, but rather that their interpretation of sounds can be more acute.
  4. Individual Variability: Just like sighted individuals, hearing abilities among blind individuals can vary widely due to genetics, environment, and personal experiences.
In summary, while many blind people may develop enhanced auditory skills, it is not correct to say universally that they hear "more" than sighted individuals. Their abilities may be different, and often such comparisons depend on advanced training and experience rather than an inherent advantage in hearing.
 
Blind people can certainly train their other senses to be more sensitive than average. The next time you go into a lift, look for buttons with Braille lettering. Touch them and see if you can tell what the pattern of dots is. Then watch this video of a blind person reading Harry Potter and be amazed:


All senses can be trained. I have many other examples.
 
Echolocation


Blind people's echolocation reveals the importance of reflections in rooms. Blind people can use echolocation to cycle in traffic. Higher frequencies are crucial for creating a sound image in real time. Through brain plasticity, areas of the brain that are normally used for visual impressions in the sighted are instead used to create a sequential 3D sound image of the surroundings in blind people.


Just like bats, both sighted and blind humans can use echolocation.


The importance of correct reflections is underestimated.
Genelec's position is that in sound studios, reflections (above 1000 Hz), mainly lateral ones, must not arrive earlier than 20 ms after the direct sound. Reflections arriving later than 20 ms, typically attenuated by approx. 8 dB, add a positive spatial effect. Reflections arriving earlier than 20 ms and with a level greater than -8 dB have a negative spatial effect: such short reflections add masking and perceptually distorting effects.
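For concreteness, here is a minimal sketch (Python) of checking a single reflection against this 20 ms / -8 dB rule of thumb. The geometry, the speed of sound, and the panel absorption parameter are illustrative assumptions, not measured values:

```python
from math import log10

SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

def reflection_report(direct_m, reflected_m, panel_absorption_db=0.0):
    """Delay and level of one reflection, relative to the direct sound."""
    delay_ms = (reflected_m - direct_m) / SPEED_OF_SOUND * 1000.0
    # Level difference from 1/r distance loss, plus any absorption
    # at the reflecting surface (both in dB relative to the direct sound).
    level_db = 20.0 * log10(direct_m / reflected_m) - panel_absorption_db
    acceptable = delay_ms >= 20.0 or level_db <= -8.0
    return delay_ms, level_db, acceptable

# Example: 2.5 m direct path, 6.0 m path via an untreated side wall.
delay, level, ok = reflection_report(2.5, 6.0)
print(f"delay {delay:.1f} ms, level {level:+.1f} dB -> "
      f"{'fine' if ok else 'problematic'}")
```

On this simplified model the example side-wall reflection arrives after only about 10 ms at about -7.6 dB, so it fails both criteria (narrowly on level); in practice this is why absorption at the reflection point, or more distance to the wall, is usually needed.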

Echolocation works best within about 3 m for trained sighted people and slightly further away for blind people.
Beyond about 20 ms, echolocation has largely ceased. Around 50 ms, a reflection of speech is perceived as a new, separate sound - an echo. This means that in the interval between 20 and 50 ms, a reflection is experienced as a positive addition to the direct sound, although in this range there is no exact spatial perception of the room as there is in echolocation.
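As a quick sanity check on these figures, here are a few lines converting between delay and path length, assuming a speed of sound of 343 m/s:

```python
c = 343.0  # m/s, assumed speed of sound
for label, path_m in [("3 m object, round trip", 6.0),
                      ("20 ms path difference", 0.020 * c),
                      ("50 ms path difference", 0.050 * c)]:
    print(f"{label}: {path_m:5.1f} m  <->  {path_m / c * 1000:4.1f} ms")
```

An object 3 m away returns its echo after about 17.5 ms, which fits inside the ~20 ms echolocation window, while the 20-50 ms interval corresponds to path differences of roughly 7-17 m.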

The listening tests for Toole/Olive's spinorama in Harman's special listening room use a lateral reflection delay of about 20 ms and an attenuation of about 8 dB.
This is not a coincidence: Toole and Olive published a review of the literature in this area and carried out their own studies, which confirmed the optimal timing and attenuation of reflections in rooms.
Unfortunately, few rooms meet these criteria.
 
Something came to my mind regarding 4) and 7) and the subject of blind testing.
Can blind people hear more than the sighted? A famous example in HiFi was Angus McKenzie, who was an expert on tape recording. I remember his many reviews of cassette tapes and cassette decks, his excellent article on the Decca Ring recording, and his review of the ESL63. He was an early advocate of the superiority of digital recording, as he was greatly irritated by the modulation noise of magnetic tape. I wonder if he heard differences in equipment of the kind that would generally be denied on ASR?
I'd argue differences were easier to hear on cassette tapes, given their flawed nature and the challenges in setting things up correctly, i.e. the right signal levels, tailoring settings to the tape (if you could), and overall calibration and compatibility. As evidenced by commercially pre-recorded cassettes often being utter garbage.

I think the digital age eliminated the need for the expertise required to record compact cassettes well, a skill that made a *huge* and extremely audible difference with cassette players. I used to be very good at it, but I am super glad I don't have to deal with that anymore.
 