
Biomechanics of Human Hearing

AudioStudies

Addicted to Fun and Learning
Joined
May 3, 2020
Messages
718
Likes
400
Introduction

Biomechanics is the science of movement of a living body, drawing on both classical mechanics and biology; it investigates how muscles, bones, tendons, and ligaments work in unison to produce the mechanics of movement. The science of biomechanics also includes the study of various body functions, including blood circulation and renal function.

The application of engineering mechanics principles to living organisms is at the heart of biomechanics, and these principles can be studied at the cellular level, the tissue level, or the whole-joint level. Newtonian forces act on biological systems, and biomechanics therefore studies both kinetics (the forces that cause movement) and kinematics (the description of movement itself) attributable to such forces.

Human hearing is a sensory event, and human perception is therefore inherent in the hearing process. Not only the ears but also the nerves and brain take part in how humans perceive sound (interpret sound waves). The cochlea, a component of the inner ear, contains the cells responsible for converting sound into the electrical impulses that allow for human perception.

The ear is the organ of hearing and, in mammals (including humans), also the organ of balance. The human ear is a complex biomechanical system consisting of three distinct entities:
  • outer ear: the pinna (visible part of the ear) and the ear canal, which focus sound energy on the eardrum;
  • middle ear: three ossicles (very small bones) within a small cavity (the tympanic cavity) that transmit vibrations from the eardrum and amplify the sound waves directed to the inner ear;
  • inner ear: a thin diaphragm (the oval window) and a maze of tubes and passages (the labyrinth) containing the vestibular system and the cochlea, a transducer that converts mechanical sound energy into electrical impulses.
Humans (and other mammals) have a coiled form of cochlea often called a mammalian cochlea. The outer hair cells of a mammalian cochlea provide enhanced sensitivity and frequency resolution. The auditory nerve is a bundle of nerve fibers that carries information between the cochlea and the brain. Hair cells within the cochlea connect to the auditory nerve; different hair cells are set in motion depending on the nature of the movements within the cochlear fluid. The brain analyzes the data received from the ears via these auditory nerve fibers, and it performs best when it receives good information from healthy ears.

Tympanic Membrane

The tympanic membrane, also called the eardrum, is the thin, semi-transparent, three-layered membrane in the human ear that receives vibrations (sound) from the outer ear and transmits them to the auditory ossicles within the tympanic cavity. The eardrum is cone-shaped, forms the boundary between the outer ear and the middle ear, and consists of three tissue layers:
  • outer cutaneous layer;
  • fibrous middle layer;
  • mucous membrane on the innermost surface.
The tympanic membrane, most of which is under tension, exhibits piston-like motion in the sound transmission process, passing sound pressure waves on toward the cochlea; at higher frequencies, however, its motion is more complicated than simple piston-like behavior.

Techniques such as holography (recording of wave interference patterns via diffraction) have produced three-dimensional light fields with images depicting the complex vibration patterns of the tympanic membrane at high frequencies.

Young’s modulus (modulus of elasticity), the ratio of tensile stress to tensile strain, measures a material’s resistance to changes in length under lengthwise compression or tension. The bending stiffness of the eardrum depends on both its Young’s modulus and its thickness.

Basilar Membrane

The mammalian cochlea has a whorled structure, like the shell of a snail, and contains the receptors that transduce mechanical waves into electrical signals. The basilar membrane, the primary mechanical element of the inner ear, runs the length of the cochlea, curling toward its center, and serves as a mechanical frequency analyzer. Along the basilar membrane, mass and stiffness vary with position along its length. The vibration patterns of the basilar membrane separate incoming sound into component frequencies that activate different cochlear regions.

The mechanical properties of the basilar membrane vary along its length: it is thicker, narrower, and taut at the base (where the cochlea is largest), and thinner, broader, and less taut near the apex of the whorl (where the cochlea is smallest).
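As a rough illustration of this place-to-frequency mapping, Greenwood's (1990) fit for the human cochlea can be sketched in Python. The constants are Greenwood's published values; the sketch is illustrative, not a model of any particular ear:

```python
def greenwood_frequency(x):
    """Characteristic frequency (Hz) at relative position x along the
    human basilar membrane, with x = 0 at the apex and x = 1 at the base.
    Constants are Greenwood's (1990) human fit: F = A * (10**(a*x) - k)."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Apex maps to low frequencies, base to high frequencies,
# spanning roughly 20 Hz to 20 kHz over the membrane's length.
for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f}  ->  {greenwood_frequency(x):8.0f} Hz")
```

Note the exponential form: equal distances along the membrane correspond to roughly equal ratios of frequency, which matches the quasi-logarithmic resolution Kal describes below.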
 

Kal Rubinson

Master Contributor
Industry Insider
Forum Donor
Joined
Mar 23, 2016
Messages
5,294
Likes
9,851
Location
NYC
middle ear: formed by three ossicles (very small bones), within a small cavity (tympanic cavity), that transmit vibrations from the eardrum and amplify the sound waves directed to the inner ear;
They also perform an impedance transformation for efficiently conveying the mechanical energy from air to fluid (endolymph) media.
The basilar membrane is the primary mechanical element of the inner ear that serves as a mechanical analyzer within the length of the cochlea, curling toward the center.
Note that it is non-linear with poorer frequency/map resolution at lower frequencies. The compensation for this is that the lower frequencies can be directly coded into pulse-trains. That mechanism cannot work for higher frequencies because neurons cannot fire fast enough. Thus, the higher resolution of the basilar membrane mapping serves well there.
 
AudioStudies

I hope to have the next post ready, sometime next week. Very interesting stuff, but not that easy to write about.
 
AudioStudies

External Ear Acoustics

Introduction

When humans listen to sound, the external ear (pinna and the ear canal) acts as a coupler between sound waves and the middle ear. External ear acoustics is the branch of psychoacoustics that studies this coupling in both a theoretical and applied manner. The physical dimensions of the external ear vary among individuals; and therefore so do the response properties.

The characteristics of the sound reaching the middle ear are influenced by the aforementioned physical dimensions and response properties of the external ear. Understanding the coupling process of the external ear requires knowledge of basic acoustics and the transformation properties of the external ear.

Scientists take ear canal measurements to help determine the acoustic characteristics of the external ear. The most common measurement has been the difference between the sound pressure at the tympanic membrane (PT) and the sound pressure in the surrounding sound field (PSF).

A potential obstacle during ear canal measurement is the considerable variability in the acoustic response properties of the external ear across individuals. The study of external ear acoustics includes mathematical and physical models.

There are two primary factors governing the sound transformation to the tympanic membrane (eardrum):

• the human head, torso, and pinna flange, acting as diffracting bodies; and
• the concha and the ear canal, acting as resonators.

The pressure distribution in the ear canal across frequencies varies considerably; however, the most pronounced pressure gain occurs in the region of 2.0 to 4.0 kHz (10-15 dB), with a maximum increment of 17 to 22 dB at 3.0 kHz.

The relationship between the external ear and the pressure gain was extensively investigated by Shaw and Teranishi (1968 through 1974). Within their body of work, a physical model was developed that replicated the external ear with cylindrical cavities; thereby simulating the physical dimensions of the concha, pinna flange, and ear canal to evaluate resonance properties. Shaw (1974) presented a classic description of the separate contributions of these structures that also included the torso, neck and head.

The contribution of each structure depends on the interaction between the size of the structure and the wavelength of the sound. Sound pressure gains of 5 dB or less at frequencies below 1.0 kHz were attributed mostly to the torso, neck, and head. The ear canal was the single greatest contributor at frequencies between 1.0 kHz and 3.0 kHz, where the gain in sound pressure can be as high as 20 dB. At frequencies above 3.0 kHz, small structures such as the concha provide the pressure gain. The sound reaching the tympanic membrane reflects the cumulative effect of the pressure gains of the individual components and is reported to be 15 to 20 dB between 1.5 and 7.0 kHz.

The external ear filters the lower frequencies and amplifies mid frequencies to improve signal-to-noise ratio (SNR). The majority of the resonance effect has been attributed to the contribution of the ear canal; essentially a tube that is open at the concha region and closed at the other end (tympanic membrane). The air inside this tube (ear canal) acts as a resonating body. When the frequency of the sound matches the natural resonant frequency of the tube, the sound pressure is enhanced at that frequency.

The natural resonant frequency of the ear canal is four times its length; thus only one-quarter of the wave can fit into the ear canal during any given pass. Therefore, the ear canal is called a quarter-wave resonator. With respect to lower frequencies, the quarter-wave length is longer than the length of the ear canal; and the opposite is true for higher frequencies.

Theoretically, one can calculate the resonant frequency of the ear canal by the formula f = c/(4L), in which f = resonant frequency of the ear canal, c = velocity of sound in air (34,400 cm/s), and L = length of the ear canal (25 mm).
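Plugging in the numbers above, the quarter-wave calculation is a one-liner (using the stated textbook values of c = 34,400 cm/s and a 25 mm canal):

```python
def quarter_wave_resonance(canal_length_cm, c_cm_per_s=34_400):
    """Resonant frequency (Hz) of a tube closed at one end: f = c / (4 * L)."""
    return c_cm_per_s / (4 * canal_length_cm)

f = quarter_wave_resonance(2.5)   # 25 mm ear canal
print(f"{f:.0f} Hz")              # 3440 Hz, squarely in the 2-4 kHz gain region
```

The result falls right inside the 2.0 to 4.0 kHz region of maximum pressure gain described earlier, which is why the ear canal resonance dominates that band.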
 

JustAnandaDourEyedDude

Addicted to Fun and Learning
Joined
Apr 29, 2020
Messages
518
Likes
820
Location
USA
The natural resonant frequency of the ear canal is four times its length;
The natural resonant wavelength of the ear canal is four times its length;
[The frequency depends also on the speed of sound c, as seen in the formula in your following para.]
 
AudioStudies

External Ear Acoustics

Domains of Study

External ear acoustics is studied both in the time domain and the frequency domain. The head acts as a filter that transforms an incoming sound wave.

The human body transforms incoming sound in a direction-dependent manner. For any direction (θ, φ) relative to the head, consider a planar impulse δ(r − v(t − t₀)) that leaves a position a distance r away at initial time t₀. The head, ears, and shoulders deform this sound wave through a combination of shadowing and reflections. For an impulse from direction (θ, φ), the sound pressure measured at the ear is a function h(t; θ, φ). This function of three variables is known as the head-related impulse response (HRIR).

For a general incoming sound wave I(t; θ, φ) arriving at the left ear from direction (θ, φ), the ear transforms this incoming wave to Il(t; θ, φ) = hl(t; θ, φ) ∗ I(t; θ, φ). Similarly, if the right ear were at the origin, the sound pressure measured at the right ear would be Ir(t; θ, φ) = hr(t; θ, φ) ∗ I(t; θ, φ).

Clearly, the left and right ear are at different locations in space, so the hl and hr must be suitably shifted in time relative to each other. Additionally, the above analysis assumes a single source direction (φ, θ). If there were multiple sound sources in different directions, then a sum of the different φ, θ would be required for analysis.
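A minimal numerical sketch of this convolution picture, using toy HRIRs invented for illustration (the tap positions and gains below are hypothetical, not measured data; real HRIRs are recorded with in-ear microphones):

```python
import numpy as np

# Toy (hypothetical) HRIRs for a single source direction to the listener's
# left: each is a direct-path tap plus one reflection. The right-ear
# response arrives later and quieter, since that ear is shadowed.
h_left = np.zeros(64)
h_left[3], h_left[20] = 1.0, 0.3
h_right = np.zeros(64)
h_right[8], h_right[25] = 0.7, 0.2

# Incoming sound I(t): a unit click (discrete impulse)
signal = np.zeros(256)
signal[0] = 1.0

# Ear signals are the convolution of the source with each ear's HRIR
out_left = np.convolve(signal, h_left)
out_right = np.convolve(signal, h_right)

# The left ear hears the click earlier and louder than the right; the
# brain reads these as interaural time and level differences.
print(np.argmax(out_left), np.argmax(out_right))   # 3 8
```

With multiple sources, one would sum the convolutions over the different directions, as described above.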

The functions hl and hr vary from person to person, as they depend on the shape of the individual’s head, ears, and shoulders. Typically, for any single person, hl is approximately a mirror reflection of hr across the (medial) plane of symmetry of the body.

A tiny microphone within a person’s auditory canal is used to measure the HRIR function by recording the sound produced by an impulse function and then repeating the experiment for many different directions. Mannequins are also used for scientific study of HRIR functions.

A previous equation related the sound measured at the left ear to the sound produced by the source as:

Il(t; θ, φ) = hl(t; θ, φ) ∗ I(t; θ, φ)

The Fourier transform can be taken and the convolution theorem applied to yield:

Îl(ω; θ, φ) = ĥl(ω; θ, φ) Î(ω; θ, φ)

where ĥl(ω; θ, φ) is called the head-related transfer function (HRTF). Different source directions yield different HRTFs. HRTFs are useful for evaluating how the human head affects the various frequency components of a source arriving from a given direction.

The HRTF is a complex-valued function that specifies a gain and a phase shift for each frequency ω of the incoming sound wave. The gain and phase shift correspond to the net effect of shadowing by the head, reflections off the shoulders and pinna, and any attenuation or amplification within the auditory canal.
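The convolution-theorem step can be verified numerically. This sketch uses random stand-in signals rather than real HRIRs, and zero-pads before transforming so the circular FFT product matches the linear convolution:

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(32)    # stand-in impulse response
x = rng.standard_normal(128)   # stand-in source signal

# Time domain: linear convolution, y = h * x
y_time = np.convolve(x, h)

# Frequency domain: multiply the transforms, then invert.
# Zero-padding to the full output length avoids circular wrap-around.
n = len(x) + len(h) - 1
y_freq = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

print(np.allclose(y_time, y_freq))   # True
```

This is also how HRTF filtering is usually applied in practice: multiplication in the frequency domain is far cheaper than direct convolution for long signals.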
 

Kal Rubinson

Here are some graphic representations of measured human HRTF.
[attached image: HRTF.JPG]
 
AudioStudies

Middle Ear Processing

Introduction

The middle ear, an air-filled cavity behind the eardrum, is a profoundly complex and efficient system for transmitting sound from the external ear to the oval window opening of the inner ear. This transmission occurs via vibrations of a rigid chain of three small movable bones, called ossicles, that serve as a mechanical lever system.

Sound waves from the external ear that reach the eardrum are converted to acoustic vibrations within the middle ear. These vibrations of the tympanic membrane (eardrum) drive the ossicles that convey the acoustic signal to a second membrane, the oval window, which subsequently transmits the signal to an incompressible fluid within the cochlea of the inner ear.

The middle ear system is extremely sensitive; as sounds of very low intensity are efficiently conveyed from the eardrum to the cochlea. The threshold of human hearing at 1 kHz is near a sound pressure level (SPL) of 0 dB; corresponding to a motion of the stapes of around 1 pm. Additionally, relatively little acoustic power (<3 dB) is lost in the process.
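For reference, the dB SPL scale used above is defined relative to a pressure of 20 µPa, the nominal threshold of hearing. A minimal sketch of the conversion:

```python
import math

P_REF = 20e-6   # reference pressure: 20 micropascals

def spl_db(pressure_pa):
    """Sound pressure level in dB re 20 uPa: SPL = 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))   # 0.0 -> the nominal 1 kHz threshold of hearing
print(spl_db(1.0))     # ~94 dB -> a 1 pascal tone
```

The logarithmic scale compresses the enormous dynamic range of hearing, from the picometer-scale stapes motions at threshold to the damaging levels discussed below, into manageable numbers.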

Retrograde signals are acoustic signals transmitted in the reverse direction, from the oval window back to the eardrum. Little power is lost for these retrograde signals; thus low-level vibrations generated in the cochlea by nonlinear motions of the outer hair cells can be measured in the ear canal. These low-level signals are otoacoustic emissions, and their measurement is an essential technique for studying the biomechanics of hearing.

The middle ear is incredibly robust; even extremely intense sounds (on the order of 120 dB SPL) will not damage it. The inner ear, in contrast, can be substantially damaged by sounds of such intensity.

The middle ear serves an effective protective mechanism in the form of the acoustic reflex, which helps protect the cochlea from intense low-frequency vibration that can be transmitted to the ear by bone conduction. Acoustic reflex also protects the inner ear from intense sound, but only to a limited extent; it is too slow to protect the inner ear from intense sounds of short duration.

When acoustic pressure waves travel through the ear canal (without obstacles such as ear wax), acoustic power is conveyed unimpeded to the eardrum. The eardrum then conducts the acoustic power into the middle ear with frequency-dependent efficiency. Only a portion of the incident power that reaches the eardrum enters the middle ear, as power is also reflected back into the ear canal. The reflected power appears as a retrograde (backward-moving) pressure wave in the ear canal. Power reflectance is defined as the percentage of incident power reflected back into the ear canal. Power reflectance varies as a function of frequency and depends on how the acoustic impedance of the eardrum varies with frequency.
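A sketch of the power-reflectance idea, using the standard formula for reflection at an impedance discontinuity. The numeric impedances below are illustrative round numbers, not measured eardrum values:

```python
def power_reflectance(z_load, z0):
    """Fraction of incident acoustic power reflected where a medium of
    characteristic impedance z0 meets a load impedance z_load:
    R = |(z_load - z0) / (z_load + z0)|**2."""
    gamma = (z_load - z0) / (z_load + z0)
    return abs(gamma) ** 2

# A matched load reflects nothing; all power enters the middle ear.
print(power_reflectance(415.0, 415.0))                # 0.0
# A load ten times the canal impedance reflects roughly two-thirds of the power.
print(round(power_reflectance(4150.0, 415.0), 3))     # 0.669
```

Since the eardrum's impedance varies with frequency, so does the reflectance, which is exactly what clinical wideband reflectance measurements exploit.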

The term "impedance of the middle ear" appears in the literature; however, it is a bit of a misnomer, since impedance is measured at the location of a microphone. The impedance of the eardrum could be inferred if the precise distance from the microphone to the eardrum were known. In practice, however, this distance cannot be known, because the eardrum sits at an angle (its length is undefined).

Clearly, the mechanical load on the eardrum is attributable to the middle ear. However, when the term "the impedance of the middle ear" appears in the literature, it most likely means the ear-canal impedance at the microphone location, which is a delayed version of the eardrum impedance that includes the impedance load of the middle ear.
 

Kal Rubinson

I have never heard the term "impedance of the middle ear" and, I agree, it is not a useful one. However, the major role (imho) of the middle ear is its function as an impedance transformer, conveying the air-borne energy of the external world and external ear to the incompressible fluid medium of the endolymph on the other side of the oval window. Generally, passing mechanical energy from air to a fluid is very lossy, as is easily experienced by listening to ambient sounds at the beach: notice how much quieter they are when one submerges below the surface of the water.

The mechanical linkage of the middle ear is passive so it cannot "amplify" the energy but the middle ear can optimize the transfer by two mechanisms. One is the result of the construction of the middle ear bones as a lever mechanism and the other is the result of the area difference between the tympanic membrane and that of the oval window.
[attached image: Middle Ear.JPG]


The consequence of these mechanisms is that the HRTF defined by the head and external ear is further transformed by the middle ear. Here are graphic representations of the contributions of these operations. Taken from: Oo, Nay & Gan, Woon-Seng & Hawksford, Malcolm. (2011). Perceptually-Motivated Objective Grading of Nonlinear Processing in Virtual-Bass Systems. AES: Journal of the Audio Engineering Society. 59. 804-824.
[attached image: Middle Ear Function.JPG]
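These two mechanisms, the ossicular lever and the eardrum-to-oval-window area ratio, can be put into rough numbers. The values below are common textbook approximations (effective eardrum area ~55 mm², oval-window area ~3.2 mm², lever ratio ~1.3) and vary across sources:

```python
import math

# Illustrative textbook values (approximate; sources differ):
AREA_TM = 55e-6       # effective tympanic-membrane area, m^2 (~55 mm^2)
AREA_OW = 3.2e-6      # oval-window footplate area, m^2 (~3.2 mm^2)
LEVER_RATIO = 1.3     # malleus/incus lever-arm ratio

# Same force over a smaller area, times the lever advantage,
# yields a large pressure gain at the oval window.
pressure_gain = (AREA_TM / AREA_OW) * LEVER_RATIO
gain_db = 20 * math.log10(pressure_gain)
print(f"pressure gain ~{pressure_gain:.0f}x (~{gain_db:.0f} dB)")

# Without this transformer, a bare air-to-water interface transmits only
# T = 4*Z1*Z2 / (Z1 + Z2)^2 of the incident power.
Z_AIR, Z_WATER = 415.0, 1.48e6       # characteristic impedances, rayl (Pa*s/m)
T = 4 * Z_AIR * Z_WATER / (Z_AIR + Z_WATER) ** 2
print(f"direct air-to-water transmission: {T:.4f} (~{-10 * math.log10(T):.0f} dB loss)")
```

The ~27 dB of transformer gain roughly offsets the ~30 dB that would otherwise be lost at the air-fluid boundary, which is the quantitative version of Kal's beach example.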
 
AudioStudies

The mechanical linkage of the middle ear is passive so it cannot "amplify" the energy but the middle ear can optimize the transfer by two mechanisms. One is the result of the construction of the middle ear bones as a lever mechanism and the other is the result of the area difference between the tympanic membrane and that of the oval window.
Thanks for this fantastic post !
 

Kal Rubinson

maybe we should go back to a lateral line system
That doesn't suit our life-style... at least, not yet.
Lol! Definitely an old design. No new work on mammal designs from the big guy upstairs for decades now, as far as I know.
Yes but the world has changed (and we did it) to the point that the mechanism isn't up to protecting us from man-made noises.
 

Wes

Major Contributor
Forum Donor
Joined
Dec 5, 2019
Messages
3,843
Likes
3,790
well, we need better noise control

if enough people move out of the big cities, then...
 

Wes

Lol! Definitely an old design. No new work on mammal designs from the big guy upstairs for decades now, as far as I know.

well, seals can detect low freqs. using their vibrissae - jealous?

you could save a lot of $$ on big subs that way

then there is this:

https://www.nature.com/articles/s41467-019-10768-y#Abs1

Evolution (or at least selection) hasn't stopped - despite the claims of stasis in sponges
 

onion

Senior Member
Joined
Mar 5, 2019
Messages
342
Likes
383
I'd add something on localisation (the ear-brain using interaural time difference and inter-aural level difference) which is what allows us to hear the world in 3d.
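One classic back-of-the-envelope model for the interaural time difference mentioned here is Woodworth's spherical-head approximation; the head radius and speed of sound below are assumed typical values, not measurements:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head estimate of the interaural time
    difference: ITD = (a / c) * (theta + sin(theta)), with the source
    azimuth theta measured from straight ahead."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

print(f"{itd_woodworth(0) * 1e6:.0f} us")    # 0 us: source straight ahead
print(f"{itd_woodworth(90) * 1e6:.0f} us")   # ~656 us: source directly to one side
```

Sub-millisecond differences of this size dominate localisation at low frequencies, while interaural level differences from head shadowing take over at high frequencies.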
 
AudioStudies

I'd add something on localisation (the ear-brain using interaural time difference and inter-aural level difference) which is what allows us to hear the world in 3d.
Thanks, I have been reviewing the literature on those topics. Hopefully, I can find time to prepare a document soon.
 