
How to achieve a height sound effect on a desktop 2.1 system without Dolby Atmos? I want to simulate cathedral-like or concert-hall height.

mel (Senior Member):
https://en.wikipedia.org/wiki/Sound_localization#3D_para-virtualization_stereo_system

  • Representative systems of this kind are SRS Audio Sandbox, Spatializer Audio Lab and Qsound Qxpander.[22]
  • They use HRTFs to simulate the acoustic signals received at the ears from different directions with common two-channel stereo reproduction. They can therefore simulate reflected sound waves and improve the subjective sense of space and envelopment.
  • Since they are para-virtualization stereo systems, their major goal is to simulate stereo sound information.
  • Traditional stereo systems use sensors that are quite different from human ears. Although those sensors can receive acoustic information from different directions, they do not have the same frequency response as the human auditory system.
  • Therefore, when a two-channel mode is applied, human auditory systems still cannot perceive a 3D sound field.
  • However, the 3D para-virtualization stereo system overcomes such disadvantages.
  • It uses HRTF principles to glean acoustic information from the original sound field and then produce a lively 3D sound field through common earphones or speakers (see the rendering sketch below).
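To make the HRTF idea above concrete, here is a minimal rendering sketch (not any particular product's algorithm): a dry mono signal is convolved with a measured head-related impulse response (HRIR) pair for the desired direction, and the two results feed the left and right channels. The `hrir_left`/`hrir_right` arrays are assumed to come from a measured HRTF database; the function name is illustrative only.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono, hrir_left, hrir_right):
    """Place a mono source at one direction for ordinary two-channel playback.

    Convolving the dry signal with the HRIR pair for that direction imposes
    the ITD, ILD and pinna-filtering cues the brain uses for localization.
    """
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right])  # shape: (2, samples)
```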

https://en.wikipedia.org/wiki/Sound_localization#Multichannel_stereo_virtual_reproduction
Multichannel stereo virtual reproduction

Since the multichannel stereo systems require many reproduction channels, some researchers adopted the HRTF simulation technologies to reduce the number of reproduction channels.[22] They use only two speakers to simulate multiple speakers in a multichannel system.
  • This process is called virtual reproduction.
  • Essentially, such an approach uses HRTF simulation so that two speakers can stand in for the missing physical channels.
  • Unfortunately, this kind of approach cannot perfectly substitute for a traditional multichannel stereo system,
    • such as a 5.1/7.1 surround sound system.
    • That is because, when the listening zone is relatively large,
      • simulated reproduction through HRTFs may cause inverted acoustic images at symmetric positions.

  • The idea is to stack two columns (2.2) of active speakers in a "line-array-like" fashion.
    • Each column contains multiple (two (2.2.2) or three (2.2.3)) speakers to achieve a pronounced sense of spatial height.
    • https://en.wikipedia.org/wiki/Head-related_transfer_function (Apple Spatial Audio)
      • FR of both ears, rather than
        • one microphone for
        • one speaker
          • enclosing multiple drivers,
          • possibly ported from rear near a wall.
        • What I hear is not what the manufacturer measures.
      • A simple model of ears.
        • 3D model of biomechanics of ear
        • https://assets.ctfassets.net/4zjnzn...8a62/genelec_the_ones_brochure_2019_web_0.pdf
          • Our ears receive vastly more information than we can actually perceive. All senses are filtered this way, and bring conscious awareness of only a tiny fraction of the data registered by the body. Hearing is our most precise sense, timing-wise, involving substantial “pre-processing” in the brainstem and several reflexes ahead of conscious recognition.
            The outer ears are sophisticated entry points that are needed to identify the direction from which sounds arrive, but we’re not just carrying around two very personal, directional microphones. Our ears and brain work together in a continuous feedback loop with an abundance of nerve impulses going back and forth to fine-tune reception in the middle and inner ear, over a range of 60 dB. We also use head movements to reach out for detail, such as discriminating between direct sound and reflections in a room, and the brain’s left/right ear comparisons rely on the most energy-consuming nerve synapses of the body.
            Therefore, only a trained listener can perceive the finer nuances of auditory sensation. Musicians and audio professionals learn to attain a heightened awareness of imaging, pitch, spectral balance, transients and other qualities. The acuity of a trained listener cannot be overestimated, and that is the type of user for which THE ONES have been designed.
  • I am not interested in a Dolby Atmos 7.1.4 configuration reflecting or directing sound off or from the ceiling.
  • I listen to
    • acoustic music (jazz, classical),
    • at conversation volume levels,
    • in the near-field (32" distance).
  • I am considering six Genelec 8331A speakers, based on the chart at bottom of post.
    • Height: 11.25", 12" (with stand)
    • Width: 7.5"
    • Depth: 8.37"
Proof of a "2.2.3" concept is demonstrated in the following picture.

My impression is the 3D sense is expanded in all three dimensions. I can hear -- and feel -- the recording hall wall reflections and reverberations better, for example.


BTW, an interesting aside about Dolby Atmos capability:

I feel haptic feedback in the Dolby Atmos recording because my table vibrates from the subwoofer when only a single violin seems to play in the video. I can see the 20 Hz frequency bar register on my RME DAC's level meter when only a violin appears to play. This is a deliberate recording trick to exploit Dolby Atmos functionality. I didn't notice a cello, bassoon or similar instrument. I suspect a single string was plucked.

The height sound effect is sensational!
  • The effect is greater than the fantastic Dolby Atmos recording.
  • I have listened to this recording more than a dozen times with my AudioEngine HD3 speakers connected in the conventional R and L channel manner.
    • Two REL TZero Mark III subwoofers are connected to the HD3 in the normal configuration.
    • Only one subwoofer when HD3 connected to R channel.

[Image: line-array speaker stack, angled for a height effect, like the Leaning Tower of Pisa]


[Image: distance chart]



I evaluate music on a triad basis:

  1. Power
  2. Color
  3. Pitch

  1. Rhythm
  2. Melody
  3. Harmony
The scale is:
-3 Worst
-2 Worse
-1 Bad
0 OK
+1 Good
+2 Better
+3 Best
 

https://www.genelec.com/monitor-placement

Consider the chair's reclining angle. I recline further backwards in my seat for a greater overhead effect.
  • Lower speakers will probably align with ears.
  • Higher speakers could be angled down 15 degrees.
  • Note "Isopod tilting"
    • 12" height adjustment range

For typical two-way systems, the recommended height of the monitor acoustical axis is at the ear level,
  • usually between 1.2 and 1.4 metres from the floor.
  • Placing the monitors higher with a slight tilt will minimise floor reflections.
  • For standard stereo and multichannel reproduction,
    • do not lift the monitors so high that more than 15 degrees of tilt is required.
    • Monitors should always be aimed towards the listening position.
  • The higher the monitor is from the floor, the lower the reflection-induced frequency response irregularities.
  • However, half room height placement should be avoided, as at low frequencies the ceiling is typically also a reflective surface.
The most accurate stereo imaging can be achieved when the reflections are similar for the left and the right monitor in a stereo pair. This can be achieved by maintaining the same distance to the nearest side wall and the wall behind the monitor, placing the left and right monitors to the same height in the room, and placing the listening location symmetrically in the room in the left-right direction.

To avoid cancellation of audio because of the sound reflecting back from the wall behind the monitor, follow the placement guideline pictured below. The wall reflection happens at relatively low woofer frequencies only. Avoiding the cancellation is important because the reflected sound can reduce the woofer output causing the monitor low frequency output to appear to be too low, thus resulting e.g. on mistakes in the final mix in music production. To avoid the cancellation, place the monitor close enough to the wall. Typically the distance from the monitor front to the wall should be less than 60 centimetres. This ensures that the low frequency output is not reduced. Additionally, the monitor needs a minimum clearance of 5 cm to the wall to ensure full output from the rear bass reflex port.
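As a rough sanity check of that guideline, the first cancellation notch from a single wall reflection sits where the extra path (about twice the monitor-to-wall distance) equals half a wavelength. A minimal free-field sketch, ignoring room modes and woofer directivity:

```python
C = 343.0  # speed of sound, m/s

def first_wall_notch_hz(distance_to_wall_m):
    """Quarter-wavelength cancellation frequency for a wall behind the monitor.

    The reflected wave travels roughly 2*d further than the direct sound, so
    the first notch appears where 2*d equals half a wavelength: f = c / (4*d).
    """
    return C / (4.0 * distance_to_wall_m)

for d in (0.05, 0.3, 0.6, 1.2, 2.0):
    print(f"{d:4.2f} m from wall -> first notch near {first_wall_notch_hz(d):6.0f} Hz")
```

With the recommended distance of under 0.6 m the notch lands above roughly 140 Hz, while at 1-2 m it falls into the deep bass, which appears to be the dip the guideline is trying to avoid.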


Two speakers per ear, or a 2.2.2 speaker configuration. The cones represent the hearing localization area for each ear. The Dolby Atmos minimum is 5.1.2, or seven speakers plus a subwoofer.
  • One speaker aims upwards towards the ear, displaced towards the rear.
  • The other speaker points downward, pulled closer towards the ear.


[Image: model for speaker distance (cylinder) and head (sphere)]


https://www.genelec.com/8331a
https://www.genelec.com/key-technologies/directivity-control-waveguide-technology

Directivity Control Waveguide (DCW™) for flat on- and off-axis response.
A revolutionary approach was taken by Genelec in 1983 with the development of its Directivity Control Waveguide (DCW™) used at the time in an egg-shaped enclosure. The Genelec DCW technology developed and refined over more than 30 years greatly improves the performance of direct radiating multi-way monitors.
The DCW technology shapes the emitted wavefront in a controlled way, allowing predictable tailoring of the directivity (dispersion) pattern. To make the directivity uniform and smooth, the goal is to limit the radiation angle so that the stray radiation is reduced. It results in excellent flatness of the overall frequency response as well as uniform power response. This advanced DCW technology minimizes early reflections and provides a wide and controlled listening area achieving accurate sound reproduction on- and off-axis.
Minimized early reflections and controlled, constant directivity have another important advantage: the frequency balance of the room reverberation field is essentially the same as the direct field from the monitors. As a consequence, the monitoring system's performance is less dependent on room acoustic characteristics.
Sound image width and depth, critical components in any listening environment, are important not only for on-axis listening, but also off-axis. This accommodates not only the engineer doing his or her job, but also others in the listening field, as is so often the case in large control rooms.
DCW™ Technology key benefits:
  • Flat on- and off-axis response for wider usable listening area
  • Increased direct-to-reflected sound ratio for reduced control room coloration
  • Improved stereo and sound stage imaging
  • Increased drive unit sensitivity up to 6 dB
  • Increased system maximum sound pressure level capacity
  • Decreased drive unit distortion
  • Reduced cabinet edge diffraction
  • Reduced complete system distortion

https://assets.ctfassets.net/4zjnzn...8a62/genelec_the_ones_brochure_2019_web_0.pdf

https://www.genelec.com/monitor-placement
https://assets.ctfassets.net/4zjnzn...478c083c419/GLM_4_System_Operating_Manual.pdf

GLM AND REFERENCE MONITORING


The frequency response of all monitors will change depending on their placement in a room, and therefore each needs to be aligned and calibrated after positioning to ensure reliable listening conditions. Genelec monitors have always featured manual EQ switches to compensate for placement, but THE ONES enable even more accurate automated compensation, allowing reference listening under previously intolerable conditions.
Using GLM™ (Genelec Loudspeaker Manager) software, Genelec smart active monitoring systems such as THE ONES can be easily installed, calibrated and managed, and the same monitors may even be used in more than one setup. In the GLM configuration pictured, you can switch between six calibrated setups: mono, stereo, 5.1, 7.1, 7.1.2 and 7.1.4. When working in stereo, simply switch between multiple nearfield and main monitors using the same physical two-channel output from a workstation. GLM supports all reproduction systems, even those with a very high monitor and subwoofer count.
Drawn from decades’ worth of data gathered from thousands of studios, the GLM application quickly aligns the level, distance and frequency response of all monitors on the network; ensuring that setups work reliably and in compliance with the latest standards, independently of any external processing.
Moving beyond nearfield listening, a major challenge in reference monitoring is to obtain a low frequency response without pronounced peaks and dips, particularly with systems not built into a studio’s walls. GLM integrates closely with the new W371 Adaptive Woofer System to make such concerns a thing of the past, by creating free-standing monitoring systems uncompromised by LF colouration.
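GLM measures these alignments itself, but the distance part of the idea is easy to illustrate: delay the closer monitors so every direct arrival lines up at the listening position. A minimal sketch with hypothetical distances (not GLM's actual algorithm):

```python
C = 343.0  # speed of sound, m/s

def delay_alignment_ms(distances_m):
    """Added delay per monitor so all direct arrivals coincide at the listener.

    The farthest monitor gets no added delay; each closer monitor is delayed
    by the extra travel time of the farthest one.
    """
    farthest = max(distances_m.values())
    return {name: round((farthest - d) / C * 1000.0, 2)
            for name, d in distances_m.items()}

# Hypothetical desktop 2.2.2 layout, distances in metres from the listening spot
print(delay_alignment_ms({"L": 0.81, "R": 0.81,
                          "L height": 0.95, "R height": 0.95,
                          "sub 1": 1.20, "sub 2": 1.10}))
```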

 

The vertical soundbar has a much more pleasing "smooth sound gradient", which I rate at Better (+2). I rate the horizontal dispersion of the HD3 speakers as Bad (-1) for sound effects like thunder. Some might find the vertical sound bar too subtle or unintelligible.




The HD3's (horizontal dispersion) tonal quality is much higher than the vertical soundbar's. The balance between smooth dispersion and rich tonal quality might be an implicit compromise that we often overlook. Sound qualities must be prioritized to reach the best compromise.


https://www.genelec.com/key-technologies/minimum-diffraction-coaxial-driver-technology

  • Smoother frequency response
  • Ensures the drivers couple coherently over their full operating bandwidth
  • Significantly improves the directivity control in the critical frequency range
  • Provides balanced suspension dynamics to minimize acoustic distortion

https://www.genelec.com/key-technologies/minimum-diffraction-enclosure-technology

A common problem with standard free-standing loudspeakers is that the front baffle discontinuities cause diffractions and the loudspeaker sharp corners act as secondary sources through reflections.
 

I feel safe with the Genelec GLM software as a versatile way to build a modular system. I will probably end up with an active Genelec 7.2.4 system. I have a 7.2.0 AV receiver and speakers that have fallen into disuse, and I am very reluctant to make another big investment. Genelec allows me to build incrementally, from a simple 2.2.2 to any higher scale.

I like jazz as much as classical. I find this hybrid horizontal/vertical speaker dispersion prototype system works very well for classical.

Jazz is recorded by encoding instruments into discrete channels. Having a different type of speaker dispersion for each channel results in poor sound for jazz.

https://www.genelec.com/-/case-study-land-of-the-rising-sam-imagica-opts-for-genelec

Although effective 5.1 monitoring requires precision in speaker position, balance and flight time adjustments, Genelec Loudspeaker Manager (GLM™ 2.0) software allows the user to easily establish an ideal surround sound monitoring environment. GLM™ 2.0 can be used as a versatile monitor controller to perform instantaneous switching between 5.1 surround sound and 2-channel stereo operations as and when required.

I considered buying Apple AirPods Max to hear Dolby Atmos. However, that functionality is only available to me over Bluetooth. The built-in audio on my M1 MacBook Pro does not sound as good as my RME DAC, and connecting the headphones to the RME DAC means I lose Dolby Atmos. I do not see the benefit.
 

The argument for a 2.2.2 system over a 5.1.4 or 7.1.4 appears to be strengthened by the following. However, how to improve the sense of height in audio, in specific technical terms, still remains unclear.

The key points:
  • The pinna filters the sound in a way that is directionally dependent.
    • This is particularly useful in determining whether a sound comes from above, below, in front, or behind.
  • Interaural time and level differences (ITD, ILD) play a role in azimuth perception but cannot explain vertical localization.
    • Frequencies above 7 kHz are required for vertical sound resolution.
    • Frequencies below 1 kHz matter particularly for lateral sound resolution.
  • Once the brain has analyzed IPD, ITD, and ILD, the location of the sound source can be determined with relative accuracy.
    • The average human has the remarkable ability to locate a sound source with better than 5° accuracy in both azimuth and elevation, in challenging environments.


https://en.wikipedia.org/wiki/Perceptual-based_3D_sound_localization#HATS:_Head_and_Torso_Simulator

Brüel & Kjær's Head and Torso Simulator (HATS) is a mannequin prototype with built-in ear and mouth simulators that provides a realistic reproduction of the acoustic properties of an average adult human head and torso. It is designed to be used in electro-acoustics tests, for example, headsets, audio conference devices, microphones, headphones and hearing aids. Various existing approaches are based on this structural model.[6]


https://www.nist.gov/publications/nist-hearing-aid-test-procedures-and-test-data

A description is given of the hearing aid testing done for the Department of Veterans Affairs. Emphasis is given to the determination of the insertion frequency response using Fast Fourier Transform (FFT) techniques, but the other measurements entailed are delineated. These include the measurement of saturation and sound pressure level, telephone coil sensitivity, total harmonic distortion, equivalent input noise level, and battery drain. The data resulting from these tests are included.

FR curve of both ears:

https://en.wikipedia.org/wiki/Perceptual-based_3D_sound_localization

Emulating the mechanisms of binaural hearing can improve recognition accuracy and signal separation in DSP algorithms, especially in noisy environments.[3] Furthermore, by understanding and exploiting biological mechanisms of sound localization, virtual sound scenes may be rendered with more perceptually relevant methods, allowing listeners to accurately perceive the locations of auditory events.[4] One way to obtain perceptual-based sound localization is from sparse approximations of the anthropometric features. Perceptual-based sound localization may be used to enhance and supplement robotic navigation and environment recognition capability.[1] In addition, it is also used to create virtual auditory spaces, which are widely implemented in hearing aids.

https://en.wikipedia.org/wiki/Perce...lization#Problem_Statement_and_Basic_Concepts

While the relationship between human perception of sound and various attributes of the sound field is not yet well understood,[2] DSP algorithms for sound localization are able to employ several mechanisms found in neural systems, including the interaural cues (ITD, ILD, IPD) discussed below.

https://en.wikipedia.org/wiki/Perceptual-based_3D_sound_localization#Particle_Based_Tracking

It is essential to be able to analyze the
  • distance and
  • intensity
  • of various sources in a spatial domain.
  • We can track each such sound source,
    • by using a probabilistic temporal integration,
    • based on data obtained through a microphone array and
    • a particle filtering tracker.
    • Using this approach, the Probability Density Function (PDF) representing the location of each source is represented as a set of particles to which different weights (probabilities) are assigned.
    • The choice of particle filtering over Kalman filtering is further justified by the non-Gaussian probabilities arising from false detections and multiple sources.[7] (A minimal sketch of such a tracker follows below.)
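For intuition, here is a minimal bootstrap particle filter in the spirit of the approach above, tracking a single source's azimuth from noisy per-frame azimuth estimates (e.g. derived from ITDs). It is only a sketch of the idea, not the cited system; the Gaussian random-walk motion and measurement models are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def track_azimuth(measurements_deg, n_particles=500,
                  process_std=2.0, meas_std=5.0):
    """Bootstrap particle filter over source azimuth (degrees).

    measurements_deg: one noisy azimuth estimate per frame; np.nan marks a
    frame with a missed or false detection (the non-Gaussian case that
    motivates particles over a Kalman filter).
    """
    particles = rng.uniform(-90.0, 90.0, n_particles)            # initial belief
    weights = np.full(n_particles, 1.0 / n_particles)
    track = []
    for z in measurements_deg:
        particles += rng.normal(0.0, process_std, n_particles)   # predict step
        if not np.isnan(z):
            # update: weight each particle by the measurement likelihood
            weights *= np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
            weights += 1e-300
            weights /= weights.sum()
        # the weighted particle set is the PDF; use its mean as the estimate
        track.append(float(np.sum(weights * particles)))
        # resample when the effective particle count collapses
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(track)
```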
https://en.wikipedia.org/wiki/Perceptual-based_3D_sound_localization#ITD,_ILD,_and_IPD


According to the duplex theory, ITDs have a greater contribution to the localisation of
  • low frequency sounds (below 1 kHz),[4] while
  • ILDs are used in the localisation of high frequency sound.
  • These approaches can be applied to selective reconstructions of spatialized signals, where spectrotemporal components believed to be dominated by the desired sound source are identified and isolated through the Short-time Fourier transform (STFT).
  • Modern systems typically compute the STFT of the incoming signal from two or more microphones, and estimate the ITD of each spectrotemporal component by comparing the phases of the STFTs (see the sketch after this list).
    • An advantage to this approach is that it may be
      • generalized to more than two microphones,
      • which can improve accuracy in 3 dimensions and
      • remove the front-back localization ambiguity
        • that occurs with only two ears or microphones.[1]
    • Another advantage is that the ITD is relatively strong and easy to obtain
      • without biomimetic instruments such as dummy heads and artificial pinnae,
      • though these may still be used to enhance amplitude disparities.
      • [1] HRTF phase response is
        • mostly linear and
        • listeners are insensitive to the details of the interaural phase spectrum
          • as long as the interaural time delay (ITD) of the combined low-frequency part of the waveform is maintained.
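A minimal sketch of that STFT-phase approach for two channels (two ears or two microphones), under the usual assumption that the phase is unambiguous only below roughly 1-1.5 kHz for head-sized spacings; the 0.4 ms delay in the demo is an arbitrary test value:

```python
import numpy as np
from scipy.signal import stft

def itd_from_stft(x_left, x_right, fs, nperseg=1024):
    """Per-bin ITD estimate from the interaural phase of two STFTs."""
    f, t, XL = stft(x_left, fs=fs, nperseg=nperseg)
    _, _, XR = stft(x_right, fs=fs, nperseg=nperseg)
    ipd = np.angle(XL * np.conj(XR))            # interaural phase difference
    itd = np.zeros_like(ipd)
    nz = f > 0                                  # skip the DC bin
    itd[nz, :] = ipd[nz, :] / (2 * np.pi * f[nz, None])
    return f, t, itd

# Demo: broadband noise delayed by 0.4 ms in the right channel
fs, true_itd = 44100, 0.0004
src = np.random.randn(fs)
d = int(round(true_itd * fs))
left, right = src, np.r_[np.zeros(d), src[:-d]]
f, t, itd = itd_from_stft(left, right, fs)
band = (f > 200) & (f < 1000)                   # unambiguous low-frequency band
print(f"median ITD estimate: {np.median(itd[band, :]) * 1e6:.0f} us (true 400 us)")
```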

https://en.wikipedia.org/wiki/Perceptual-based_3D_sound_localization#ITD,_ILD,_and_IPD

Interaural level difference (ILD) is
  • best for high frequency sounds because low frequency sounds are not attenuated much by the head.
  • ILD (also known as Interaural Intensity Difference) arises when
    • the sound source is not centred,
    • the listener's head partially shadows the ear opposite to the source,
      • diminishing the intensity of the sound in that ear (particularly at higher frequencies).
  • The pinna filters the sound in a way that is directionally dependent.
    • This is particularly useful in determining if a sound comes from above, below, in front, or behind.

Interaural time and level differences (ITD, ILD) play a role
  • in azimuth perception
    • but can’t explain vertical localization.

https://en.wikipedia.org/wiki/Perceptual-based_3D_sound_localization#ITD,_ILD,_and_IPD
  • According to the duplex theory,
    • ITDs have a greater contribution to the localisation of low frequency sounds (below 1 kHz),while
    • ILDs are used in the localisation of high frequency sound.[8]
      • The ILD arises from the fact that a sound coming from a source located to one side of the head will have a higher intensity, or be louder, at the ear nearest the sound source.
      • One can therefore create the illusion of a sound source emanating from one side of the head merely by adjusting the relative level of the sounds that are fed to two separated speakers or headphones.
        • This is the basis of the commonly used pan control.
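A minimal sketch of such a pan control, using the common constant-power (sin/cos) law so the level-only ILD-style cue moves the image without an overall loudness change; the 440 Hz tone is just a stand-in signal:

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Pan a mono signal between two speakers by level alone (an ILD-style cue).

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    """
    theta = (pan + 1.0) * np.pi / 4.0      # map pan to 0..pi/2
    return np.cos(theta) * mono, np.sin(theta) * mono

# Example: a 440 Hz tone placed halfway to the right
fs = 48000
t = np.arange(fs) / fs
tone = 0.2 * np.sin(2 * np.pi * 440 * t)
left, right = constant_power_pan(tone, pan=0.5)
```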

Interaural Phase Difference (IPD) refers to the
  • difference in the phase of a wave that reaches each ear, and
  • is dependent on the
    • frequency of the sound wave and the
    • interaural time differences (ITD).[8]

Once the brain has analyzed IPD, ITD, and ILD,
  • the location of the sound source can be determined with relative accuracy.

https://en.wikipedia.org/wiki/Interaural_time_difference

The interaural time difference (ITD), for humans or animals, is the difference in arrival time of a sound between the two ears. It is important in the localization of sounds, as it
  • provides a cue to the direction or angle of the sound source from the head.
  • If a signal arrives at the head from one side, the signal has further to travel to reach the far ear than the near ear.
  • This pathlength difference results in a time difference between the sound's arrivals at the ears, which is detected and aids the process of identifying the direction of sound source.
  • When a signal is produced in the horizontal plane, its angle in relation to the head is referred to as its azimuth, with 0 degrees (0°) azimuth being directly in front of the listener, 90° to the right, and 180° being directly behind.
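The azimuth-to-ITD relationship is often approximated with the classic spherical-head (Woodworth) formula; a small sketch, assuming a nominal 8.75 cm head radius:

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate ITD for a far-field source heard by a rigid spherical head.

    azimuth_deg: 0 = straight ahead, 90 = directly to one side.
    """
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

for az in (0, 15, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD about {woodworth_itd(az) * 1e6:4.0f} us")
```

The maximum comes out around 650 microseconds at 90 degrees, which matches the commonly quoted figure for an adult head.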

Head-related transfer functions
  • contain all the descriptors of localization cues such as ITD and IID as well as monaural cues.
  • Every HRTF uniquely represents the transfer of sound from
    • a specific position in 3D space to the ears of a listener.
  • The decoding process performed by the auditory system can be imitated using an artificial setup consisting of
    • two microphones,
    • two artificial ears and a
    • HRTF database.[10]
  • To determine the position of an audio source in 3D space, the ear input signals are convolved with the inverses of all possible HRTF pairs, where the correct inverse maximizes cross-correlation between the convolved right and left signals.
  • In the case of multiple simultaneous sound sources, the transmission of sound from sources to ears can be considered a multiple-input, multiple-output system. Here, the HRTFs with which the source signals were filtered en route to the microphones can be found using methods such as convolutive blind source separation, which has the advantage of efficient implementation in real-time systems. Overall, these approaches using HRTFs can be well optimized to localize multiple moving sound sources.[10]

  • The average human has the remarkable ability to locate a sound source with better than 5° accuracy in both azimuth and elevation, in challenging environments.
 
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5036260/

In both experiments, acoustic stimuli with 44.1 kHz sampling frequency and 16 bit quantization depth were generated on a PC running MATLAB. The applied soundcard type was RME Fireface UC. Stimuli were presented to BiCI users via a stereo audio cable connected to the auxiliary inputs of the OPUS 2 processors, and to NH listeners, via Sennheiser HD 280 Pro headphones.
 


https://en.wikipedia.org/wiki/Auricle_(anatomy)#Functions

In animals the function of the pinna is to collect sound and perform spectral transformations on incoming sounds, which enable the process of
  • vertical localization to take place.[2]
  • It collects sound by acting as a funnel, amplifying the sound and directing it to the auditory canal. While reflecting from the pinna, sound also undergoes a filtering process that adds directional information to the sound.
 

This research indicates that rotating in a chair resolves auditory height. The question is:
  • How many speakers are needed for auditory height resolution?
    • My guess is the two front L/R speakers used in a conventional 2.1 system.
      • Two additional rear speakers 180 degrees opposite?
  • Rotate to which points in the following circle?
    • The 40 dots are approximately ten degrees apart.
      • Simply rotating between a few dots might resolve auditory height.
    • My guess is to rotate 45 degrees so each ear points directly at the nearest speaker (see the geometry sketch below).
    • With the L/R speakers at 60 degrees from center:
      • Rotation between 255 degrees and 105 degrees might be the minimum to resolve height.
        • Both ears pointing at a new set of speakers might be most effective?
  • According to the research, overhead speakers do not contribute height information.
    • Overhead speakers are channel- and/or object-related in Dolby Atmos.
    • A horseshoe shape (see the Dolby 5.1/7.1 layouts below).

[Image: rotating chair diagram with 40-dot circle for height resolution]
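As a quick check of the "which rotation points an ear at a speaker" question above, here is a small geometry sketch. It only assumes that an ear axis sits 90 degrees to the side of the facing direction and that the L/R speakers sit at ±60 degrees, per the layout described above; the numbers change with the speaker angle you actually use.

```python
def rotation_to_aim_ear(speaker_azimuth_deg, ear="right"):
    """Head rotation (positive = turn right) that points one ear at a speaker.

    Speaker azimuth is measured from the centre line, positive to the right;
    the right ear faces 90 deg right of straight ahead, the left ear 90 deg left.
    """
    offset = 90.0 if ear == "right" else -90.0
    rotation = speaker_azimuth_deg - offset
    return (rotation + 180.0) % 360.0 - 180.0   # wrap into -180..+180

for az, label in ((+60.0, "right speaker"), (-60.0, "left speaker")):
    for ear in ("right", "left"):
        print(f"{ear:>5s} ear at {label}: turn {rotation_to_aim_ear(az, ear):+.0f} deg")
```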

Key finding: it was the dynamic change in the binaural cues during head movement that allowed the sound to be correctly localized in the vertical dimension, even when the rotation was produced passively by seating the blindfolded subject in a rotating chair.
https://en.wikipedia.org/wiki/Sound_localization#Dynamic_binaural_cues

  • Binaural localization, however, was possible with lower frequencies. This is likely due to the pinna being small enough to only interact with sound waves of high frequency.[17] It seems that people can only
    • accurately localize the elevation of sounds that are
      • complex and
      • include frequencies above 7,000 Hz,
      • and a pinna must be present.[18]

Dynamic binaural cues

  • When the head is stationary,
    • the binaural cues for lateral sound localization
      • (interaural time difference and
      • interaural level difference)
        • do not give information about the location of a sound in the median plane.
    • Identical ITDs and ILDs can be produced by
      • sounds at eye level or
      • at any elevation,
      • as long as the lateral direction is constant.
      • However, if the head is rotated,
        • the ITD and ILD change dynamically,
          • and those changes are different for sounds at different elevations.
        • For example, if an eye-level sound source is straight ahead and the head turns to the left, the sound becomes louder (and arrives sooner) at the right ear than at the left.
        • But if the sound source is directly overhead, there will be
          • no change in the ITD and ILD as the head turns.
        • Intermediate elevations will produce intermediate degrees of change, and
          • if the presentation of binaural cues to the two ears during head movement is reversed,
            • the sound will be heard behind the listener.[13][19] Hans Wallach[20] artificially altered a sound's binaural cues during movements of the head. Although the sound was objectively placed at eye level, the
          • dynamic changes to ITD and ILD as the head rotated were those that would be
            • produced if the sound source had been elevated.
          • In this situation, the sound was heard at the synthesized elevation.
          • The fact that the sound sources objectively remained at eye level
            • prevented monaural cues from specifying the elevation,
            • showing that it was the dynamic change in the binaural cues during head movement that
              • allowed the sound to be correctly localized in the vertical dimension.
          • The head movements need not be actively produced;
            • accurate vertical localization occurred in a similar setup when the head rotation was produced passively,
              • by seating the blindfolded subject in a rotating chair.
              • As long as the dynamic changes in binaural cues accompanied a perceived head rotation,
                • the synthesized elevation was perceived.
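The Wallach result above is easy to reproduce numerically with a crude two-point-ear model: as the head turns, the ITD of a source straight ahead swings strongly at low elevations and not at all overhead. A minimal sketch, assuming a 17.5 cm ear spacing and a far-field source:

```python
import numpy as np

EAR_SPACING = 0.175   # m, assumed distance between the ears
C = 343.0             # m/s, speed of sound

def itd_s(source_az_deg, source_el_deg, head_yaw_deg):
    """Far-field ITD for two point ears: set by the lateral component of the
    source direction relative to the current head orientation."""
    rel_az = np.radians(source_az_deg - head_yaw_deg)
    el = np.radians(source_el_deg)
    return (EAR_SPACING / C) * np.sin(rel_az) * np.cos(el)

# Turn the head +/-30 deg and watch the ITD swing for a source straight ahead
yaws = np.linspace(-30, 30, 61)
for elevation in (0, 30, 60, 90):
    itds = [itd_s(0, elevation, y) for y in yaws]
    swing_us = (max(itds) - min(itds)) * 1e6
    print(f"elevation {elevation:2d} deg: ITD swing {swing_us:5.1f} us over the head turn")
```

At eye level the swing is about 500 microseconds; directly overhead it is zero, which is exactly the dynamic cue the quoted passage describes.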
 

https://support.apple.com/en-us/HT212182

Is spatial audio with dynamic head tracking available for music?

We are excited to announce that spatial audio with dynamic head tracking is coming to Apple Music in the fall. Dynamic head tracking creates an even more immersive experience for spatial audio. It brings music to life by delivering sound that dynamically adjusts as you turn your head.
 

Verify vertical sound localization for yourself by trying to locate a bird's height while only rotating your head.

I was startled by the effectiveness. If you are familiar with the Confluence Park area of Denver, you will know that the streets lie on several elevations. I actually go over and under roads, highways, light rail and train tracks, and water many times, perhaps over a dozen times on my daily rides. I certainly bicycle in 3D.

I was testing vertical sound localization for birds and was stunned to discover two trains that I was completely unaware of due to highway noise masking. The first time, the train was above me. The second time, the train was below me as I climbed a bike path ramp.

https://en.wikipedia.org/wiki/Auditory_masking#Off_frequency_listening

Auditory masking occurs when the perception of one sound is affected by the presence of another sound.[1]

Auditory masking in the frequency domain is known as simultaneous masking, frequency masking or spectral masking. Auditory masking in the time domain is known as temporal masking or non-simultaneous masking.

Auditory masking is used in tinnitus maskers to suppress the annoying ringing, hissing, or buzzing of tinnitus often associated with hearing loss. It is also used in various kinds of audiometry, including pure tone audiometry and the standard hearing test, to test each ear unilaterally and to test speech recognition in the presence of partially masking noise.

Auditory masking is exploited to perform data compression for sound signals (MP3).

https://en.wikipedia.org/wiki/Tinnitus_masker#Acoustic_qualities

Tinnitus maskers may use music or natural sounds, wide band white or pink noise, narrow band white noise, a notched soundfield, frequency or amplitude modulated sound, intermittent pulsed sound, or other patterned sound. Temporally patterned sound may be more effective than white noise or background music in masking tinnitus.[2]
 
https://www.the-home-cinema-guide.com/av-receiver-listening-modes-explained.html

As well as onboard decoders, AV receivers also come with further audio processing options, often called DSP, or Digital Signal Processing.

These extra audio processing features add extra playback options. They happen after the original soundtrack is decoded.

  • Virtual: creates a virtual 3D effect with no height speakers

With the new Ultra HD Blu-ray specification, there have been two new optional audio formats added:

  • Dolby Atmos
  • DTS:X
  • Dolby Atmos: exactly reproduces audio recorded in Dolby Atmos audio & received via bitstream (HDMI only). Can also be used on speaker systems with various speaker layouts – including 2.0/2.1, 3.0/3.1, 4.0/4.1, 5.0/5.1, 6.0/6.1, 7.0/7.1, 2.0.2/2.1.2 and 3.0.2/3.1.2
 
https://www.avsforum.com/threads/us...d-speakers-in-a-2-1-2-channel-system.2882393/

The main thing I learned in this process is that, to my ears, pure 2-channel sound was never as good as 2-channel sound with Atmos-enabled speakers and Dolby Surround ambience extraction. Each and every time I compared the two, the improved sense of space, height and ambience was there.

A good hypothesis is one thing, but what I'm interested in is tangible results. Since we're talking 2-channel audio, I wanted to put together a 2.2.2 speaker outfit using high-quality gear. I sought a solution that solidly bridges the gap between multi-channel AV systems and high-performance stereo systems, so I settled on KEF's R500 tower speakers ($2599/pair), R400b subs ($1699) and R50 Atmos-enabled modules ($1199/pair). Yes, that's a $7196 2-channel Atmos-enabled speaker system, if you are counting. And it sounded amazing.

The AVR running the show was more affordable. I used a Denon X4300H AVR ($1499) for processing and amplification—including Audyssey room correction. Because the power supply was not spread thin powering nine channels, the AVR delivered ample wattage to the towers and Atmos-enabled modules—especially with dual subs handling deep bass.

Of course, when listening to Atmos-encoded soundtracks, the discrete height effects are present in the 2.2.2 system. This can add quite a bit to the viewing experience. It's not quite the boost in immersiveness that going from 2 to 5.1 channels provides, but it counts as a sizeable leap forward nonetheless.

If you already have an Atmos-enabled surround-sound system, you can try out 2.0.2, 2.1.1, or 2.2.2 listening by altering your AVR or pre/pro's speaker setup. And if you do check it out, don't be afraid to play with the levels of the elevation channels. When upmixing music, you can fine-tune the amount of ambience extraction that way. You might discover you really like what it does for music; I know I did.
 

https://en.wikipedia.org/wiki/Dolby_Pro_Logic#Dolby_Pro_Logic

Dolby Surround and Dolby Pro Logic decoders are similar in principle, as both use matrix technology to extract extra channels from Dolby Stereo stereo-encoded audio. The terms Dolby Stereo, Dolby Surround and Lt/Rt are all used to describe soundtracks that are matrix-encoded using this technique.[2]

Because of the limited nature of the original DPL, many consumer electronics manufacturers introduced their own processing circuitry, such as the "Jazz", "Hall", and "Stadium" modes found on most common home audio receivers. DPL II forgoes this type of processing and replaces it with simple servo (negative feedback) circuits used to derive five channels.
  • The extra channel content is extracted using the difference in spatial audio content between the two individual channels of stereo tracks or Dolby Digital-encoded 5.1-channel tracks, and is output appropriately.
  • In addition to five full-range playback channels, Pro Logic II introduced a Music mode that includes optimized channel delays and adds user controls to—for example—adjust apparent front sound stage width.
 

https://en.wikipedia.org/wiki/Quadraphonic_sound

Derived (2-2-4) formats are simple and inexpensive electronic solutions that add or extract rear "ambience" or "reverberation" sound channels from stereo records. There is no precise placement of individual instruments in the rear channels.[3]


https://en.wikipedia.org/wiki/Ambisonics

Ambisonics is a full-sphere surround sound format: in addition to the horizontal plane, it covers sound sources above and below the listener.[1]

Unlike other multichannel surround formats, its transmission channels do not carry speaker signals. Instead, they contain a speaker-independent representation of a sound field called B-format, which is then decoded to the listener's speaker setup. This extra step allows the producer to think in terms of source directions rather than loudspeaker positions, and offers the listener a considerable degree of flexibility as to the layout and number of speakers used for playback.
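For a feel of how B-format carries height independently of any speaker layout, here is a minimal first-order encode plus a naive single-speaker "projection" sample. The formulas are the traditional W/X/Y/Z conventions; real decoders optimise across the whole layout, so treat this only as an illustration.

```python
import numpy as np

def encode_bformat(mono, azimuth_deg, elevation_deg):
    """Traditional first-order B-format: W (omni, with 1/sqrt(2) gain) plus
    X, Y, Z figure-of-eight components along the front, left and up axes."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono / np.sqrt(2.0)
    x = mono * np.cos(az) * np.cos(el)
    y = mono * np.sin(az) * np.cos(el)
    z = mono * np.sin(el)
    return w, x, y, z

def project_to_speaker(w, x, y, z, azimuth_deg, elevation_deg):
    """Naive projection of the sound field onto one speaker direction,
    just to show that elevation information lives in the Z channel."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return 0.5 * (np.sqrt(2.0) * w
                  + x * np.cos(az) * np.cos(el)
                  + y * np.sin(az) * np.cos(el)
                  + z * np.sin(el))

# A source 45 deg up and to the front-left, sampled at an ear-level speaker
# and at a raised speaker in the same horizontal direction
sig = np.ones(4)
w, x, y, z = encode_bformat(sig, azimuth_deg=45, elevation_deg=45)
print(project_to_speaker(w, x, y, z, 45, 0)[0],   # ~0.85
      project_to_speaker(w, x, y, z, 45, 45)[0])  # ~1.0, raised speaker gets more
```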
 

https://en.wikipedia.org/wiki/Horseshoe_map

In the mathematics of chaos theory, a horseshoe map is any member of a class of chaotic maps of the square into itself. It is a core example in the study of dynamical systems. The map was introduced by Stephen Smale while studying the behavior of the orbits of the van der Pol oscillator. The action of the map is defined geometrically by squishing the square, then stretching the result into a long strip, and finally folding the strip into the shape of a horseshoe.

https://en.wikipedia.org/wiki/Dynamical_system

In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in a geometrical space. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, and the number of fish each springtime in a lake.
 

The Dolby 5.1 and 7.1 surround sound layouts form a horseshoe shape.



https://en.wikipedia.org/wiki/5.1_surround_sound
5.1 surround sound ("five-point one") is the common name for surround sound audio systems. 5.1 is the most commonly used layout in home theatres.[1] It uses five full bandwidth channels and one low-frequency effects channel (the "point one").[2] Dolby Digital, Dolby Pro Logic II, DTS, SDDS, and THX are all common 5.1 systems. 5.1 is also the standard surround sound audio component of digital broadcast and music.[3]

All 5.1 systems use the same speaker channels and configuration, having a front left and right, a center channel, two surround channels (left and right) and the low-frequency effects channel designed for a subwoofer.

https://en.wikipedia.org/wiki/7.1_surround_sound

7.1 surround sound is the common name for an eight-channel surround audio system commonly used in home theatre configurations. It adds two additional speakers to the more conventional six-channel (5.1) audio configuration. As with 5.1 surround sound, 7.1 surround sound positional audio uses the standard front left and right, center, and LFE (subwoofer) speaker configuration. However, whereas a 5.1 surround sound system combines both surround and rear channel effects into two channels (commonly configured in home theatre set-ups as two rear surround speakers), a 7.1 surround system splits the surround and rear channel information into four distinct channels, in which sound effects are directed to left and right surround channels, plus two rear surround channels.

In a 7.1 surround sound home theatre set-up, the surround speakers are placed to the side of the listener's position and the rear speakers are placed behind the listener.[1] In addition, with the advent of Dolby Pro Logic IIz and DTS Neo:X, 7.1 surround sound can also refer to 7.1 surround sound configurations with the addition of two front height channels positioned above the front channels or two front wide channels positioned between the front and surround channels.[2]
 

Try this simple experiment to notice how diffuse or concentrated the directionality of your 2.1 (or higher) speaker configuration can be.
  • Rotate 90 degrees to the left and right, through the speakers, to notice the differences.
    • Lateral sound resolution relies on frequencies below 1 kHz.
    • Vertical sound resolution relies on frequencies above 7 kHz.
    • The 40 dots roughly represent ten-degree intervals.
    • My R channel is two horizontal-dispersion speakers (AudioEngine HD3)
      • 32" away.
    • The L channel is a 24" Soundcore soundbar, standing upright for vertical dispersion.
    • It seems to me, on The Doors' "Riders on the Storm" in Dolby Atmos:
      • that the kick drum and bass can be made to sound very concentrated or diffuse in the R channel, as my R ear points more directly towards or away from it.
      • My guess is the strongest directionality is within a five to ten degree range.
      • The same seems to be generally true of the organ in the L channel.
      • A lot seems to pan in and out of focus around the center.
        • Especially with special effects like thunder and rain.
    • I see full 7.1-channel control as a big benefit over sound coloration.
    • It feels like I have walked from one side of the stage to the other after rotating, with closed eyes. Dexter Gordon:
[Image: rotating chair diagram with 40-dot circle for height resolution]

The vertical soundbar has a much more pleasing "smooth sound gradient", which I rate at Better (+2). I rate the horizontal dispersion of the HD3 speakers as OK (0) for sound effects like thunder. Some might find the vertical sound bar too subtle or unintelligible.
  • I don't know what degree of naturalness The Doors were trying to achieve with sound effects.
    • Heavy downpours usually have thunder claps that crack sharply from nearly directly above.
      • The sound is frightening in Colorado.
      • It can be 120 dB at 100 Hz.
    • It seems like they used rain and thunder from different sources.
      • I think this song ranges from great (at the beginning) to poor (towards the end) examples of natural sounds.
        • It seems like they were having fun with the sound effects at the end.

I evaluate music on a triad basis:
  1. Power
  2. Color
  3. Pitch

  1. Stagesound Height
  2. Stagesound Width
  3. Stagesound Depth

  1. Rhythm
  2. Melody
  3. Harmony
The -3 to +3 scale is as listed above.




[Image: diffraction at an aperture (Huygens–Fresnel)]
Sound from an array spreads less than sound from a point source, by the Huygens–Fresnel principle applied to diffraction.

https://en.wikipedia.org/wiki/Directional_sound

While a large loudspeaker is naturally more directional because of its large size, a source with equivalent directivity can be made by utilizing an array of traditional small loudspeakers, all driven together in phase. Acoustically equal to a large speaker, this creates a larger source size compared to the wavelength, and the resulting sound field is narrowed compared to that of a single small speaker. Large speaker arrays have been used in hundreds of arena sound systems to mitigate noise that would ordinarily travel to adjoining neighborhoods, as well as in other applications where some degree of directivity is helpful, such as museums or similar display applications that can tolerate large speaker dimensions.

Traditional speaker arrays can be fabricated in any shape or size, but a reduced physical dimension (relative to wavelength) will inherently sacrifice directivity in that dimension. The larger the speaker array, the more directional, and the smaller the size of the speaker array, the less directional it is. This is fundamental physics, and cannot be bypassed, even by using phased arrays or other signal processing methods. This is because the directivity pattern of any wave source is the Fourier Transform of the source function.[1] Phased array design is, however, sometimes useful for beamsteering, or for sidelobe mitigation, but making these compromises necessarily reduces directivity.
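The "directivity is the Fourier transform of the source" statement can be checked with the textbook result for an ideal uniform line source, whose far-field pattern is a sinc of (L/λ)·sin θ. A small sketch for a 0.6 m column (roughly the stacked 2.2.3 idea), treating the column as continuous and ignoring driver spacing:

```python
import numpy as np

C = 343.0  # m/s, speed of sound

def line_source_pattern(length_m, freq_hz, angles_deg):
    """Far-field amplitude of an ideal uniform line source (sinc beam)."""
    lam = C / freq_hz
    u = (length_m / lam) * np.sin(np.radians(angles_deg))
    return np.abs(np.sinc(u))            # numpy sinc(x) = sin(pi*x)/(pi*x)

angles = np.linspace(-90, 90, 181)
for f in (200, 1000, 5000):
    p = line_source_pattern(0.6, f, angles)
    kept = angles[p > 0.5]                # roughly the -6 dB region
    print(f"{f:5d} Hz: -6 dB beamwidth about {kept.max() - kept.min():5.1f} deg")
```

The beam narrows as frequency rises, which is the trade-off this thread keeps running into: a tall column controls vertical spread only where it is long compared with the wavelength.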
 
Can a super tweeter improve audio height resolution?

Audio height resolution requires frequency content above about 7 kHz.

https://www.aperionaudio.com/blogs/aperion-audio-blog/an-introduction-of-the-super-tweeter-speaker

https://www.aperionaudio.com/produc...-motion-ribbon-super-tweeter-speaker-amt-pair

Frequency Response: 8 kHz - 40 kHz. Crossover Points: 8K, 10K, 12K, 14K, 16K & off.

https://www.parts-express.com/Dayto...fYKCjrftjt2yfOqGswFcr4cUcR-WeCKYaArAQEALw_wcB

Key Features
  • Superb resolution and detail for the most discerning audiophile
  • Wide horizontal and narrow vertical dispersion patterns
  • Perfect for use in line arrays
  • 3 kHz recommended crossover frequency at 12 dB per octave
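The "12 dB per octave" figure corresponds to a second-order high-pass. A minimal sketch of feeding a super tweeter through such a crossover, assuming a hypothetical 8 kHz corner (matching the lowest crossover point listed above) and scipy's standard Butterworth design:

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfreqz

fs = 96000            # sample rate comfortably above an 8 kHz corner
fc = 8000.0           # assumed crossover point for the super tweeter

# Second-order Butterworth high-pass ~ a 12 dB/octave slope
sos = butter(2, fc, btype="highpass", fs=fs, output="sos")

# Verify the slope: about -3 dB at the corner, about -12 dB one octave below
w, h = sosfreqz(sos, worN=1 << 14, fs=fs)
def level_db(freq_hz):
    return 20 * np.log10(np.abs(h[np.argmin(np.abs(w - freq_hz))]))
print(f"{fc:.0f} Hz: {level_db(fc):5.1f} dB, {fc/2:.0f} Hz: {level_db(fc/2):5.1f} dB")

# Apply it to programme material before the super tweeter amplifier
x = np.random.randn(fs)               # stand-in signal
tweeter_feed = sosfilt(sos, x)
```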
 