
Totem Acoustics Rainmaker Speaker Review

Rate this speaker:

  • 1. Poor (headless panther)

    Votes: 171 68.7%
  • 2. Not terrible (postman panther)

    Votes: 69 27.7%
  • 3. Fine (happy panther)

    Votes: 6 2.4%
  • 4. Great (golfing panther)

    Votes: 3 1.2%

  • Total voters
    249
+1 ….. and still people are comparing DACs or amps using in-room recordings of speaker sound on YouTube.

Or any speaker sound recorded on YouTube ?

And as for actual recordings, it can’t be stated enough that microphones are not ears :) You cannot really document an acoustic event the way a human mind hears that same event. Great production with the tools of the trade makes enjoyable ”fakes” that appear realistic over speakers; it’s awesome that it actually works as well as it does.

But science knows some things about the human ear/brain system, I think, hence a question.

Is there any plugin or effect that tries to mimic human hearing and applies that filtering to a microphone signal?
For headphones there are dummy-head recordings, which smartly move the process to your own personal filter in your brain.

The simplest attempt to make microphones pick up the ”essential” stuff and disregard the ”unwanted” is probably the different coverage patterns, like omni or cardioid. By the way, wasn’t Blumlein an early stereo recording pioneer who invented a microphone placement technique, among many other things? :)

Just wondering if anyone has taken further steps, like recording omni and applying the polar pattern after the fact, though maybe the true directional information is lost...
Certainly there are microphones that can be given any pattern after the fact, including the direction in which the pattern is aimed. Blumlein crossed figure-8s allow this to a limited extent. Something like the tetrahedral Soundfield-type mics allows more choice after the fact of recording.
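The "pattern after the fact" idea can be sketched numerically: from first-order B-format components (W = omni, X/Y = figure-8 pairs) any first-order pattern can be steered in any horizontal direction after recording. A minimal illustrative sketch, ignoring the conventional -3 dB scaling of W and the vertical Z channel (all names here are my own):

```python
import numpy as np

def virtual_mic(W, X, Y, azimuth_deg, pattern=0.5):
    """Synthesize a first-order virtual microphone from horizontal
    B-format signals. pattern: 0.0 = omni, 0.5 = cardioid, 1.0 = figure-8."""
    az = np.radians(azimuth_deg)
    # First-order pickup: (1 - p) * omni + p * (figure-8 steered to az)
    return (1.0 - pattern) * W + pattern * (np.cos(az) * X + np.sin(az) * Y)

# Idealized encoding of a tone arriving from azimuth 90 deg (the +Y axis):
n = np.arange(1000)
s = np.sin(2 * np.pi * 440 * n / 48000)
W, X, Y = s, 0.0 * s, s

toward = virtual_mic(W, X, Y, 90.0)  # cardioid aimed at the source
away = virtual_mic(W, X, Y, 0.0)     # cardioid aimed 90 deg off the source
```

For a source 90 degrees off-axis, a cardioid picks up exactly half the on-axis amplitude, which the sketch reproduces: `toward` comes out at twice the level of `away`.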

I don't know how the filtering is done by our brain and ears to mostly ignore early reflections. Maybe @j_j knows the answer to this.
 
I don't know how the filtering is done by our brain and ears to mostly ignore early reflections. Maybe @j_j knows the answer to this.

A big part is cochlear dynamics. When a sound onset happens, it causes depolarization of some of the outer hair cells. This, to some extent, reduces the loudness (a perceptual term, remember, not SPL) of the signal that arrives afterwards. So, to some extent, "leading edge wins". In addition, higher parts of the CNS seem to somehow identify places where the leading edge lines up across frequency, providing some more reduction of "what comes immediately after". This inhibition happens at signal onset (it takes a millisecond for the inner hair cell to recharge) and then the outer hair cell has depolarized, and you have a few more milliseconds of "less gain" before the system, as it were, resets.

This is part of the system. I refuse to even speculate on how the CNS handles this.
 
I would add what does work for reviews: Erin's Audio Corner on YouTube compares a speaker's in-room behaviour against the original signal, using a convolution filter on the test clip to produce the "speaker" version, or some such. I do like listening to the speaker-vs-original clip tests there :)
 
I would add what does work for reviews: Erin's Audio Corner on YouTube compares a speaker's in-room behaviour against the original signal, using a convolution filter on the test clip to produce the "speaker" version, or some such. I do like listening to the speaker-vs-original clip tests there :)

How do you deal with direct vs. indirect sound in that case?
 
I don't know how the filtering is done by our brain and ears to mostly ignore early reflections. Maybe @j_j knows the answer to this.


We can see more of what j_j mentioned in the presentation.

When you record speakers and then play back the recording, you hear the reflections clearly, because in the recording they are coming from the speaker itself and our ears cannot filter them out since they are in the source. It is quite amazing that we don't hear those reflections normally, as they are substantial.
I'm not sure if it's exactly in the same context as what you wrote, but I remember seeing something in one of David Griesinger's papers discussing the difference between hearing something live and listening to a recording.
(I can’t seem to find which paper it was, though.)
It mentioned that the reflections in the recording sounded excessive and reverberant, but when heard in the actual space, it didn’t sound that way at all.
 
It mentioned that the reflections in the recording sounded excessive and reverberant, but when heard in the actual space, it didn’t sound that way at all.

This is true only if the complexity of the reverberation coming from at least 5, better 7 or more, sources is not sufficient. If it is sufficient (and of the proper decorrelation and complexity) then you get the same sensation as the actual space.

Even with 5 microphones (see the PSR work from the late 1990s at AT&T Research) this works pretty well.
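One common way to obtain the "proper decorrelation" across several reproduction sources is to filter one reverberant signal through different random-phase all-pass FIRs. This is a rough sketch of that general idea, not the actual method of the PSR work, whose details I don't have:

```python
import numpy as np

def decorrelate(x, n_channels, seed=0, fir_len=2048):
    """Spread one signal across several loudspeaker feeds by convolving
    with short random-phase all-pass FIRs (unit-magnitude spectrum, so
    each channel's energy and timbre are nominally preserved)."""
    rng = np.random.default_rng(seed)
    outs = []
    for _ in range(n_channels):
        # Build a conjugate-symmetric unit-magnitude spectrum -> real FIR
        phase = rng.uniform(-np.pi, np.pi, fir_len // 2 - 1)
        spec = np.concatenate(([1.0], np.exp(1j * phase), [1.0],
                               np.exp(-1j * phase[::-1])))
        h = np.real(np.fft.ifft(spec))
        outs.append(np.convolve(x, h))
    return outs
```

Because the filters have random, independent phase responses, the channel feeds end up nearly uncorrelated even though they carry the same underlying signal.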
 
A big part is cochlear dynamics. When a sound onset happens, it causes depolarization of some of the outer hair cells. This, to some extent, reduces the loudness (a perceptual term, remember, not SPL) of the signal that arrives afterwards. So, to some extent, "leading edge wins". In addition, higher parts of the CNS seem to somehow identify places where the leading edge lines up across frequency, providing some more reduction of "what comes immediately after". This inhibition happens at signal onset (it takes a millisecond for the inner hair cell to recharge) and then the outer hair cell has depolarized, and you have a few more milliseconds of "less gain" before the system, as it were, resets.

This is part of the system. I refuse to even speculate on how the CNS handles this.
"leading edge wins". JJ is right. Put into acoustical terms, the first arriving, direct, sound has perceptual advantages over later arrivals. Evidence of this exists in the very first of my JAES publications, back in 1985/86 in which double-blind multiple-loudspeaker comparison results clearly showed that the loudspeakers receiving the highest sound quality ratings had the flattest, smoothest anechoic on-axis frequency responses. Those that were also well behaved off axis received even higher ratings. In the hundreds of tests done in subsequent decades this has just been reinforced. In Sean Olive's subjective/objective correlations it was statistically confirmed, and then, in a clever separate test, trained listeners were asked to draw the spectrum of the "sound" they heard from various loudspeakers. They drew the direct sound spectra. All of this is in AES papers and summarized in my books. More in the upcoming 4th edition.

This is why steady-state "room curves" are not definitive descriptors of what we hear above the transition/Schroeder frequency.
 
It mentioned that the reflections in the recording sounded excessive and reverberant, but when heard in the actual space, it didn’t sound that way at all.
The principal factor is binaural hearing, which allows us to perceptually discriminate between the first arriving sound from one direction, and later arriving sounds from different directions. In stereo recordings, microphones don't capture this information in the form required by the brain - binaural (dummy head) recordings are needed. These days good binaural recordings reproduced through "neutral" headphones equipped with head-position tracking can be amazingly realistic. Stereo is not capable of delivering the necessary long-delayed sounds of large spaces from the right directions - multichannel sound is required.

The long-delayed reflections in recordings tend to perceptually dominate the local reflections in the listening room, an effect that is more convincing the more channels there are to convey the large-space information. This is so powerful an effect that when multiple channels are simultaneously active listeners are less aware of resonant colorations in loudspeakers. This is why monophonic evaluations are more revealing of loudspeaker problems. The fact that there are periods of hard-panned sounds in stereo and multichannel recordings give us opportunities to hear the "raw" loudspeakers, so mono evaluations are still the place to start. Loudspeakers that win mono tests also win stereo and multichannel tests, but the reverse is not necessarily true. Fortunately, nowadays it is possible to recognize timbrally neutral loudspeakers in a comprehensive set of anechoic measurements, such as the ANSI/CTA 2034 spinorama presentations that are increasingly available, thanks to Amir, Erin, and others.
 
This is true only if the complexity of the reverberation coming from at least 5, better 7 or more, sources, is not sufficient. If it is sufficient (and of the proper decorrelation and complexity) then you get the same sensation as the actual space.

Even with 5 microphones (See the PSR work from late 1990's from AT&T Research) this works pretty well.
Yes. The content I saw was probably related to David's explanation about reflections lacking complexity, as you mentioned. Searching through the PDFs of materials from you and many other excellent researchers and trying to recall what I've seen feels like looking for a needle in a haystack. I am always grateful for your contributions.

The principal factor is binaural hearing, which allows us to perceptually discriminate between the first arriving sound from one direction, and later arriving sounds from different directions. In stereo recordings, microphones don't capture this information in the form required by the brain - binaural (dummy head) recordings are needed. These days good binaural recordings reproduced through "neutral" headphones equipped with head-position tracking can be amazingly realistic. Stereo is not capable of delivering the necessary long-delayed sounds of large spaces from the right directions - multichannel sound is required.
I can't believe I received a reply from Toole! I don't think I'll be able to sleep tonight. Thank you for your thoughtful response. I must apologize, as the brief comment I made (the one I intended to quote but can't remember) might have been somewhat ambiguous and led to some misunderstanding. I also fully agree with everything you've said (especially regarding the lack of spatiality in stereo). I enjoy binaural virtualization (personalized, custom-IEM based, accounting for a neutral speaker-plus-room response and pinna coloration as well), drawing from the materials of j_j, David, and yourself, Toole.
 
Yes. The content I saw was probably related to David's explanation about reflections lacking complexity, as you mentioned. Searching through the PDFs of materials from you and many other excellent researchers and trying to recall what I've seen feels like looking for a needle in a haystack. I am always grateful for your contributions.


I can't believe I received a reply from Toole! I don't think I'll be able to sleep tonight. Thank you for your thoughtful response. I must apologize, as the brief comment I made (the one I intended to quote but can't remember) might have been somewhat ambiguous and led to some misunderstanding. I also fully agree with everything you've said (especially regarding the lack of spatiality in stereo). I enjoy binaural virtualization (personalized, custom-IEM based, accounting for a neutral speaker-plus-room response and pinna coloration as well), drawing from the materials of j_j, David, and yourself, Toole.
Relax, enjoy your sleep :). Related to this discussion are some facts that tend to be oversimplified. A truly diffuse sound field is an academic concept. Even reverberation chambers struggle to approximate one, and when samples of absorbing material are introduced for measurement, whatever diffusion existed is degraded. Concert halls have "relatively" diffuse sound fields, some more than others, and domestic listening rooms with typical reverb times of 0.5 s and less are incapable of supporting diffuse sound fields. In typical listening rooms the sound arriving at a listener from typical forward-firing loudspeakers is absolutely dominated by a few early reflections, the direct sound being physically dominant only at the highest frequencies. We can closely predict room curves from anechoic data as proof, and it is not sound power (energy, for a diffuse sound field) that dominates.
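The point that room curves can be closely predicted from anechoic data can be illustrated with a spinorama-style weighting: an estimated in-room curve formed from the listening-window, early-reflections, and sound-power curves. A sketch, assuming the commonly quoted CTA-2034 proportions of 12%/44%/44% combined as energy (the weights and the energy-domain summation are my assumption here, not a quote from the post):

```python
import numpy as np

def estimated_in_room(lw_db, er_db, sp_db, w=(0.12, 0.44, 0.44)):
    """Estimate an in-room curve from anechoic spinorama curves.
    lw_db: listening window, er_db: early reflections, sp_db: sound power,
    all in dB on the same frequency grid. Curves are weighted and summed
    as energy, then converted back to dB."""
    lw, er, sp = (10.0 ** (np.asarray(c) / 10.0) for c in (lw_db, er_db, sp_db))
    return 10.0 * np.log10(w[0] * lw + w[1] * er + w[2] * sp)

# Sanity check: flat curves in, flat curve out (the weights sum to 1)
flat = np.zeros(8)
eir = estimated_in_room(flat, flat, flat)
```

With real spin data, the early-reflections and sound-power terms dominate the result, which is the quantitative form of "a few early reflections dominate" above the lowest frequencies.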

Perceptually, binaural hearing can "hear through" rooms to a substantial extent, attaching significant importance to the first arriving sound. It is a remarkable experience to hear synthesized spaces. In an anechoic room a direct sound followed by only a single reflection from a different direction, say 60 deg, is perceived as "spacious". In measurements a microphone tells us it is a horrible comb filter. More reflections of the right amplitude, timing and direction progressively build a recognizable room. A diffuse sound field is not necessary for the perception of envelopment - the sense of being in a large space - which audiophiles desperately seek.
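The "horrible comb filter" the microphone reports for a single reflection is easy to compute: the direct sound plus one delayed, attenuated copy. A small sketch (the gain and delay values are arbitrary examples of my own):

```python
import numpy as np

# Direct sound plus one reflection: H(f) = 1 + g * exp(-j*2*pi*f*tau).
# A measurement microphone reports the full comb; binaural hearing
# largely "hears through" it and reports "spacious" instead.
g, tau = 0.7, 0.004                      # example reflection gain, 4 ms delay
f = np.linspace(20, 2000, 5000)          # frequency grid, Hz
H = np.abs(1 + g * np.exp(-2j * np.pi * f * tau))

# Peaks reach 1 + g, notches drop to 1 - g, spaced 1/tau = 250 Hz apart
print(f"peaks {20 * np.log10(1 + g):+.1f} dB, "
      f"notches {20 * np.log10(1 - g):+.1f} dB")
```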

As shown in my books, serious experiments show that the conventional 5-channel surround arrangement can closely approximate the illusions generated by a much larger number of channels - BUT only for a single listener. If the experience is to be shared more channels are necessary. Then one can walk around the room and the illusion does not disappear. Fortunately, most perfectionist audiophiles listen alone. The rest of the world, and me much of the time, is content to enjoy the musical content.
 
I would argue that "not definitive" is all that's needed; no qualification is necessary.
Sorry JJ, but in small rooms the resonances dominate bass quality, and it turns out that prominent resonances behave in a minimum-phase manner, meaning that the steady-state frequency response in that frequency range is a reliable indicator of the potential audibility of those resonances, and of the frequency and Q of filters to attenuate them - but only for a single listener. Multiple subs can actively manipulate room modes, which is more challenging, but reduces seat-to-seat variations. Todd Welti elaborates on this in the upcoming 4th edition, but the basics are in earlier editions.
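The single-listener EQ described here is typically a parametric (peaking) filter matched to the mode's measured frequency and Q. As an illustrative sketch, here are the standard RBJ Audio EQ Cookbook biquad coefficients applied to a hypothetical 45 Hz mode (the mode parameters are invented for the example):

```python
import numpy as np

def peaking_eq(fs, f0, q, gain_db):
    """Biquad peaking-filter coefficients (RBJ Audio EQ Cookbook form),
    e.g. to attenuate a room mode at f0 with the matching Q - valid,
    as noted above, for a single listening position."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]  # normalized so a[0] == 1

# Cut a hypothetical 45 Hz mode of Q = 5 by 8 dB at fs = 48 kHz:
b, a = peaking_eq(48000, 45.0, 5.0, -8.0)
```

Evaluating the filter's response at 45 Hz gives exactly -8 dB, while frequencies away from the mode are essentially untouched.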
 
Sorry JJ, but in small rooms the resonances dominate bass quality, and it turns out that prominent resonances behave in a minimum-phase manner, meaning that the steady-state frequency response in that frequency range is a reliable indicator of the potential audibility of those resonances, and of the frequency and Q of filters to attenuate them - but only for a single listener. Multiple subs can actively manipulate room modes, which is more challenging, but reduces seat-to-seat variations. Todd Welti elaborates on this in the upcoming 4th edition, but the basics are in earlier editions.

Well, in fact, while you can't 'image' below 90 Hz, you can get intense changes in envelopment. This absolutely requires more than one sub.
That's way, way below any transition frequency, and you can't claim "bass quality" until you make that listening room sound as "wide" as the church you started in.

Sorry, but we have to disagree, and there are discussions hereabouts on this board detailing others' experiences with it.

Reducing seat to seat variations is only the start.
 
Relax, enjoy your sleep
Thank you for your kind response. :)

Perceptually, binaural hearing can "hear through" rooms to a substantial extent, attaching significant importance to the first arriving sound. It is a remarkable experience to hear synthesized spaces. In an anechoic room a direct sound followed by only a single reflection from a different direction, say 60 deg, is perceived as "spacious". In measurements a microphone tells us it is a horrible comb filter. More reflections of the right amplitude, timing and direction progressively build a recognizable room. A diffuse sound field is not necessary for the perception of envelopment - the sense of being in a large space - which audiophiles desperately seek.
Thanks again. Based on that, I have experimented with adjusting the number and angles of reflections, modifying the shape of the measured room impulse's ITDG or ETC, and intentionally applying slight decorrelation, among other things. It was quite an interesting test.



In particular, by using the direct sound from the front stereo speakers and the reflected sounds from the room, and referring to David's ideal reverb shape, I was able to observe the difference that occurs when modifying the ITDG/ETC, extending from the foreground to the background. While this is just a small test done casually as a hobbyist, it allowed me to understand and relate to the original purpose of multichannel systems and upmixers, reflections resulting from stereo and radiation characteristics, and the perceptual sensations based on time differences.

and then, in a clever separate test, trained listeners were asked to draw the spectrum of the "sound" they heard from various loudspeakers. They drew the direct sound spectra. All of this is in AES papers and summarized in my books. More in the upcoming 4th edition.
I am very excited about the results of such tests.


Reducing seat to seat variations is only the start.
I agree. And your words carry a similar nuance to a statement made by Thomas Lund, while also providing a powerful idea: "Disregard for inter-aural time domain coherency at low frequency. In case LF inter-aural time and magnitude differences have been recorded across channels, and made it safely through a reproduction chain, it is such a pity to kill Auditory Envelopment (AE) at the last stage, by using mono sub(s) with bookshelf/nearfield monitors. That’s game over before even started."
 
Well, in fact, while you can't 'image' below 90 Hz, you can get intense changes in envelopment. This absolutely requires more than one sub.
That's way, way below any transition frequency, and you can't claim "bass quality" until you make that listening room sound as "wide" as the church you started in.

Sorry, but we have to disagree, and there are discussions thereabouts this board detailing others' experiences with it.

Reducing seat to seat variations is only the start.
I don't think we disagree, more like we just have different priorities. In decades of double-blind subjective evaluations it became very clear early on (1985/86) that bass extension, by itself, correlated with sound quality ratings. No real surprise, I suppose. Then, many years later, Sean Olive did his subjective/objective correlation study, and it revealed that bass extension and quality accounted for about 30% of the overall sound quality rating. These evaluations were of loudspeaker sound quality performance, and were therefore done in mono - no spatial component was in the source material.

Small rooms have resonances that modify bass sound quality in ways that are not subtle, and if one wishes to share the audio experience, as in home theater, the seat-to-seat variations experienced in the standing-wave patterns are a major problem. Equalization definitely helps because prominent standing waves behave as minimum-phase phenomena, but it cannot satisfy more than one listener with any certainty. Bass management became part of multichannel sound, which aided our ability to address the resonance/standing wave problem. The realization that multiple subwoofers could be used to manipulate standing waves in predictable ways was a powerful tool in attenuating both the seat-to-seat variations and the resonance problems because EQ then benefitted multiple listeners. This was progress on the sound quality front, but envelopment was not considered.

But, there were voices saying that "stereo bass" was lost, which is true. The next question is "how important is stereo bass?". Within Harman, David Griesinger was an advocate, arguing, correctly, that in concert halls the dimensions allow for "directionality" in long-wavelength, low-frequency sounds and that this could/would/might be a factor in perceived envelopment. The long wavelengths (20 ft/6 m at 50 Hz) compared to the spacing of the ears means that the effect is likely to be subtle compared to binaural effects at higher frequencies, but human hearing is good at detecting subtleties. We mounted a demonstration, set up by David, in which we listened to a variety of stereo vs mono subwoofer comparisons using a wide variety of music and digitally contrived signals that should have been good at revealing differences. This was done at several stereo-to-mono crossover frequencies, using different layouts, with the auditioning being done in a largish living room. It was definitely a serious effort.
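The wavelength argument can be made concrete with a back-of-envelope calculation: the ear spacing as a fraction of a wavelength sets the maximum interaural phase difference available for a sound arriving from the side (the ~0.17 m ear spacing is an assumed round number):

```python
# Interaural phase difference vs frequency for a laterally arriving sound.
c = 343.0            # speed of sound, m/s
ear_spacing = 0.17   # assumed round-number inter-ear distance, m

results = {}
for f in (50.0, 80.0, 500.0):
    wavelength = c / f
    results[f] = 360.0 * ear_spacing / wavelength   # degrees of phase
    print(f"{f:5.0f} Hz: wavelength {wavelength:5.2f} m, "
          f"max interaural phase ~{results[f]:5.1f} deg")
```

At 50 Hz the maximum interaural phase shift is only about 9 degrees of a cycle, versus nearly 90 degrees at 500 Hz, which is the quantitative sense in which low-bass directional effects are "likely to be subtle" compared to binaural effects at higher frequencies.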

Differences were heard, but they were quite subtle. Differences in sound quality were expected, because the excitation of room modes is quite different when two subs are operating in mono or stereo - i.e. receiving the same or different signals. These differences would depend on the stereo separation at bass frequencies in the program material (LPs don't qualify - all low bass is mono). But here we were making an effort to focus on differences in "space/envelopment", to the extent that such a perceptual differentiation is possible. The differences we could report seemed to fade to insignificance at a crossover frequency of about 80 Hz, a figure supported by other investigations described in Section 8.4 of the 3rd edition, provocatively entitled "Stereo Bass: little ado about even less", with apologies to William Shakespeare, if indeed he wrote the words. The conclusion was that the necessary spatial information exists at frequencies above about 80 Hz, and therefore it is present in bass managed systems. It is a stereo "upper-bass" effect, not a stereo low-bass effect. There is also a subtle problem in A vs B comparisons - hearing a "difference" is not declaring superiority of one option - here we were content to hear a difference.

Is that a definitive statement? Probably not, but it strongly suggests that whatever potential there is for enhancing envelopment by capturing directional bass cues in large spaces and reproducing them in small rooms must be achieved while at the same time reducing seat-to-seat variations and room resonances, both of which are easily audible. Because listening rooms are not standardized, any wavefront reconstruction exercise is clearly a custom listening-room, multichannel-audio-system - i.e. expensive - solution. Present indications in the audio industry are that it is not likely to be commercially successful. But it is an interesting academic exercise.

Recordings, including classical recordings, are mixed and mastered in recording control and mastering (small) rooms, in an industry that seems proudly to ignore science and lacks even basic standardization. It seems to me that there is a "circle of confusion" problem to be added to the list of challenges facing listener satisfaction.
 
Mr. Toole, very much appreciate and enjoy your contributions to these discussions. Having read through my copy of your third edition several times already, I'm very much intrigued by your mentions of the upcoming 4th edition. If I may be so bold, could you give us an approximate availability for this version? I realize there is a portion of the effort out of your hands, but count me as being at the front of the line when it hits the bookstores.
 
But here we were making an effort to focus on differences in "space/envelopment", to the extent that such a perceptual differentiation is possible. The differences we could report seemed to fade to insignificance at a crossover frequency of about 80 Hz, a figure supported by other investigations described in Section 8.4 of the 3rd edition, provocatively entitled "Stereo Bass: little ado about even less", with apologies to William Shakespeare, if indeed he wrote the words. The conclusion was that the necessary spatial information exists at frequencies above about 80 Hz, and therefore it is present in bass managed systems. It is a stereo "upper-bass" effect, not a stereo low-bass effect. There is also a subtle problem in A vs B comparisons - hearing a "difference" is not declaring superiority of one option - here we were content to hear a difference.

Question:

Many audiophiles report that adding subwoofers often aids the perception of a larger acoustic space being reproduced, if that is in the recording - for instance, a recording made in a large hall in which the real hall acoustics play a part. The idea is that the size information of the acoustic space can also be found in surprisingly low frequencies.
So even if you’re listening to just a cello being played in a big-hall recording, with subwoofers the sensation of the hall acoustic becomes more convincingly large or enveloping.

Given what you’ve posted above, if this is the case then it wouldn’t necessarily be due to stereo reproduction of those lower frequencies, but perhaps simply due to restoring them, even via a mono subwoofer?

So I guess it’s a general question about what has been studied in terms of low frequency contributions to the apparent scale of an acoustic space.

(I mentioned before I do sound design for movies and TV, so I am actually dealing with and creating “ room tones” of various sizes all the time. If I want to reduce the apparent size of a room tone, EQing out bass rumble is one of the first ways to do it… and essentially the more bass I take out the smaller the room sound generally becomes).
 
If I want to reduce the apparent size of a room tone, EQing out bass rumble is one of the first ways to do it… and essentially the more bass I take out the smaller the room sound generally becomes).
If you deliberately increase rumble (e.g. from another large space) into the recording of a small space, does the room apparently get bigger?
 
If you deliberately increase rumble (e.g. from another large space) into the recording of a small space, does the room apparently get bigger?

Depends on the spatial distribution of the "bigger room rumble". If it's very stationary as you move around, it just sounds muddy.
 