
I'm developing a new kind of reverb technology

Tomi Wai Lee

Member · Joined Dec 8, 2024 · Messages: 5 · Likes: 15
Hey everyone, I thought this might be a good place to share this because you guys know very well how a speaker's directivity interacts with the room. This phenomenon is extremely important for convolution reverb, which is the most realistic reverb technique so far, yet it is overlooked by many, sometimes even by the people creating the reverbs. There are a couple of plugins out there (East West Spaces, Spitfire Air Studio Reverb...) where the devs chose ATC monitors, with their inconsistent off-axis response, to capture the IRs.

The issue with convolution reverb is that the sound of the speaker used to capture the IR is all over the reverb. Sometimes that's okay, but when you try to add reverb to instruments whose directivity is very different from a speaker's, it doesn't sound natural.
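
To make that concrete, here's a toy Python sketch (purely illustrative, not my actual measurement or rendering chain): the IR you capture with a sweep already contains the measurement speaker's own response, so convolution applies both the speaker and the room to the dry signal.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy illustration: the IR measured with a speaker is effectively
# speaker_response (*) room_response, so the speaker is baked into the reverb.
fs = 48000
rng = np.random.default_rng(0)

speaker_ir = rng.standard_normal(256) * np.exp(-np.arange(256) / 40.0)   # stand-in for a non-flat speaker response
room_ir = rng.standard_normal(fs) * np.exp(-np.arange(fs) / (0.4 * fs))  # stand-in for a decaying room response
measured_ir = fftconvolve(speaker_ir, room_ir)                           # what the sweep measurement actually captures

dry = rng.standard_normal(2 * fs)                 # placeholder for a dry instrument track
wet = fftconvolve(dry, measured_ir)[: len(dry)]   # convolution reverb: the speaker is part of the result
```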

So I began developing IRs with an advanced acoustical simulation technique known as wave-based simulation, so that I can recreate the reverb much closer to how musical instruments actually create it. And to my ears, it sounds great! Feel free to check out the little video presentation I made about it: I talk about how it works in the first half, then I show some demos and compare it to existing convolution reverbs:
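
For those wondering what "wave-based" means in practice: instead of tracing rays, you solve the wave equation on a grid. The toy 2D sketch below shows only the core leapfrog update with a point source and a single receiver; the real solver is 3D, with proper boundary conditions, material properties and source directivity.

```python
import numpy as np

# Minimal 2D FDTD toy for the scalar wave equation (illustration only).
c, dx = 343.0, 0.05                  # speed of sound (m/s), grid spacing (m)
dt = 0.99 * dx / (c * np.sqrt(2))    # time step within the 2D stability (CFL) limit
nx, ny, nt = 200, 200, 2000
coef = (c * dt / dx) ** 2

p_prev = np.zeros((nx, ny))
p = np.zeros((nx, ny))
p[nx // 2, ny // 2] = 1.0            # impulsive point source in the middle of the grid

ir = []                              # pressure recorded at a "microphone" cell
for _ in range(nt):
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p)   # discrete Laplacian
    p_next = 2.0 * p - p_prev + coef * lap                   # leapfrog update in time
    p_prev, p = p, p_next
    ir.append(p[30, 30])             # np.roll means periodic edges, kept only for brevity
```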


I'm planning on releasing a plugin in the future, but I can make custom IRs right now for anyone who wants them: https://tomiwailee.com/wave-based-reverb

Tell me what you guys think about it!
 
A for effort, but methinks there are a lot of DRM issues in this video. Lots of clips and screenshots from all over the interweb. Hmmmm.....
 
Welcome to ASR and thank you for sharing information about your interesting project!

Out of curiosity: what is the minimum size of objects that your simulation takes into account? Does it handle objects like chairs, or the baroque sculptures typically found in prestigious concert halls, too?

From my limited knowledge of artificial reverberation: convolution reverb is not considered ideal for long reverberation times like those of large halls, since it is static and lacks the fluctuation that exists in real venues due to the movement of air. Do you add this fluctuation somehow?
 
Wow, I also give him an A for effort, and another A for putting together a great demo with very good visuals for what he is trying to show. I also get the idea he likes playing that keyboard. So, overall an A from me. I give the mosquito an A+ as it gave its life for the video; now that is dedication! I really enjoyed watching that video. :)
 
I have been contemplating this subject for quite some time, as I have been active in the recording and broadcast industry for about 20 years, mostly studio work and dialog for television, with some work on location recording for film.

Adding reverb after the fact, that is, to an already recorded source, is a purely creative decision.
One wants to enhance the recorded source, whether by adding a purely imagined representation of a space or an approximation of a real space as one has perceived it.
Nobody can have visited all the rooms in which sounds were and are produced and allowed to reverberate, and not many people could reliably identify even a handful of such spaces by ear.

There is a reason certain equipment and devices for creating reverb remain in production. The spring reverb tanks in the video in this post, plates, gold foils, and vintage units like the venerable EMT 251 (lovingly called R2D2), as well as Lexicon reverb units like the 224 or 480, are sought after and found in any studio worth its salt, and so are others of that quality. Not because they produce the most realistic representation of any known space, but because they produce the desired effect.

I feel that, as laudable as the OP's quest is, it is putting the cart before the horse. Whatever your method of recording is, apart from recording the real instrument in the real space, you are producing artificial facsimiles of sounds, and you add artificial facsimiles of reverberation to them.
In other words, all such efforts, including even the recording of an actual instrument in an actual space, are always approximations and that's that.

A recording artist, and to an extent the recording engineer helping the process along, create in a sense a new reality, one that was not in existence before.
That is the beauty of it, and that is the reason so many talented and smart people develop new methods and refine existing ones. To create anew.

Undoubtedly the OP's quest will add a new way of adding value to a recording, and I wish him luck.
 
Out of curiosity: what is the minimum size of objects that your simulation takes into account? Does it handle objects like chairs, or the baroque sculptures typically found in prestigious concert halls, too?
The spatial accuracy of the simulation is around 5 mm, so it's pretty good in this respect; the real problem is designing the 3D model that accurately.

For example, I simulated the presence of an audience when doing the Suntory Hall renders. Adding a high-poly, acoustically accurate human model and then duplicating it 2,000 times is just way too complicated and time-consuming, and it would be hard for me to check that the simulation is running correctly. So what I did was add simplified shapes with adjusted acoustical properties (based on calculations and manual tuning by ear) so that they behave similarly to an audience. It's definitely a simplification in the simulation, but it still sounds great.
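
For a rough sense of what that kind of resolution buys you (just the usual rule of thumb, and only if the 5 mm figure is read as the simulation grid spacing): the spacing limits the bandwidth you can trust from a wave-based simulation.

```python
c = 343.0    # speed of sound in air (m/s)
dx = 0.005   # 5 mm grid spacing
ppw = 10     # points per wavelength; a common rule of thumb for keeping dispersion error low

f_max = c / (ppw * dx)
print(f"trustworthy bandwidth up to roughly {f_max:.0f} Hz")   # ~6860 Hz with these illustrative numbers
```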
 
Got it. What about my second question, concerning fluctuation in long reverb tails? Any plans to add this?
 
Oh yeah, sorry, I forgot to respond to that! I think this type of fluctuation from thermal noise and air currents is quite minimal. I don't have the exact numbers, and to my knowledge it hasn't been quantified against human audibility thresholds. But when listening to comparisons of IRs vs. real speaker playback, I don't hear such effects.

What can be more impactful is the movement of the musicians. From what I've experimented with, it's usually not relevant unless the microphone is quite close, and for now I don't plan to build IRs for close mics into my plugin. But if it ever needs to be done, it could be reproduced to some extent: since the early reflections can be fully separated from the diffuse reverb tail, it's possible to apply modulation only to the ERs without affecting the tail, which is closer to what happens in real life when a musician moves.
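
If I ever go down that road, the basic structure would be something like this sketch (toy code, not plugin code; the split time and modulation settings are made up):

```python
import numpy as np
from scipy.signal import fftconvolve

def split_ir(ir, fs, t_split=0.08):
    # Toy split into early reflections and diffuse tail at t_split seconds,
    # zero-padded so that er_ir + tail_ir still equals the full IR.
    n = int(t_split * fs)
    er_ir = np.concatenate([ir[:n], np.zeros(len(ir) - n)])
    tail_ir = np.concatenate([np.zeros(n), ir[n:]])
    return er_ir, tail_ir

def slow_modulated_delay(x, fs, depth_ms=2.0, rate_hz=0.3):
    # Very shallow, very slow delay modulation, standing in for the changing
    # early-reflection path lengths of a moving musician (illustrative parameters).
    n = np.arange(len(x), dtype=float)
    delay = 0.5 * (depth_ms / 1000.0) * fs * (1.0 + np.sin(2 * np.pi * rate_hz * n / fs))
    idx = np.clip(n - delay, 0, len(x) - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(x) - 1)
    frac = idx - lo
    return (1.0 - frac) * x[lo] + frac * x[hi]   # linear-interpolated fractional delay

def render(dry, ir, fs):
    er_ir, tail_ir = split_ir(ir, fs)
    er = fftconvolve(dry, er_ir)[: len(dry)]
    tail = fftconvolve(dry, tail_ir)[: len(dry)]
    return slow_modulated_delay(er, fs) + tail   # modulation only on the ERs, static tail
```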
 
I hope you find a way to achieve your goal with your reverb program; it's easy to hear the problems in the bass frequency region with the other plugins, and to hear what you want to avoid with your “wave-based” reverb.

The dream reverb program for me would be one where it is possible to position many individual instruments wherever I want them in the same acoustic space, and have it sound as realistic as a real “ensemble recording”.

At this point in time, I have never seen or heard of a reverb program that can do the above. They usually work fairly convincingly on single instruments, but never on a bunch of individually recorded instruments that are meant to be put together as a convincing full ensemble. Something highly important is missing in how the reverberation works in a real space, and in how the sound waves spread differently from instruments positioned at separate spots in the room.
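
The plumbing itself doesn't seem to be the hard part; something like one IR per source position, each stem convolved with its own IR and summed, as in the sketch below (the data layout here is hypothetical). What's missing is everything that would make those per-position IRs right, starting with each instrument's directivity and how the hall responds to it.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_ensemble(stems, position_irs):
    # stems and position_irs: parallel dicts keyed by instrument name (hypothetical layout).
    # Each dry stem gets the IR for its own position in the hall; the results are summed.
    wets = [fftconvolve(dry, position_irs[name]) for name, dry in stems.items()]
    out = np.zeros(max(len(w) for w in wets))
    for w in wets:
        out[: len(w)] += w
    return out
```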

Dreams on… :)
 
Very interesting! My biggest gripe with IRs of real rooms has always been exactly the issues you talk about: they always sound like a realistic rendition of a speaker in a room, not of an instrument. If your approach fixes those issues, great!
If you need beta testers, I'm all for it. I work as a mixing and mastering engineer full time, so I can give you some honest feedback from many different scenarios.
 
this type of fluctuation from thermal noise and air currents is quite minimal. I don't have the exact numbers
I'm not an expert in reverberation algorithms, but I did dig into concert hall acoustics, and what I learned there is the opposite: the fluctuations from air turbulence cause level differences in excess of 10 dB at some frequencies, i.e., they are easily audible. In fact, high-end reverb processors like the Bricasti M7, or plugins like those from Liquid Sonics, put a lot of research into making the reverb tail sound natural with the proper amount of fluctuation. Liquid Sonics even filed a patent on this, which means something for a small company like them. What I mean is: would these companies really do that if "simple" convolution was enough?

For small rooms like bedrooms or control rooms it may be considered negligible, though; here I would agree that "simple" convolution is enough.
 
I have not heard about "level differences in excess of 10 dB" from these types of effects; I'd be glad to read about it if you can point me to where you saw that.

From my understanding, the Fusion IR technology from Liquidsonics is mainly for emulating hardware reverbs. These have been using modulation more as a way to deal with the lack of density of their feedback delay networks than to emulate a physically accurate acoustical phenomenon. And modulation is not the only way to improve the sound: some more recent algorithmic reverbs, such as Modsound Velvetverb or apulSoft apVerb, use velvet-noise filters, which reduce the need for modulation because they increase echo density (there's a small sketch of what velvet noise is after the quotes below). Here are some quotes from this 2020 paper:

This paper proposes inserting velvet-noise filters at the input and output branches of an FDN to increase its echo density during the beginning of the impulse response. The sparseness of the impulse response is a known limitation of the FDN. This work shows that with the proposed VFDN an even faster growth in the echo density can be obtained than with the doubling of the number of delay lines in a conventional FDN.
Other ways to improve the FDN include introducing time-varying elements in the structure, such as modulated delay lines [20], allpass filters [3], or a time-varying feedback matrix [21, 22, 23]. Time-varying delay lines lead to imprecise control of the decay time, whereas an FDN with time-varying allpass filters is not guaranteed to be stable [23]. Since a time-varying feedback matrix is less likely to cause artifacts in the reverberation sound, this method has been found to improve the sound quality of the reverberation tail [23].
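
For anyone who hasn't run into it: velvet noise is just a sparse train of ±1 impulses, one per short frame at a random position, which is why it can raise echo density cheaply. A minimal generator looks something like this (the textbook construction, not code from any of those plugins):

```python
import numpy as np

def velvet_noise(duration_s, fs=48000, density=2000, seed=0):
    # Standard velvet-noise construction: 'density' impulses per second, one impulse
    # per frame at a random position inside the frame, with a random +/-1 sign.
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    frame = fs / density                             # average impulse spacing in samples
    v = np.zeros(n)
    for m in range(int(n / frame)):
        pos = int(m * frame + rng.random() * frame)  # random position within frame m
        if pos < n:
            v[pos] = rng.choice([-1.0, 1.0])
    return v
```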

But anyway, similarly to Fusion IR, I plan to have different IRs for the early reflections and the reverb tail, allowing for more advanced modulation. I'm going to experiment with it to see how useful it is.
 
I have not heard about "level differences in excess of 10 dB" from these types of effects; I'd be glad to read about it if you can point me to where you saw that.
It's from "Spaces Speak, Are You Listening?" by Blesser and Salter.

deal with the lack of density of their feedback delay networks,
Not so sure concerning Fusion IR… if it were only about reflection density, it would have been enough to just measure a single impulse response of the M7 and convolve with it. Instead, they go to great lengths to take many snapshots over time and combine them in the so-called Fusion IR. Why would you do that if not for the fluctuation?
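
I obviously don't know what Fusion IR does internally, but the generic version of the "many snapshots" idea would look something like the sketch below: convolve with several IRs of the same space captured at different moments and slowly crossfade between the outputs, so the tail never sits perfectly still (everything here is illustrative).

```python
import numpy as np
from scipy.signal import fftconvolve

def crossfaded_snapshots(dry, ir_snapshots, fs, cycle_s=4.0):
    # Generic illustration only: render the dry signal through several IR "snapshots"
    # of the same space and rotate slowly between the results with raised-cosine
    # weights that sum to a constant (assumes at least two snapshots).
    wets = [fftconvolve(dry, ir)[: len(dry)] for ir in ir_snapshots]
    t = np.arange(len(dry)) / fs
    k = len(wets)
    out = np.zeros(len(dry))
    for i, wet in enumerate(wets):
        w = 0.5 * (1.0 + np.cos(2.0 * np.pi * (t / cycle_s - i / k)))
        out += w * wet
    return out / (k / 2.0)   # the k raised-cosine weights sum to k/2
```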

allowing for more advanced modulation
Several convolution reverb plugins feature a chorus effect with a very slow modulation rate, sometimes even two of them. From what I know, this chorus effect is there to add fluctuation to the reverberation tail, and I find it does that quite well.

All the best for your project!
 
many producers/engineers insist on unnatural reverbs cause that's the sound they/we are used to and therefore "they sound good".
It's funny, though, that you don't really hear the reverb anymore in modern productions. They are used very subtly, or else "it sounds old."
I think the main reason is that they actually sound bad hahaha
naaaah. give us realistic reverbs, and give us reverb back.
 
For modern productions, I find that the reverb on the vocals of this song is absolutely incredible; it instantly puts a vibe on the music:


And it's just Pro Tools' stock reverb. So yeah, it doesn't even have to be realistic or high-quality to sound good. I think you just gotta use it with character and taste, and not just to smooth out boring vocals.
 
following
 
For modern productions, I find that the reverb on the vocals of this song is absolutely incredible; it instantly puts a vibe on the music. And it's just Pro Tools' stock reverb.

Yeah, that's okay. Here it is being used as an "effect"; it contrasts sharply with the rest, which is rather dry. Of course, you don't need a natural-sounding reverb to create this, since you are not simulating a realistic ambience.
 