The 194 dB theoretical limit applies to sound in air at sea level without clipping of the waveform. It can go louder once you allow compression and clipping of the waveform (i.e. a shock wave).
https://en.wikipedia.org/wiki/Shock_wave
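As a sanity check on the 194 dB figure: SPL is 20·log10(p/p_ref) with p_ref = 20 µPa, and the largest undistorted peak pressure for a sound wave in air is one atmosphere, since the rarefaction half-cycle cannot drop below a vacuum. A quick calculation:

```python
import math

P_REF = 20e-6     # reference pressure for dB SPL: 20 micropascals
P_ATM = 101325.0  # one standard atmosphere, in pascals

# Peak SPL of a wave whose rarefaction half-cycle just reaches a vacuum.
max_spl_db = 20 * math.log10(P_ATM / P_REF)
print(round(max_spl_db, 1))  # → 194.1
```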
Some things Rob Watts says are shocking but he does know a thing or two about designing good electronics.
Quote
"I don't follow this much, but has Mr. Wattage ever proposed or revealed a hardware solution that produces these numbers? I don't know of any standard DSP chips that maintain full 64-bit FP precision (~-300 dB) input to output. This can be done with a CPU but I don't know if it can be done in real time at the highest sample rates."
The TI C6678 supports 64-bit floating-point and has 8 cores at 1.25 GHz. I should think it's fast enough to implement a volume control.
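Two quick back-of-envelope checks on the exchange above. The float64 dynamic-range figure follows from its 53-bit significand; the workload estimate assumes an illustrative 768 kHz stereo stream and credits each core with only one double-precision multiply per cycle (a deliberately pessimistic assumption, not a datasheet figure):

```python
import math

# Dynamic range of float64: 53-bit significand (52 stored bits + 1 implicit).
float64_dr_db = 20 * math.log10(2 ** 53)   # ≈ 319 dB, consistent with "~-300 dB"

# A volume control is one multiply per sample.
sample_rate_hz = 768_000                   # illustrative very-high PCM rate
channels = 2
required_mflops = sample_rate_hz * channels / 1e6   # ≈ 1.5 MFLOPS

# Even at one double multiply per cycle per core, 8 cores at 1.25 GHz
# give roughly four orders of magnitude more throughput than needed.
available_mflops = 8 * 1250.0
print(required_mflops, available_mflops, round(float64_dr_db, 1))
```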
[He] performs this multiplication after upsampling to a higher rate and uses shaped dither to maintain a high dynamic range in the audio band, naturally at the expense of increased noise at higher frequencies. There is nothing magical or mysterious about this, and it can easily be done in software on a PC. […] As long as whatever noise is added by rounding/dither in the attenuation falls below this level, the details make no difference to the final analogue output.
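The scheme described above can be sketched in a few lines. This is my own minimal illustration, not anyone's actual implementation: it applies the gain in float64 and adds flat (unshaped) TPDF dither of ±1 LSB before requantizing, whereas the quoted approach additionally upsamples and spectrally shapes the dither:

```python
import random

random.seed(0)

def attenuate_with_dither(pcm, gain, out_bits=24):
    """Apply a volume multiply in float64, then requantize with TPDF dither.

    pcm: integer samples scaled to out_bits full scale. Flat TPDF dither
    (sum of two uniforms, ±1 LSB peak) is used here for simplicity;
    noise-shaped dither would push the added noise out of the audio band.
    """
    full_scale = 1 << (out_bits - 1)
    out = []
    for s in pcm:
        x = s * gain                                        # exact in float64
        tpdf = random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
        q = round(x + tpdf)                                 # dithered requantize
        out.append(max(-full_scale, min(full_scale - 1, q)))
    return out

# One second of 24-bit noise, attenuated by 20 dB.
pcm = [random.randint(-(1 << 23), (1 << 23) - 1) for _ in range(44_100)]
out = attenuate_with_dither(pcm, 10 ** (-20 / 20))
```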
Quote
"Oh, but your puny software volume control isn't as 'sophisticated' as the mighty Watts attenuator."
Just tried resampling a 16/44.1k FLAC to 32-bit float/2822.4k, with volume control (Amplify plugin) applied after upsampling, on my thoroughly outdated i3-4160. It still ran at 15.38x real-time speed on a single thread. I saved to an 8-bit WAV file to avoid a hard-drive I/O bottleneck (which has nothing to do with processor speed); otherwise the bitrate would have been as high as 180.6 Mbps, good enough even for 8K video, LOL.
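The 180.6 Mbps figure checks out: 32-bit float samples at 2,822,400 Hz over two channels:

```python
sample_rate_hz = 2_822_400   # 64 x 44.1 kHz
bits_per_sample = 32         # 32-bit float
channels = 2

bitrate_mbps = sample_rate_hz * bits_per_sample * channels / 1e6
print(round(bitrate_mbps, 1))  # → 180.6
```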
Quote
"Oh, but your puny software volume control isn't as 'sophisticated' as the mighty Watts attenuator."
Then use Miska's
Someone who sells a software player with 80-bit float processing. Forget about it, I don't want to promote his software.
Quote
"Why care what he says when he presents no evidence that his hypothesis is audible?"
Apologies in advance for even more Rob Wattage, but does any of this make any sense?
Quote
"What can we say about timing errors?
If you had asked me this a few years ago, I would have said µs accuracy was needed. Now I make no such assumption - there is perhaps no limit to how good the timing of transients needs to be. So how can I substantiate that bold statement? Unlike noise shapers, it's rather difficult to put a number to timing accuracy.

I guess I ought to state what I mean by transient timing accuracy. I do not mean - unlike the rest of the audio business - ringing performance; that is absolutely not what I am thinking about when I talk about the time domain or timing accuracy. Ringing uses a signal that is illegal from a sampling-theory POV, as it is not bandwidth limited, so you would not actually get a perfect impulse from a perfect, legal, bandwidth-limited ADC. So why worry about a signal you will never get? It is actually pointless talking about it.

What I mean is the accuracy of the timing of transients. Imagine a bandwidth-limited analogue signal being sampled in the ADC - it is fully negative, goes positive, and at some time crosses through zero. Let us say it is sampled at 44.1 kHz, so it is sampled every 22,676 ns. Imagine the signal is sampled, and then crosses through zero exactly 20,155 ns after sampling. Of course, when it gets sampled again at +22,676 ns it will now be a positive value. The question is: when the DAC reconstructs the sampled data - converting it back to a continuous analogue signal - when will the signal cross through zero? Theory is completely clear and undeniable - if we use an infinite oversampling FIR filter with a sinc response at 22,676 ns and a perfect DAC, we will reconstruct the zero crossing absolutely perfectly at 20,155 ns. But with a finite, non-sinc reconstruction filter, it will not cross through at exactly 20,155 ns - maybe at 19,000 ns or 21,000 ns. It is these differences in the timing of transients that I am talking about.
Now, in the past I would have said that getting it right to a µs was perhaps OK (timing errors can be as big as 100 µs in conventional filters) - now I know that instead of worrying about µs we need to worry about getting it correct to nanoseconds.
What is the evidence for that view? In designing Dave, I wanted to discover what I had done in the Hugo design (it was a happy accident) to give me the timing performance that I so enjoyed with it. By this I mean the ability to hear the stopping and starting of notes. After trying different things, I chased this quality down to the interpolation filters after the WTA filter. With Hugo, I used a 16FS WTA filter, followed by a linear interpolator and a two-stage IIR filter filtering up to 2048FS; at this point Dave was sounding impossibly rich and smooth and almost soft sounding. Changing this to a 256FS WTA filter followed by my usual three-stage filtering gave a substantial change in character - it was still smooth, but very fast, and you could hear the starting and stopping of notes much more easily. It went in character from soft and smooth to fast and sharp - when the occasion demanded.
Now, replacing the 16FS WTA (data every 1,417 ns) with the 256FS WTA (data every 89 ns) is technically a very small change, in the sense that the transient accuracy of a WTA against an IIR filter at this speed is not a vast change in the time domain - it is a very subtle difference - but it was nonetheless extremely audible. What it tells me is that very small - impossibly small - timing errors are very significant for the brain's ability to process the ear data."
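The zero-crossing thought experiment in the quote can actually be tried numerically. The sketch below is my own illustration, not Watts' method: it samples a 1 kHz sine at 44.1 kHz so that it crosses zero 20,155 ns after a sample instant, reconstructs it with a truncated 400-tap sinc (Whittaker-Shannon) interpolator, and locates the reconstructed crossing by bisection. Even this modest finite filter recovers the crossing to well within a microsecond of the true 20,155 ns, far finer than the 22,676 ns sample period; an infinite sinc would be exact, and cruder filters show larger errors:

```python
import math

FS = 44_100.0        # sample rate, Hz (period ~22,676 ns)
T_ZERO = 20.155e-6   # true zero-crossing time: 20,155 ns after sample 0

def signal(t):
    """Bandwidth-limited test signal: a 1 kHz sine crossing zero at T_ZERO."""
    return math.sin(2 * math.pi * 1000.0 * (t - T_ZERO))

N = 200
samples = {n: signal(n / FS) for n in range(-N, N)}   # 400 samples

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t):
    """Truncated sinc interpolation of the sampled signal at time t."""
    return sum(samples[n] * sinc(FS * t - n) for n in range(-N, N))

# Bisect for the reconstructed zero crossing between 19,000 and 21,000 ns.
lo, hi = 19e-6, 21e-6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if reconstruct(lo) * reconstruct(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

error_ns = abs(0.5 * (lo + hi) - T_ZERO) * 1e9
print(f"crossing error: {error_ns:.1f} ns")
```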
Again, I admit I cannot comment on the technical aspects of what Rob Watts says, and that is why I need your help to understand your claim that "There's no point providing timing accuracy at greater than 2x our max timing resolution ability and that currently stands at 0.05ms".
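To put the numbers from this exchange on one scale (the 0.05 ms figure is the claim under discussion, not an established value):

```python
# Timing figures from the discussion, all converted to nanoseconds.
claimed_human_limit_ns = 0.05e-3 * 1e9   # the quoted 0.05 ms claim -> 50,000 ns
conventional_filter_ns = 100e-6 * 1e9    # "as big as 100 µs" errors -> 100,000 ns
sample_period_44k1_ns = 1e9 / 44_100     # ~22,676 ns between samples at 44.1 kHz
watts_target_ns = 1.0                    # the nanosecond scale Watts argues for

# Watts' target is some 50,000x finer than the claimed resolution limit.
ratio = claimed_human_limit_ns / watts_target_ns
print(ratio)  # → 50000.0
```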
Never mind Rob, I almost turned up at your gaff looking for a bed for the night today. Just wondering if there is any shred of technical merit, ‘transient timing accuracy’ for example. I realise he has never substantiated his ‘longer tap length = better SQ’ argument.
Keith