Hi kchap, thanks for watching my videos. I am not skilled at mathematics and engineering (though I suppose I do the latter with computers). I am the boy pointing out the King has no clothes. Is 32-bit FP a technological cul-de-sac?
OK, I watched a couple of YouTube videos and read a few web pages. My mathematics and engineering are very basic, but I can accept that stacking ADCs can give an additional 2 or 3 bits of resolution, and that there may be advantages in using 32-bit FP.
Imagine, a few years from now, it is possible to build an ADC with 23 bits of resolution, and by using stacking in conjunction with SOTA amp design, the resolution can be extended to 25~26 bits. At that point you need to go to 32-bit integers or 64-bit FP.
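To illustrate why 25~26 bits would force the move: a 32-bit IEEE-754 float only carries a 24-bit significand (23 stored bits plus an implicit leading 1), so sample values needing more than 24 bits of integer precision no longer survive a round trip through float32. A minimal sketch using NumPy:

```python
import numpy as np

# A float32 has 23 stored significand bits plus an implicit leading 1,
# so it represents integers exactly only up to 2**24.
sig_bits = np.finfo(np.float32).nmant + 1
print(sig_bits)  # 24

# A 24-bit sample value survives the round trip through float32...
print(np.float32(2**24 - 1) == 2**24 - 1)  # True

# ...but a 25-bit value does not: 2**25 + 1 rounds to 2**25.
print(np.float32(2**25 + 1) == 2**25 + 1)  # False
```

So once the *integer* resolution of the converter chain exceeds 24 bits, float32 starts quantizing it, which is the point being made above.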
When you talk about an additional 2 or 3 bits of resolution, would you want to save those bits as precise bits, or as a mathematical formula N^e which could potentially evaluate to something like 2.11111111…?
Indeed, why did they go to 32-bit float instead of 32-bit fixed (integers), which is what the electronics manufacturers do internally on the chips--duh? Would you do that? I can't think of any engineer who would. I am waiting for any engineer here to explain why there is a benefit of 32-bit float over 32-bit fixed when digitizing an analog current. Why don't they answer, categorically? Am I wrong and they are being polite? Do they just not know? Are they unsure? There's nothing secret about how 32-bit fixed and 32-bit float work. The latter is complicated, but fundamentally simple if you ignore the bit-math and focus on what it does mechanically, so to speak.
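For anyone wanting to check the trade-off themselves: the usual argument is that fixed point has a uniform quantization step everywhere, while float's step scales with the value. A minimal sketch (the -80 dBFS-ish level of 1.0e-4 is an arbitrary example I chose, full scale assumed to be 1.0):

```python
import numpy as np

FIXED_STEP = 1.0 / 2**31  # 32-bit fixed: same step at every level

for level in (1.0, 1.0e-4):  # near full scale vs. a quiet signal
    # np.spacing gives the gap to the next representable float32 value,
    # i.e. the quantization step of float32 *at this level*.
    float_step = float(np.spacing(np.float32(level)))
    print(level, FIXED_STEP, float_step)

# Near full scale, float32's step (2**-23) is *coarser* than int32's;
# at low levels its step shrinks below int32's uniform step.
```

Whether that relative-precision behavior is actually a benefit when digitizing a current is exactly the question being asked here; the sketch only shows what each format does, not which is right.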
I suspect there is too much thinking in "waves", "harmonic distortion", "high frequency", etc., which are human constructs and have nothing to do with analog voltages/currents. At any point in time you try to measure the properties of electricity. You can only discern the above properties, like "waves", if you "look at them" as a group. No electron knows it's part of a wave or noise, whether it came from the microphone or a neutrino hitting it from a gazillion light years away. All that "frequency" talk has absolutely nothing to do with measuring a charge at 1/48,000th of a second.
Sure, you can use mathematical/digital methods to reconstruct what you believe the analog value should have been, you can "re-sample", etc., but doing that is a distortion--even if it is, in the end, pleasing to the ear. Shouldn't we keep separated what is real and what is artificial?
It's sad that so few will confirm (or correct) this, and instead stand idly by.