Miguelón
Major Contributor
This makes no sense; with two bits you cannot record anything. Yes, you are wrong.
Higher bit depth gives you less quantisation noise; that is all. This video might help. I love an excuse for Monty.
A single sinusoid can be drawn with only a single intensity state; add a second one and it will clip.
Take 10 instruments, each with its own harmonics, each harmonic with its own intensity, and each instrument with its own overall intensity at any sample time.
Assuming you take at least 10 harmonics from each of those instruments, and you can vary the intensity over, say, 10 different levels for each individual component of the Fourier space, you would need at least 1,000 levels of signal encoding to reconstruct the instruments and harmonics from that space. That is around 10 bits (2^10 = 1024).
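As a quick sanity check on that arithmetic, here is a minimal Python sketch (the instrument/harmonic/level counts are just the illustrative numbers from above, not a real model of music):

```python
import math

# Hypothetical counts from the example above:
# 10 instruments x 10 harmonics x 10 intensity levels each.
levels = 10 * 10 * 10

# Smallest number of bits whose 2**bits states cover that many levels.
bits = math.ceil(math.log2(levels))

print(levels, bits)  # 1000 levels fit in 10 bits (2**10 = 1024)
```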
The main reason you use 16 bits is that it allows 65,536 levels of intensity to define your time-intensity space.
If your intensity variable lacks the accuracy to quantize your intensity function, you get a bad recording. The adequate number was found to be at least those 65,536 levels, and not fewer.
You can see this by watching a TV with only a 4,000-color space, then a 65,000-color one, and you will even notice the difference from the standard 16,000,000 colors.
The same applies in audio: you need bit depth there in the same sense you need bit depth in the RGB color space.
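To make the quantisation-noise side of this concrete, here is a small self-contained Python sketch (function names `quantize` and `snr_db` are mine) that quantizes a full-scale sine at different bit depths and measures the resulting signal-to-noise ratio; each extra bit buys roughly 6 dB:

```python
import math

def quantize(x, bits):
    # Uniform quantizer over [-1, 1] with 2**bits levels.
    step = 2.0 / (2 ** bits)
    return [round(v / step) * step for v in x]

def snr_db(signal, quantized):
    # Ratio of signal power to quantization-error power, in dB.
    err = [s - q for s, q in zip(signal, quantized)]
    p_sig = sum(s * s for s in signal) / len(signal)
    p_err = sum(e * e for e in err) / len(err)
    return 10 * math.log10(p_sig / p_err)

# One second of a full-scale 997 Hz sine at 48 kHz (illustrative choice).
n = 48000
sine = [math.sin(2 * math.pi * 997 * t / n) for t in range(n)]

for bits in (8, 12, 16):
    print(bits, "bits:", round(snr_db(sine, quantize(sine, bits)), 1), "dB")
```

The printed values track the textbook rule of thumb SNR ≈ 6.02·bits + 1.76 dB, which is why 16 bits (about 98 dB) became the CD standard.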