Say you have a signal that is 16 bits in amplitude: 2^16 = 65,536 different levels. A continuous analog signal has no steps at all (until the quantum level), but if you convert (quantize) that signal to sixteen bits, you get 65,536 steps, each 1 unit (LSB) tall.

Think of just one step, going from 0 to 1. The analog signal can take on any value between 0 and 1, but when you convert it to a digital number you get 0 or 1 and nothing in between. That difference is the quantization (conversion) error. If the actual signal is 0.5, then you have an error of 0.5 between the real signal and what the converter says the signal is.

In this example the steps all have value 1 (0, 1, 2, 3, 4 ... 65,535). If the actual signal level is 3.141592654... (pi), then the error is 0.141592654..., the difference to the nearest level (3). That error becomes the quantization noise.
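If it helps, here is a minimal Python sketch of the same idea (the function names are just for illustration): quantize to the nearest 1-unit step and look at what's left over.

```python
import math

def quantize(x):
    # Snap to the nearest integer step (1 LSB = 1 unit).
    return round(x)

def quantization_error(x):
    # Difference between the real value and the quantized value.
    return x - quantize(x)

# Pi lands between steps 3 and 4; the nearest step is 3,
# so the error is the fractional part, ~0.14159...
print(quantize(math.pi))            # 3
print(quantization_error(math.pi))  # ~0.141592653...
```

With nearest-step rounding the error is never more than half a step (0.5 LSB), which is why that residue shows up as a small, bounded noise floor rather than gross distortion.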

HTH - Don