This is the clearest definition of the difference between analog and digital signals. With these definitions it is perfectly logical to describe DC as an analog signal, even if it is at a predefined zero; it simply means that the amplitude of a DC signal does not vary with time. For N bits the number of values is 2^N, so for 8 bits you actually have 256 values. The number of steps is one less, e.g. 255 for 8 bits, since you do not count the "ground floor" (the range of an unsigned 8-bit converter is 0 to 255).
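As a quick sanity check, a few lines of Python (my sketch, not from the thread) make the values-versus-steps distinction concrete:

```python
# Count values and steps for an N-bit unsigned converter.
for n_bits in (8, 12, 16):
    values = 2 ** n_bits   # distinct codes, e.g. 0..255 for 8 bits
    steps = values - 1     # transitions between adjacent codes
    print(f"{n_bits}-bit: {values} values, {steps} steps, range 0 to {values - 1}")
```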
The definitions that I and others have used for decades:
Loosely, analog is a continuously-varying signal without discrete time or amplitude (voltage, current, power, displacement, whatever) values. Digital is numerical with values quantized in amplitude and time (thus chosen from a fixed, discrete set of values). Sampled-analog applies to things like a sample-and-hold (or track-and-hold) that quantizes in time but not in amplitude. Bucket-brigade delay lines and CCD imagers are sampled in time but not in amplitude. The delay line's output is usually applied to an analog amplifier, essentially staying sampled-analog in nature, whilst an imager's cells usually feed an ADC and so are quantized (digital) after conversion. A basic comparator without a latch (i.e. no clock) will quantize in amplitude but not in time -- it flips whenever the signal crosses a threshold.
- Analog = continuous in time and amplitude
- Sampled-analog = discrete in one parameter (time or amplitude), continuous in the other
- Digital = discrete in time and amplitude
These are pretty fundamental definitions used by standards bodies like the IEEE, at least when I was involved with such things.
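To illustrate the three categories, here is a minimal Python sketch of my own, assuming an arbitrary 1 kHz sine, an 8 kHz sample rate, and a deliberately coarse 3-bit quantizer: sampling alone gives the sampled-analog case, and quantizing the samples gives the digital case.

```python
import math

def analog(t):
    # Stand-in for a continuous signal: defined for any real t.
    return math.sin(2 * math.pi * 1000 * t)

fs = 8000.0       # sample rate in Hz (assumed for illustration)
n_bits = 3        # coarse on purpose, so the amplitude steps are visible
levels = 2 ** n_bits

for n in range(5):
    t = n / fs                       # discrete time: sampled-analog
    sample = analog(t)               # amplitude still continuous here
    # Uniform quantizer over [-1, 1): amplitude becomes discrete too.
    code = max(0, min(levels - 1, int((sample + 1) / 2 * levels)))
    quantized = code / levels * 2 - 1
    print(f"t = {t * 1e3:.3f} ms  sampled = {sample:+.4f}  "
          f"code = {code}  quantized = {quantized:+.4f}")
```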
FWIWFM - Don
A sampled analog signal is distinct from a quantised signal. When the sampled analog signal is quantised it is assigned a specific value using an appropriate number base, where binary, octal, decimal, and hexadecimal are commonly used examples. This leads to considering a digital binary signal as having only two quantisation levels. Where the sampled analog signal spans more than the two quantisation levels a single binary signal can represent, further digital binary signals are required to represent the amplitude. For example, an 8-bit Analog to Digital Converter has 256 possible output combinations using 8 separate digital binary output lines, where the signal on each of those lines is confined to one of two amplitudes. The output of the ADC is a quantised signal representing the sampled analog signal at the input; it is not a single digital binary signal but a combination of 8 separate digital binary signals.
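To make the "combination of 8 separate digital binary signals" point concrete, a short sketch (mine, not from the post) splits one quantised code into its individual output lines:

```python
def adc_output_lines(code, n_bits=8):
    # One quantised code becomes n_bits separate two-level (binary)
    # signals, listed MSB first.
    assert 0 <= code < 2 ** n_bits
    return [(code >> bit) & 1 for bit in range(n_bits - 1, -1, -1)]

print(adc_output_lines(200))  # [1, 1, 0, 0, 1, 0, 0, 0]
```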
The distinction between the three categories listed in the original post needs to be clearly understood, as there seems to be a degree of misapprehension around quantisation and digital binary signals.
Note that there is no mention in the definitions of confining the analog, sampled analog or digital binary signal within any given parameter. The definition of the digital signal states clearly that it is discrete in time and amplitude. An octal representation of a signal, for example, requires eight discrete levels in each discrete time period, whereas the implication in the original definition is that a digital binary signal is being described. In current usage, "digital" is automatically equated with "binary".
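The same quantised code can be written in any of those bases, which is why "digital" need not mean "binary"; a one-liner (my example, using code 200) shows each form:

```python
code = 200
print(f"binary {code:08b}  octal {code:o}  decimal {code}  hex {code:X}")
# binary 11001000  octal 310  decimal 200  hex C8
```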
For what it is worth, the guiding principle of ASR is to assess the accuracy of a "black box" in reproducing a signal (in whatever medium) at the output, where the format of the input and output signals may be different.