
Understanding HDMI and Potential for HDMI Cable Differences

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,381
Location
Seattle Area
I posted this elsewhere, but I am highlighting it in its own thread given the amount of misinformation on both sides of the fence on this topic:

----

Unfortunately he makes some serious technical errors in his argument:

"The other really great thing about a digital system like HDMI is that digital signals don't degrade. A digital system takes a signal, and reduces it to a series of bits - signals that can be interpreted as 1s and 0s. That series of bits is divided into bundles called packets. Each packet is transmitted with a checksum - an additional number that allows the receiver to check that it received the packet correctly. So for a given packet of information, you've either received it correctly, or you didn't. If you didn't, you request the sender to re-send it. So you either got it, or you didn't. There's no in-between. In terms of video quality, what that means is that the cable really doesn't matter very much. It's either getting the signal there, or it isn't. If the cable is really terrible, then it just won't work - you'll get gaps in the signal where the bad packets dropped out - which will produce a gap in the audio or video."

The HDMI specification is confidential; you have to become a member to know what is in it. As such, a lot of folklore has been created around what it is and isn't. The above is one example. He is confusing HDMI with networking protocols. It does not work that way at all.

HDMI is a real-time stream of data. Most of the time, what it sends is the value of the video pixel to be displayed. Each piece of data arrives and is displayed. It is not part of a "packet," nor does it have any checksum. If the data comes across wrong, it gets displayed wrong and shows up as sparkles, hashes, etc. If we had checksums, the receiver could put up an error. We don't see that because there is no checksum to validate the data. Just about any value, right or wrong, could be the real deal from the receiver's point of view.

When HDMI gets to the end of a video line, it switches to sending auxiliary data. One of those auxiliary data types is audio. Audio does have a checksum, because if you tried to output corrupted audio data, you could produce DC or other serious static that could damage equipment or make you go deaf. If the checksum indicates data corruption, however, unlike most networking protocols, there is no retransmission. The sound will most likely mute and we go about our business.

In no case will the system try to re-capture lost data. The time for displaying that pixel or playing that snippet of sound has come and gone. The receiver has done with it what it can and has moved on. It can't go back in time and fix two frames back from what it is displaying now.
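To make the contrast concrete, here is a minimal toy sketch in Python (my own illustration of the behavior described above, not code from any real HDMI receiver): video samples are used as received, right or wrong; audio samples that fail their check are simply muted; and in neither case is anything ever re-requested.

```python
# Toy model only -- illustrates the real-time behavior described above, not actual HDMI silicon.
def handle_video_sample(pixel_value: int) -> int:
    # No checksum: whatever arrived is displayed, even a corrupted "sparkle".
    return pixel_value

def handle_audio_sample(sample: int, check_ok: bool) -> int:
    # A check exists, but on failure the only remedy is to mute; there is no
    # retransmission, because the moment to play this sample has already passed.
    return sample if check_ok else 0

print(handle_video_sample(0xFF00FF))       # displayed as-is, right or wrong
print(handle_audio_sample(12345, False))   # -> 0 (muted)
```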

As Opus mentioned, he is also wrong about the nature of transmission. Capturing data is one thing; knowing when to output it is another. The latter is the timing for said audio/video samples. In the case of video there is no problem, in that each pixel location is digital in nature in today's displays. So when told to light up pixel 105 as red, we know where it is regardless of whether the data for it arrived a fraction of a pixel period sooner or later.

For audio, it is a different animal altogether. The receiver must extract the timing from the incoming data in order to create its clock for the DAC. Any vagaries in that calculation will cause jitter, and lots of it in the case of HDMI. Here is a comparison of the HDMI and S/PDIF inputs on the same AVR, and hence the same DAC:

[Attached image: FFT spectrum of the AVR DAC's analog output, S/PDIF input (purple) vs. HDMI input (yellow)]


This is showing the *analog* output of the AVR's DAC. Purple is S/PDIF; yellow is HDMI. Identical digital data was sent to each input, yet what came out of the DAC was much more distorted in the case of HDMI.
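For readers who want to see why clock recovery matters, here is a rough numerical sketch (my own, with made-up jitter figures, not a model of this AVR) of how random timing error on the DAC clock turns a pure tone into a raised noise floor in the spectrum:

```python
import numpy as np

fs = 48_000           # audio sample rate (Hz)
f0 = 12_000           # test tone (Hz)
n = 2 ** 16
t = np.arange(n) / fs
rng = np.random.default_rng(0)

# Hypothetical jitter levels: a quiet local oscillator vs. a noisy recovered clock.
for label, jitter_rms in [("clean clock, 50 ps rms", 50e-12),
                          ("recovered clock, 5 ns rms", 5e-9)]:
    jitter = rng.normal(0.0, jitter_rms, n)        # timing error of each conversion
    x = np.sin(2 * np.pi * f0 * (t + jitter))      # tone sampled at the wrong instants
    spectrum = np.abs(np.fft.rfft(x * np.hanning(n))) / n
    floor_db = 20 * np.log10(np.median(spectrum) + 1e-20)
    print(f"{label}: median spectral floor ~ {floor_db:.0f} dB")
```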

Now, all of this said, it is not clear that HDMI cables can influence this picture much. I ran some quick tests and could not cause this output to change meaningfully when using a short cable versus a very long HDMI cable. But the possibility exists.

This got long so I should make it its own thread elsewhere :).
 

Head_Unit

Major Contributor
Forum Donor
Joined
Aug 27, 2018
Messages
1,340
Likes
688
This got long so I should make it its own thread elsewhere
...and did you? It's VERY interesting; it got me to upgrade my membership, so you can pinken your panthers.
- What was the test signal?
- Do you believe it's just jitter causing such poor result or ???
- Does this mean for 5.1 that basically we are screwed?
- Which leads to, is that typical for all AVRs?
- Is the result the same with/without video data sending?
 

walt99

Active Member
Forum Donor
Joined
Feb 13, 2021
Messages
160
Likes
198
Location
DFW
Does this mean we would be better off using the digital audio out (optical in my case) from the TV to the AVR?
 

MrPeabody

Addicted to Fun and Learning
Joined
Dec 19, 2020
Messages
657
Likes
942
Location
USA
HDMI was built upon DVI-D, which was originally developed as a digital video interface for computer monitors, with a primary requirement being backwards compatibility with VGA analog video. DVI-D used a differential signaling scheme known as TMDS, wherein each byte is encoded with two extra bits (10 bits instead of 8). This served two purposes: one having to do with minimizing DC bias, the other with improving the ability of the local clock (in the receiver) to recognize the individual bits.
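For the curious, here is a simplified Python sketch (my own) of the first, transition-minimizing stage of that 8-to-10-bit TMDS encoding as publicly described for DVI; the second stage, which maintains DC balance with a running disparity counter and supplies the tenth bit, is omitted for brevity.

```python
def tmds_stage1(byte: int) -> list[int]:
    """Encode one 8-bit video byte into the 9-bit intermediate TMDS word."""
    d = [(byte >> i) & 1 for i in range(8)]          # d[0] is the LSB
    q = [d[0]]
    if sum(d) > 4 or (sum(d) == 4 and d[0] == 0):
        for i in range(1, 8):                        # XNOR chain: fewer transitions
            q.append(1 - (q[i - 1] ^ d[i]))
        q.append(0)                                  # bit 8 = 0 marks XNOR encoding
    else:
        for i in range(1, 8):                        # XOR chain
            q.append(q[i - 1] ^ d[i])
        q.append(1)                                  # bit 8 = 1 marks XOR encoding
    return q                                         # stage 2 would add the 10th bit

print(tmds_stage1(0b1111_0000))
```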

A basic assumption in DVI-D was that there should be three physically separate TMDS channels, corresponding to the three color components (originally RGB). A fourth physical channel provided clocking. The clock rate was (not certain if this is still true in all cases) the same as the pixel rate as determined by the video resolution. Since individual bytes in the video signal correspond to individual pixels, this meant that the HDMI clock provided local clock synchronization once each byte period, or once every ten bits. In more recent versions of HDMI, a higher byte rate is implied by support for higher pixel resolution and frame refresh rates, which in turn means that the clock frequency has increased. (I'm not certain whether it remains true in all cases that the clock rate corresponds to the pixel rate.)
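As a rough sanity check on those rates (my own arithmetic, using the standard 1080p60 raster), the numbers work out like this under the classic assumption that the link clock equals the pixel clock and each byte travels as 10 bits per lane:

```python
h_total, v_total, refresh = 2200, 1125, 60      # full 1080p60 raster incl. blanking
pixel_clock = h_total * v_total * refresh       # Hz
bits_per_lane = pixel_clock * 10                # 10 bits per pixel period per TMDS lane
print(pixel_clock / 1e6, "MHz pixel clock")     # 148.5 MHz
print(bits_per_lane / 1e9, "Gb/s per lane")     # 1.485 Gb/s
```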

In DVI-D there was no provision for audio, because computer monitors generally did not have any audio capability. Provision for audio was added by HDMI.

For carriage of both audio and auxiliary data, HDMI took advantage of the short vacant time spaces ("blanking periods") between successive display lines and between successive frames. The three TMDS video channels are time-multiplexed into three segments corresponding to video, audio and auxiliary data. Manifestly, the allocation of time space and information space to these three sub-channels is not even. Each of these sub-channels uses its own specific packet type. For the video sub-channel, real-time streaming of pixels is a defining characteristic.
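A quick back-of-the-envelope check (mine, not from the HDMI spec) shows why those blanking intervals are roomy enough for audio: even at 1080p60 there are far more blanking pixel periods per second than audio samples to carry.

```python
h_total, v_total = 2200, 1125                    # full 1080p60 raster
h_active, v_active, refresh = 1920, 1080, 60     # visible portion
blanking_slots = (h_total * v_total - h_active * v_active) * refresh
audio_samples = 192_000 * 8                      # generous case: 192 kHz, 8 channels
print(blanking_slots / 1e6, "M blanking pixel periods per second")          # ~24.1 M
print(blanking_slots / audio_samples, "blanking periods per audio sample")  # ~15.7
```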

Given that the audio is transmitted in short little bursts separated by much wider time gaps, it is manifest that the audio samples are buffered by the receiver and are delivered to the audio DAC under timing control of a local clock. Thus, we now have two local clocks, one used for bit detection, and another used for controlling the audio DAC. One of these local clocks (the one used for bit detection) is certainly synchronized to the HDMI clock periodically, at a rate 1/10 as great as the bit rate. It does not seem likely to me that the clock used for controlling the audio DAC would have direct synchronization with the clock used for bit detection. It is however necessary for the local clock used to control the audio DAC to be periodically synchronized to the HDMI clock, in order to prevent gradual drift of the audio from the video. I do not know whether the HDMI spec says anything about how this synchronization should occur. This would likely be left up to the implementation, just as it is in the MPEG spec, which defines the clock and timing information that the receiver will need but appropriately assumes that the designers of the receiver will be sufficiently competent to figure out what needs to be done and how to do it.

I do not think it is likely, in a typical case, that the synchronization of the local clock used to control the DAC would be done in a way that would implement a fine correction on an ongoing, pulse-by-pulse basis. Perhaps this is something that would be encountered with costly professional equipment, but I doubt that this kind of thing would be encountered with consumer equipment. I think it more likely that periodic synchronization of the local clock used to control the DAC would lead to occasional interruption or stutter in the analog output of the audio DAC. I don't know whether this would be considered jitter. Except for when these occasional stutters occur, it does not seem to me that jitter should occur to any significant extent, because it seems to me that aside from the occasional stutter, jitter would only occur if the local clock were not steady. (Please note that these comments are only applicable to HDMI and similar scenarios where the digital audio is buffered and the DAC is controlled by a local clock, and not to cases where a DAC is controlled directly by a clock signal transmitted along with the digital audio.)

As for error detection/recovery, it is of course true that retransmission as a means of error recovery is not supported in HDMI, which has nothing remotely similar to this capability. However the packets in the audio sub-channel include a type of cyclic redundancy check (CRC), which is like a simple checksum but far better at detecting data errors. The CRC also facilitates limited error correction, thus providing a limited means of error recovery.
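To illustrate the general mechanism, here is a generic bitwise CRC-8 sketch in Python (polynomial 0x07; illustrative only, not the specific code defined by the HDMI spec): appending the computed check to the data lets the receiver detect corruption in a single pass.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Plain bitwise CRC-8, init 0, no reflection or final XOR."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

packet = bytes([0x12, 0x34, 0x56, 0x78])
check = crc8(packet)
# Receiver side: running the CRC over data + check yields 0 when nothing was corrupted.
assert crc8(packet + bytes([check])) == 0
assert crc8(bytes([0x12, 0x34, 0x56, 0x79]) + bytes([check])) != 0   # a flipped bit is caught
```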
 

voltronic

Member
Joined
May 19, 2021
Messages
37
Likes
21
In recent years, I have bought all of my HDMI cables from Blue Jeans Cable. The company in general is very no-nonsense, and they tend to back up their claims with measurements rather than promises. I am sure many members here are familiar with them.

All of my HDMI runs at home are now Belden Series-FE. Zero issues, and I am reassured that they are superior to the cheap throw-in cables or audiophile ridiculousness out there.

Check out BJC's Articles Page for a lot of interesting discussion on HDMI and digital signal transmission in general.
 

peufeu

Member
Joined
May 8, 2021
Messages
17
Likes
16
I do not think it is likely, in a typical case, that the synchronization of the local clock used to control the DAC would be done in a way that would implement a fine correction on an ongoing, pulse-by-pulse basis.

Basically the HDMI receiver chip outputs an audio clock that it extracts from the incoming multiplexed signal.

How it does that depends on the chip:

I've seen one case where they did it ghetto style, with a fractional divider from the video clock, which gives peak-to-peak jitter of one period of the video clock plus whatever jitter is on that.

Others use a PLL. But it's an afterthought PLL inside a huge, noisy LSI chip. For example, the SiI9127A/SiI1127A HDMI Receiver with Deep Color Output specs 0.1 UI of jitter on its S/PDIF output, which is abysmally bad. It's probably the same on its I2S output.

Obviously in both cases the jitter from the video source will be passed through.

Amir's spectrum shot at the beginning of the topic looks legit... actually it's not that bad, much better than the one receiver I measured...
 
OP
amirm


Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,381
Location
Seattle Area
Does this mean we would be better off using the digital audio out (optical in my case) from the TV to the AVR?
I test for this in AVRs and usually report on it. In general, products based on ESS DACs use the DAC's internal resampler and don't have these issues anymore. I do see interference from time to time though, so it is important to measure and know for sure that HDMI is performant.
 

MrPeabody

Addicted to Fun and Learning
Joined
Dec 19, 2020
Messages
657
Likes
942
Location
USA
Basically the HDMI receiver chip outputs an audio clock that it extracts from the incoming multiplexed signal.

How it does that depends on the chip:

I've seen one case where they did it ghetto style, with a fractional divider from the video clock, which gives peak-to-peak jitter of one period of the video clock plus whatever jitter is on that.

Others use a PLL. But it's an afterthought PLL inside a huge, noisy LSI chip. For example, the SiI9127A/SiI1127A HDMI Receiver with Deep Color Output specs 0.1 UI of jitter on its S/PDIF output, which is abysmally bad. It's probably the same on its I2S output.

Obviously in both cases the jitter from the video source will be passed through.

Amir's spectrum shot at the beginning of the topic looks legit... actually it's not that bad, much better than the one receiver I measured...

Off the top of my noggin it seems to me that the clock signal output by the HDMI receiver chip is for the purpose of controlling the screen refresh but probably not used to control the audio DAC in any very direct way. But I'm not certain, and in a moment I'll explain why. The reason I say "probably not" is that in general it probably is not true that the audio sampling frequency will be related to the video pixel frequency by way of any simple integer ratio.

The HDMI clock provides synchronization, at a rate 1/10 the bit rate, for the local oscillator in control of bit detection. Since these bits include the bits that carry the audio, the rate at which audio samples are written to the audio buffer memory is determined in a fairly direct way by the HDMI clock. But the manner in which the audio is written to the buffer and the way it is read out are most likely very different. If it happens to be correct to assume that the audio sampling frequency is not related to the video pixel frequency by way of a simple integer ratio, then it isn't apparent how the HDMI clock would be useful in generating the clock signal needed to control the audio DAC. This suggests that the clock signal used to control the DAC requires a completely separate clock signal that is generated from a local oscillator that has no direct synchronization to the HDMI clock. It is however apparent that the readout from the audio buffer has to somehow be synched to the video frame rate every so often.

Now, the reason I said I'm not certain that the audio sampling frequency isn't related to the video pixel rate by an integer ratio has to do with the historical reason that the CD sampling frequency is 44.1 kHz. Video recorders were used to record digital audio in the early days, and video recorders were designed around the clock rates fundamental to some video standard. In order to use a video recorder to record digital audio, the audio sampling frequency needed to be suitably related to the horizontal line rate of the video recorder. For both the 525/60 and 625/50 video standards, 44.1 kHz is exactly three times the actual (active) line rate of the interlaced video. For 525/60, there are 490 actual lines (the other 35 were taken away in the transition from B&W to color), which means 245 lines per interlaced field, and the field rate is 60 Hz. Thus, the line rate is 245 x 60 = 14,700, and if you multiply this by 3 you get 44.1 kHz. For 625/50, the number of lines per interlaced field is 294; the field rate is 50 Hz, and 294 x 50 x 3 = 44.1 kHz. Thus, the 44.1 x 10^3 audio samples per second were recorded as three samples per line. With 32 bits for each 2-channel audio sample, if the recorded signal is interpreted as video (fed to a video monitor), each line is spatially divided into 32 segments; each such segment is either black or white.
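Spelling out that arithmetic (just restating the numbers above):

```python
ntsc = 245 * 60 * 3    # 245 active lines/field x 60 fields/s x 3 samples/line
pal  = 294 * 50 * 3    # 294 active lines/field x 50 fields/s x 3 samples/line
print(ntsc, pal)       # 44100 44100 -- both roads lead to 44.1 kHz
```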

Anyway, the point is that it seems possible that in some cases at least, the audio sampling frequency may be related to the HDMI clock frequency in a manner whereby the HDMI clock would be usable in a nearly direct way to control the audio DAC. I'm not certain, though, and even if this is true for some particular combinations of audio sampling frequency and video pixel rate, I doubt whether it would be true for all combinations. As such, I'm inclined to suppose that the audio DAC is controlled by a local clock that for all intents and purposes is fully separate from the HDMI clock, notwithstanding that periodic resynchronization is needed to correct for drift between the audio and video.
 

peufeu

Member
Joined
May 8, 2021
Messages
17
Likes
16
It's simple:

- The AV source tells the AV sink what the audio sample rate is.
- This is expressed as a fractional multiple of the video clock (see the sketch after this list).
- The sink can use any suitable implementation to synthesize that audio clock, as long as its frequency is what it should be and it is synchronized to the video clock (otherwise audio and video will lose sync).
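As a concrete sketch of that relationship (the N/CTS Audio Clock Regeneration mechanism; the figures below are the commonly published values for 48 kHz audio on a 148.5 MHz TMDS clock, shown here only for illustration):

```python
f_tmds = 148_500_000       # TMDS (video) clock, 1080p60
N, CTS = 6144, 148_500     # values the source transmits alongside the audio
audio_master = f_tmds * N / CTS    # regenerated 128*fs master clock in the sink
print(audio_master / 128)          # 48000.0 -- the audio clock is a fractional multiple of the video clock
```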

Check this document

So yeah, the HDMI chip will buffer data from the audio packets in a FIFO and stream it out over I2S using the audio clock. No problem; FIFOs are a nice tool for crossing clock domains.
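A minimal sketch of that idea (mine, not any chip's firmware): writes arrive in bursts in the video-clock domain, reads happen steadily in the audio-clock domain, and the FIFO in between absorbs the difference.

```python
from collections import deque

fifo = deque()

def on_data_island(samples):
    fifo.extend(samples)                     # bursty writes during blanking intervals

def on_audio_clock_tick():
    return fifo.popleft() if fifo else 0     # steady reads; an underrun would mean a mute

on_data_island([101, 102, 103, 104])
print([on_audio_clock_tick() for _ in range(4)])   # [101, 102, 103, 104]
```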

> This suggests that the clock signal used to control the DAC requires a completely separate clock signal that is generated from a local oscillator that has no direct synchronization to the HDMI clock

It needs to be able to play video continuously for an unlimited time without getting audio underruns/overruns or losing sync with video. So the audio clock is synchronized to the video clock.

Note that "synchronized" doesn't mean an integer ratio. It's a fractional ratio. But the audio clock is derived from the video clock, so they don't drift relative to each other, which would result in loss of sync.

So, if the chip's designers were really broke and ran out of budget, it will run the video clock through a fractional divider. Here's an example: you have a 10 MHz clock (period 100 ns) and you need to divide it by 4.5. How do you divide by 4.5? Easy! Divide by 5 during one cycle, then divide by 4 during the next cycle, and then start again. The output period is correct on average... of course it varies quite a lot from one cycle to the next, which results in humongous jitter, but it's a really cheap solution. For example, the Onkyo TX-NR905 has an HDMI chip that works like this: it uses a fractional divider to generate the audio clock, and tada! straight to the DACs. As you would expect, performance is garbage.
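To put numbers on that (a toy simulation of the divide-by-5/divide-by-4 example above):

```python
src_period_ns = 100                               # 10 MHz source clock
divisors = [5, 4] * 10                            # alternate the two integer divisors
periods = [d * src_period_ns for d in divisors]   # each output cycle's period, in ns
print(sum(periods) / len(periods), "ns average period")       # 450.0 -> divide-by-4.5
print(max(periods) - min(periods), "ns peak-to-peak jitter")  # 100 -> enormous for audio
```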

To do it right, a fractional-N PLL is a much better solution. That's basically a normal PLL where the divider values are modulated with a sigma delta modulator, so it'll give you fractional frequency multiplication and division. When implemented correctly, it works pretty nicely.
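A first-order version of that divider modulation looks roughly like this (a simplified sketch; real fractional-N PLLs use higher-order modulators, and the loop filter smooths out the switching):

```python
def frac_n_sequence(n_int, frac, cycles):
    """Pick integer divider values whose average equals n_int + frac."""
    acc, seq = 0.0, []
    for _ in range(cycles):
        acc += frac
        if acc >= 1.0:
            acc -= 1.0
            seq.append(n_int + 1)    # occasionally divide by N+1...
        else:
            seq.append(n_int)        # ...otherwise by N
    return seq

seq = frac_n_sequence(4, 0.5, 1000)
print(sum(seq) / len(seq))           # 4.5 -> fractional ratio on average
```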

> I'm inclined to suppose that the audio DAC is controlled by a local clock that for all intents and purposes is fully separate from the HDMI clock, notwithstanding that periodic resynchronization is needed to correct for drift between the audio and video.

That's what the WM8805 S/PDIF receiver does. For example, you give it a clean 50 MHz clock, and it will output a rather clean 22.5792 MHz clock (or any other frequency) for your DAC. You can use I2C to play with the registers and set the fractional PLL parameters to get any frequency you want. Or you can feed it an S/PDIF signal and set it to "auto" mode, in which case it will do what you just said: update its frequency once in a while to stay loosely locked with the source while ignoring source jitter.

But a big noisy chip like an HDMI receiver is not a welcoming environment for a PLL.
 