
Audiophile Ethernet Cat8 Cable

watchnerd (Grand Contributor)
Audiostream has an ad for a contest to "win a Wireworld Starlight Cat8 Ethernet Cable ($210.00 Retail Value)."

This immediately set off 'WTF' bells, given:

1. A 1000' spool (unterminated) of Cat 8 can be bought on Amazon for $800. That is enough to make roughly 300 one-meter cables (see the quick arithmetic after this list). For $207 you can get 250 feet.

2. Cat 8 is rated for speeds of up to 40 gigabits per second. I don't know of any consumer-focused networking gear with ports anywhere near that fast; most of it tops out at 1 gigabit.
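
A quick sanity check of the arithmetic in point 1 (a sketch using the prices quoted above):

```python
# Rough cost per cable from a bulk spool vs. Wireworld's $210 retail price.
FEET_PER_METER = 3.281

spool_cost_usd = 800.0                      # 1000 ft unterminated Cat 8 spool (price cited above)
spool_length_m = 1000 / FEET_PER_METER      # ~305 m

cables = int(spool_length_m)                # ~300 one-meter cables per spool
cost_per_cable = spool_cost_usd / cables    # ~$2.60 each, before connectors

print(f"~{cables} one-meter cables at ~${cost_per_cable:.2f} each")
print(f"Wireworld retail is roughly {210 / cost_per_cable:.0f}x the bulk cost")
```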

So what the heck could Wireworld say to differentiate its product from a bulk generic cable that has more than enough bandwidth to do anything one would need in the consumer media world?

Well, to quote their site:

"The first of its kind, the Starlight Ethernet utilizes an innovative new conductor geometry that supports higher transmission speeds for the most lifelike reproduction of streamed music and video. Patent pending Tite-Shield™ Technology improves the most critical parameters of digital signal transmission thereby increasing speed and fidelity over standard ethernet cable design."

So this Starlight cable is going to make my home gigabit ethernet go faster?

And that's going to somehow make Netflix and Tidal internet feeds faster, too?
 

amirm (Founder/Admin)
That's actually pretty cheap for "high-end" cables of any sort!

I love this bit in their press release:

"Category 7 cabling was created to satisfy the demands of 10 Gigabit Ethernet. According to Wireworld, “Even though most media networks now run below that speed, cables that support higher speeds have been found to improve the quality of audio and video streaming. Those improvements are possible because streamed signals suffer from data errors that cannot be repaired by the error correction systems that preserve file transfers. The proposed standard for future networks is Category 8, which extends network speeds to the staggering rate of 40 Gigabits per second.”


It is one thing to make subjective claims about fidelity that can't easily be verified. But it is another matter entirely to say things like this that can easily be shown to be false:

1. Data error statistics are captured by the Ethernet interface and the operating system above it. If there were such errors, it would be trivial to show those statistics.
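
On a Linux machine, those counters are sitting right there in sysfs and take a few lines of Python to read (a minimal sketch; "eth0" is a placeholder for whatever your interface is actually called):

```python
# Read the kernel's per-interface error counters from
# /sys/class/net/<iface>/statistics/ (Linux-specific).
from pathlib import Path

def link_stats(iface: str) -> dict:
    stats_dir = Path("/sys/class/net") / iface / "statistics"
    return {f.name: int(f.read_text()) for f in stats_dir.iterdir()}

stats = link_stats("eth0")  # substitute your interface name
for key in ("rx_errors", "tx_errors", "rx_dropped", "rx_crc_errors"):
    print(f"{key}: {stats.get(key, 0)}")
```

If a cable were really causing unrepairable errors, these counters would climb; on a healthy home network they stay at zero.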

2. The Ethernet layer has error detection: a frame check sequence (a CRC) is appended to each frame so that the receiver can detect corrupted frames. This picture from Wikipedia shows it well (the "FCS" field):
[Image: Ethernet frame layout from Wikipedia, showing the FCS field]
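
A sketch of what the FCS buys you, using Python's zlib.crc32 as a stand-in for Ethernet's CRC-32 (the real FCS involves bit-ordering and complementing details, but the principle is identical):

```python
# Sender appends a CRC-32 over the frame; receiver recomputes and compares.
# Any corrupted frame is detected and dropped, never passed up as "degraded audio".
import zlib

def add_fcs(frame: bytes) -> bytes:
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def check_fcs(frame_with_fcs: bytes) -> bool:
    frame, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(frame).to_bytes(4, "little") == fcs

frame = add_fcs(b"payload bits on the wire")
assert check_fcs(frame)                          # intact frame passes
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert not check_fcs(corrupted)                  # a single flipped bit is caught
```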


3. The Ethernet physical layer does not do any error correction, but layers above most certainly do. Any file transfer that uses the HTTP protocol, for example, sits on top of TCP, whose main role is to achieve reliable data transmission. And again, if packets are lost, TCP keeps track of it.
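
This is also easy to check at home. On Linux, TCP's own bookkeeping lives in /proc/net/snmp (a minimal sketch):

```python
# Parse the "Tcp:" header/value rows of /proc/net/snmp (Linux-specific).
def tcp_counters() -> dict:
    with open("/proc/net/snmp") as f:
        header, values = [line.split() for line in f if line.startswith("Tcp:")]
    return dict(zip(header[1:], (int(v) for v in values[1:])))

tcp = tcp_counters()
print(f"segments in/out: {tcp['InSegs']} / {tcp['OutSegs']}")
print(f"retransmitted:   {tcp['RetransSegs']}")   # nonzero means losses were repaired
```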

4. Data losses at the network level happen at the packet level. A packet is simply a chunk of data; for bulk transfers the most common packet size is about 1500 bytes. If you lose 1500 bytes, that is a ton of audio data. The result is a highly noticeable glitch or pop, not any kind of subtle fidelity loss.
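
To put a number on that (simple arithmetic, assuming uncompressed CD-quality audio):

```python
# One lost 1500-byte packet at CD bit rate is milliseconds of missing sound --
# an obvious dropout, not a subtle loss of "fidelity".
packet_bytes = 1500
cd_rate_bps = 44_100 * 16 * 2     # 44.1 kHz x 16 bits x 2 channels = 1,411,200 bit/s

ms_lost = packet_bytes * 8 / cd_rate_bps * 1000
print(f"one lost packet = {ms_lost:.1f} ms of CD audio")   # ~8.5 ms
```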

5. Streaming protocols on the Internet may use UDP instead of TCP. Unlike TCP, UDP does not retransmit to recover lost data. But the streaming application above it will attempt to do so if it is the right thing to do (i.e., if the data can arrive before it is needed for output). If the data is truly lost, there are mitigations such as pausing to "buffer," muting the sound, dropping video frames, etc. Again, it is not a fidelity loss in the classical sense.
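
A quick illustration of why buffering makes recovery easy (the buffer depth and round-trip time below are assumptions for the sketch, not measurements):

```python
# With even a modest playout buffer, a lost packet can be re-requested many
# times over before the moment it is actually due to be played.
buffer_seconds = 5.0        # assumed playout buffer depth
round_trip_ms = 50.0        # assumed internet round-trip time

retries = buffer_seconds * 1000 / round_trip_ms
print(f"~{retries:.0f} chances to recover a packet before its deadline")
```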

6. Ethernet is rated for runs of up to 100 meters (330 feet). At patch-cable lengths, data losses simply do not occur; the signal is far too strong over such a short distance to lose data. But again, any such error is tracked and can easily be displayed.

7. As noted by watchnerd, the data rates for audio (and even video) streaming are ridiculously low. CD-quality audio runs at just 1.4 mbit/sec. Compare that to 100 mbit/sec for an obsolete Fast Ethernet connection, or 1000 mbit/sec for the gigabit Ethernet that is so common today, and you see that streaming does nothing to tax the link. Such transfers run just fine on Cat 5 cables and have no need of higher and higher speed cables.
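
And the arithmetic for the link-utilization point:

```python
# CD-quality audio uses a fraction of a percent of even an obsolete link --
# there is nothing for a "faster" cable to speed up.
cd_rate_mbps = 1.4
for link_mbps, name in [(100, "Fast Ethernet (obsolete)"), (1000, "Gigabit Ethernet")]:
    print(f"{name}: {cd_rate_mbps / link_mbps:.2%} of capacity")
```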

I can go on but the net is that they simply do not understand how the system works and clearly have not done any measurements to verify their assumptions about its operation.
 
watchnerd (OP, Grand Contributor)
amirm said:
That's actually pretty cheap for "high-end" cables of any sort! [snip] I can go on but the net is that they simply do not understand how the system works and clearly have not done any measurements to verify their assumptions about its operation.

Plus:

1. It has the same "last mile" logical fallacy as power cord tweaks. If these data errors are truly unrepairable, then most of them have already happened on "the internet" side of my router fence, and are thus beyond my control anyway... so what good would better home cables do?

2. Have they never heard of buffering?

3. I'm not sure this is a lack of understanding on their part. It's that explaining how the system really works doesn't justify 70x prices, so you have to invent new "theories".
 

Thomas savage (Grand Contributor)
watchnerd said:
Plus: 1. It has the same "last mile" logical fallacy as power cord tweaks... [snip]
Basically con men...

It's like sports drinks: you invent a disease (dehydration), then sell the cure (Gatorade), when in reality you're best off just drinking some water when you're thirsty.

In domestic audio this con job method is common. It's an epidemic.
 

Blumlein 88 (Grand Contributor)
Honestly, if you're not using this cable you have no chance of understanding the plot twists on Westworld...

As for music, might as well listen on a gramophone...

Is Wireworld where the third season of Westworld takes place? If so, I expect the plot twists to be really strange. Arnold must discover that host sentience is only possible with one type of internal wiring, yet there's no scientific reason for this.
 

Thomas savage (Grand Contributor)
Blumlein 88 said: Is Wireworld where the third season of Westworld takes place? [snip]
In season four Arnold reaches his zenith after upgrading with Entreq grounding products... He's kinda restricted to a chair now but says he's 'never been happier'... Bless.
 
watchnerd (OP, Grand Contributor)
Thomas savage said: In season four Arnold reaches his zenith after upgrading with Entreq grounding products... [snip]

Actually, the player piano is in charge of everything, if you pay attention to the songs it plays...
 

Blumlein 88 (Grand Contributor)
watchnerd said: Actually, the player piano is in charge of everything, if you pay attention to the songs it plays...
That brings up a good question. Player pianos in those days didn't play unless someone pumped a vacuum through them. How does the one in Westworld play? Is it electrical or electronic, and if so, is it using Wireworld wire? Or is Arnold the ghost in the machine of the player piano?
 

Phelonious Ponk (Addicted to Fun and Learning)
Blumlein 88 said: That brings up a good question. Player pianos in those days didn't play unless someone pumped a vacuum through them... [snip]

Like everything on Westworld, it runs on the heat coming off of Dolores.

Tim
 

amirm (Founder/Admin)
Here is a data point on error rates in home networks. The 48-port D-Link switch that runs my whole house has a web interface (it is a "smart switch") and keeps per-port error statistics. I just checked it after 58 days of uptime, and there is not one single transmit or receive error. This is a page from it:

[Image: D-Link smart switch web interface, per-port packet and error counters]


Take a look at port 44, which I have highlighted: 761,309,868 packets (chunks) of data have been transmitted with no errors. Likewise, that port has received 3,833,536,602 packets, nearly four billion, with absolutely no errors.

Anyone who thinks there are network errors in their home that these cables fix is simply making the wrong assumption. Smart switches like the one above are pretty cheap and can easily be used to gather data like I am showing.
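
For fun, here is what those counters imply as an upper bound on the bit error rate of ordinary home cabling (the average packet size is my assumption; the switch does not report the actual mix):

```python
# Zero errors over billions of received packets bounds the bit error rate.
rx_packets = 3_833_536_602      # port 44 receive count from the page above
avg_packet_bytes = 500          # assumed average packet size

bits = rx_packets * avg_packet_bytes * 8
print(f"{bits:.2e} bits with zero errors -> BER < {1 / bits:.1e}")
```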
 

fas42 (Major Contributor)
Whether there are errors or not is irrelevant. What counts is how much circuit activity is required to correct the errors. A poor CD transport trying to read a dodgy disc creates an "electrical storm" in the circuits as it reads the media and fixes the errors. All the digital circuitry that follows is 100% happy with the corrected, error-free data, but the analogue parts in that region "hear" the noise blasting away, and if they are not engineered robustly enough, the result is a subtle degradation of sound quality. That's what is happening with a huge array of these tweaks and weirdo things; from personal experience, this is where close attention to detail matters...
 

Don Hills (Addicted to Fun and Learning)
WALOB.
Do you have any actual measurements of the difference in noise (on the power and ground buses, etc.) between reading a good disc and a bad one? Do the measurements cover enough different mechanisms to support a general conclusion?
 

fas42 (Major Contributor)
So you're suggesting that on a typical consumer-grade CD transport the spectrum of noise measured in different parts of the component will always be identical, irrespective of the disc? That the engineers have done such a superb job that artifacts will be impossible to capture?

A general conclusion is that poor-quality engineering and cheap components create poor sound - otherwise, bargain items at Wal-Mart, etc., would sound superb. So, are these "toys," while other, "proper" items don't have these problems? Now, where's the magic line where one crosses over from one to the other? I don't see a sticker on the box that says "Real quality!"

One can do some experiments and realise that there is a continuum: at one end the system sounds terrible, at the other superb. Trouble is, there are setups dotted all along that line, continuously filling the space. The idea is to push one's own rig as far as possible towards the "good" end - I've found that there are no magic companies, and no amounts of money spent, that guarantee a certain position on that continuum...
 

Don Hills (Addicted to Fun and Learning)
So that's a "no" to having any evidence, then.
 

amirm (Founder/Admin)
fas42 said: So you're suggesting that on a typical consumer-grade CD transport the spectrum of noise measured in different parts of the component will always be identical, irrespective? [snip]
Do you listen to those different parts of the CD player, or just to what comes out of its audio connection on the back?
 

fas42 (Major Contributor)
Evidence that less-than-stellar engineering can cause variations in circuit behaviour? Not personally, but I suspect that one or two people have gathered some numbers...

In the airline industry and similar fields they talk of causal chains... a loose bolt somewhere ultimately causes a plane to crash, and hundreds of people die. Should someone therefore gather evidence that a loose bolt may cause accidents? In audio, the end result may be sound that is so-so - no ears were damaged in the "accident" - but causal chains are just as important, evidence or not.
 

Don Hills (Addicted to Fun and Learning)
Something I saved some time back (not written by me):

ffracer said:
Interesting. What would be a bad case of jitter (amount of jitter in picoseconds) on a CD transport vs. the graphs? That would really help answer the question.

That's the squillion dollar question! What we do know is that it can be shown mathematically that you need less than 121 picoseconds of random sampling clock jitter at 20 kHz to ensure FULL 16-bit performance. That requirement drops to 0.474 ps for a 24-bit system. Remember, we are talking about random jitter, and an input frequency of 20 kHz, so there is not a degradation over the entire audio band. The first question becomes: is this even audible? Published papers tend to suggest that the answer is: NO. Using random clock jitter, the jitter level has to be MUCH higher before degradation is audible -- at the nanosecond level and above, according to these publications. However, many ADC/DAC designers believe that sinusoidal jitter is (a) more likely in practice, and (b) more likely to be audible. Of course, one still has to take the frequency and level of the tonal jitter into account, along with masking effects, to determine whether it will be audible or not. The level of sinusoidal sampling clock jitter that audibly degrades a sustained piano note, say, may be very different to what is needed to audibly degrade an orchestra/band playing more complex sounds. (It should be obvious that you need your best [lowest phase noise] clock at the ADC, since any sampling jitter due to the clock gets 'baked into the cake' during A-to-D conversion.)
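
Those figures check out against the standard worst-case bound for a full-scale sine input, t_j < 1/(2*pi*f*2^N) (a quick verification, not part of the original post):

```python
# Maximum random sampling jitter that keeps slew-induced error below 1 LSB
# for an N-bit system at input frequency f.
from math import pi

def max_jitter_ps(freq_hz: float, bits: int) -> float:
    return 1e12 / (2 * pi * freq_hz * 2**bits)

print(f"16-bit @ 20 kHz: {max_jitter_ps(20_000, 16):.1f} ps")    # ~121 ps
print(f"24-bit @ 20 kHz: {max_jitter_ps(20_000, 24):.3f} ps")    # ~0.474 ps
```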

Now, what does this have to do with the audible differences between various CD pressings? IMO, not a lot! Jitter often gets mentioned with regard to differences between pressings. However, the jitter due to inaccuracies in the pit/land structure on a CD is part of what is called transmission jitter. This is quite different to jitter on the ADC/DAC clocks. When CD was designed, all aspects were fully characterized, which led to its forward error correction schemes, channel coding, etc., etc. One of the key design goals was that it should be cheap/easy to duplicate. This is not possible if you have a very strict tolerance on the length of the pit/land structures (and other disc parameters). So, the channel code and decoding system were chosen to allow some 'slop'. In fact, the Red Book specifies a maximum disc jitter level of 35 nanoseconds! This is orders of magnitude larger than the levels we were talking about above so it must be audible, yes? Well, no!

As I'm sure you and Dr. crap... know, NONE of the data on a CD makes it to the DAC! Let me repeat: NONE of the data on a CD makes it to the DAC. Due to sample offsetting, left/right channel interleaving, convolutional data interleaving, Eight-to-Fourteen-Modulation (EFM), and the use of DC-free run-length limited coding, a CD player has to read a large amount of serial data, deserialize it, correct it (if needed), unmap the EFM, and undo the interleaving/offsetting to arrive at the parallel 16-bit left/right audio sample values that need to be presented to a DAC. All this is done in DSP and buffer memory. Now, what about the pit/land jitter? It is ALL removed in the data slicing process. The light reflected back from the 3T to 11T pit/land run-lengths gives rise to an AC signal at the output of the photosensor. Due to the properties of the coding used, this AC signal is sliced, and a determination made about the bit values. At this point, one of two things can happen: the values are determined correctly, or not. However, their value is 'set' and they are loaded into DSP/buffers until you have enough data to do error detection/correction, de-interleaving, etc. That is, the audio sample values are computed anew within the player. The transmission jitter is completely removed. If it is large enough, it will cause bit errors, of course. If there are only a few, they will be corrected, if they are constant, playback will fail.
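
A toy sketch of that re-clocking argument (an illustration only, not any player's actual implementation): sample values recovered from the disc are buffered, and output timing comes entirely from the player's own clock, so arrival-time jitter never reaches the DAC.

```python
# Values arrive with jittered timing, are stored in a FIFO, and are read out
# on a clean local clock. Output timing is independent of the input jitter.
import random

FS = 44_100                                    # output sample rate (Hz)
arrivals = [(i / FS + random.gauss(0, 35e-9), i) for i in range(5)]  # ~35 ns disc jitter
fifo = [value for _, value in arrivals]        # timing is discarded, only values kept
playback = [(i / FS, fifo[i]) for i in range(len(fifo))]             # re-clocked locally

for (t_in, _), (t_out, v) in zip(arrivals, playback):
    print(f"sample {v}: arrived {t_in * 1e6:.4f} us, played {t_out * 1e6:.4f} us")
```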

At this point someone will make the case for coupling the DAC clock to the jittery clock recovered from the disc, or noise from the laser servos finding a way into the output stage, etc., etc. Of course, there are bad ways to implement things, but the presence of these issues is now well known, and the better players mitigate against them. As an example, consider the Sony SCD-1 from 1999. I chose this player because Sony produced a fairly detailed document on its design, and their methods of dealing with these issues, which can be found here:

http://www.docs.sony.com/release/SCD1_TWP.pdf

The player has also been analyzed in depth here:

http://www.stereophile.com/hirezplayers/180/index.html

and if one looks at its jitter spectrum:

http://www.stereophile.com/content/sony-scd-1-super-audio-cdcd-player-measurements-cd-player-3

one can see that ALL the test signal data-related sidebands are below -120 dB. This level of performance is not unique to the SCD-1, but shows what is achievable in a well-designed system.

In the early days of SACD development, one of the Philips disc technologists who was busy making the hybrid disc a reality came to see me (I was at Philips Research at the time). He had read about audible differences between CD pressings, and wondered if there was something in it. His idea was if we could determine which physical disc parameters influenced sound quality, then we could determine some optimum combination, and use this as a carrot to the large disc replicators to get on board: not only would they get SACD technology, but they would also get the best sounding CDs possible (which would also be a feather in SACD's cap if we could show that the CD layer of a hybrid was the best). Since I had links to professional, golden-eared listeners, some of whom had mentioned pressing differences to us, I agreed.

We obtained digital masters directly from some key recording/mastering engineers, and also bought UK, US, European and Japanese pressings of the titles (all known to be sourced from the same digital master files). One of the titles, by a major band, was on both EMI and Columbia in various territories at the time, so we were sure the pressings in that case were made on very different equipment. We had a separate mastering facility make CD-Rs from the digital masters, and my colleague also made hybrid SACDs. Discs were then sent to the golden ears, and all they were asked was to rank them.

My colleague then did an extensive series of measurements to determine disc jitter, pit/land lengths, track pitch, flatness, eccentricity, tilt, reflectivity, etc., etc. We checked C1 and C2 error rates, looked at the EFM eye diagrams, considered the AC signal level of the critical 3T code value, etc., etc. We then tried to correlate the measurements with the rankings. There was no consistency to the data. A parameter that might be close in value on several highly rated discs would then be vastly different on another highly rated disc. No matter how we re-arranged the data, we could not find a magic subset of parameters (and their values).

Then we thought more about it. The Red Book does not dictate exact values for disc parameters, nor does it stipulate which servos, laser pick-ups, decoding ICs, etc. MUST be used; instead it defines all the physical parameters in terms of [typical value +/- tolerance]. Disc replicators are free to vary the parameters as they see fit, provided ALL parameters fall within their tolerances. Similarly, player manufacturers are free to choose optical pick-ups, clamping schemes, decoding schemes, etc. such that they can cope with any disc having any combination of parameter values (within the tolerances). Think about optical pick-ups: there are those that move along a radius, those that move along an arc, and those that are fixed and move the disc instead. The signal from the photosensor can contain HF EQ in order to make reading easier, but this is not mandatory. There are various schemes to deal with jitter, there are different DAC chips, different crystal oscillators and ways of shielding them, different power supply strategies, etc.

Now, given the variety in player designs, why would one set of disc parameters provide best sound quality in ALL cases? Even if we assume that the phenomenon is real, we should not expect that a 'grail' pressing can be found that is optimal for all players. So, at the very least, stick with what you like!

For myself, I was given a complete set of the discs for a certain title. I had identical players for fast comparisons, I had Stax headphones, I had two listening rooms, one of them fully floating with an NR = 5 dB (so very quiet), I had no end of high-end monitoring equipment. At times, I could convince myself that I heard differences, but every time I went back to the other versions, I would hear the exact same things, at the exact same relative level, with the same placement in the image, etc. (For the record, I do not consider myself a golden-eared listener!)

ffracer said:
One question that rarely gets mention is what is the effect of jitter on DVDs and Blu-Rays?

If there is an effect, it is the same as for CD, since all optical media work in the same fashion. Moreover, hard-disks work in essentially the same way too. So, if there is a problem it should affect all systems, and yet it is common here to see claims that digital music played from hard-disk somehow magically works, and yet CD is subject to all kinds of issues. (Let's ignore known bad implementation issues here, and consider only well-designed CD players/transports.)

- Black Elk, forums.stevehoffman.tv, Jan 11th, 2013
 

cjf (Active Member)
[QUOTE="
"Category 7 cabling was created to satisfy the demands of 10 Gigabit Ethernet. According to Wireworld, “Even though most media networks now run below that speed, cables that support higher speeds have been found to improve the quality of audio and video streaming. Those improvements are possible because streamed signals suffer from data errors that cannot be repaired by the error correction systems that preserve file transfers. The proposed standard for future networks is Category 8, which extends network speeds to the staggering rate of 40 Gigabits per second.”[/QUOTE]

I would love to know how their cable, with no active circuitry built in, is able to correct any errors, if such errors existed in the first place.

What's next, Zero Crystal Glass tubes for fiber optic cables that are painted with a green pen to keep the light from escaping :)
 