Here, I'll simplify it for you.
In my professional opinion, I think the following:
Analytical data pulled from "used" cables is very useful, as it demonstrates what is actually available from tip to tip for a consumer during real use. I do not believe new cables are exemplary samples yet, as I am still on the fence about cable break-in.
I believe a test must be performed on the spot: without changes to listener position, without opinions shaped by group dynamics, and without time itself being used as trickery to determine whether listeners can hear a difference.
One of the flaws in testing anything is TIME. Time creates uncertainty, whereas immediate results can reveal an actual, real change rather than a cognitive perception.
I do believe cables have a range, and I think it would be impossible to determine the benefit of any cable if the two being compared were too close in initial specifications; a variation of a fractional percentage in quality would make for a failed test. One complication is that almost no pure-copper-only cable from the numerous companies out there can be shown to differ from another; they are similar, if not from the same manufacturing processes or supply chains. You cannot establish a difference when the standards are so similar. An example would be a cheap copper cable against maybe the best mid-priced copper cable: it's still just copper, and that name-brand company probably isn't being honest when it claims high-grade copper to justify an upsell. I raise this because name-brand companies often label run-of-the-mill product as high grade.
Now, I do think that on a statistical scale, going from the cheapest materials to the highest-grade materials, we will see a noticeable difference. This is the testing that should be practiced: a run-of-the-mill cable against a seriously high-end cable. There is no slider for the in-between, or for cables leaning to one side; you will either have a noticeable difference or not. Plain and simple for the user.
The test procedure I would agree could actually validate things blindly is:
Two systems side by side, say on opposite sides of the room, so the consumer only has to turn around. No gaps in playback; just mute one side and unmute the other.
Each system has identical hours on its components and speakers, preferably extremely high-resolving ones. I vehemently believe cheap gear may lack the ability to resolve or present these artifacts or benefits.
Two cables tested side by side, no gaps, on identical hardware with identical hours on the components. That means not two systems of a different nature; they can be close to new, but too often testing is done with whatever hardware people have lying around, and that makes for lackluster results.
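To show how results from this kind of no-gap, mute-switch session could be scored, here is a minimal sketch (the trial count, the listener answers, and all function names are hypothetical, not anything from an established test spec). It uses a simple one-sided binomial check: if the listener names the playing system correctly far more often than a coin flip would, the difference is probably audible.

```python
import random
from math import comb

def binomial_p_value(correct: int, trials: int) -> float:
    """One-sided probability of getting at least `correct` hits out of
    `trials` purely by guessing (50/50 chance on each trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def score_blind_ab(listener_answers, true_sequence) -> int:
    """Count the trials where the listener named the playing system correctly."""
    return sum(a == t for a, t in zip(listener_answers, true_sequence))

# Hypothetical session: 16 mute-switch trials, system 'A' or 'B' playing at random.
random.seed(0)
true_sequence = [random.choice("AB") for _ in range(16)]

# A listener who truly hears no difference is effectively guessing:
guesses = [random.choice("AB") for _ in range(16)]
hits = score_blind_ab(guesses, true_sequence)
print(f"{hits}/16 correct, p = {binomial_p_value(hits, 16):.3f}")

# Rule of thumb under these assumptions: with 16 trials, 12 or more
# correct gives p < 0.05, i.e. the listener is probably not guessing.
print(binomial_p_value(12, 16) < 0.05)  # True
```

The design choice here matches the argument above: the switch is instantaneous (no gap, no time for memory to fade), and only the tally across many short trials decides the outcome, not anyone's impression after a long listening session.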
My argument is that time itself is a results problem, and swapping in new cables is a results problem. We have to take into account all the factors people cite when they claim to prefer one cable over another. There has never been a side-by-side mirror-system test; only a setup where the cable is the sole difference, with muting from one side to the other, would make side-by-side testing empirical.
If you wanted to pit the ten best cables on the market against a cheap cable, you would need ten systems all lined up and wired identically, with only the cable changed. You can't just swap cables, or run an A/B on different hardware, and expect people to figure it out.