Makes me sad to hear all these reports. It's like taking one step forward and two steps back. I really hope there is an official response to this to ensure there's no long-lasting damage to the brand's reputation.
In terms of what this means for @amirm and his testing/recommendation process - I understand that he does not do long term testing due to obvious time concerns. However, in the interest of pushing the boundaries of the audio industry once again like he’s done with SINAD, it would be fantastic if he could rig up some kind of accelerated torture test for audio equipment. Especially if these items are being recommended for purchase. Obviously, this would not be done to user-submitted units without prior approval.
This gets asked often. Understandably. Let me try to give an explanation of the actual challenges...
Quality and reliability are statistical metrics, and as such cannot be measured or observed on a single sample. You need a population, and that population needs to be worn out to failure (typically through accelerated life testing, often at elevated temperatures and voltages). Some of those wear-out tests need to be done on subsystems (you can't bake a display and a power supply at the same temperature!). Depending on the defect rate you wish to drive to, this will involve tens or hundreds of units and subsystems aged to end of life. End of life means the units are dead.
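To put rough numbers on "tens or hundreds of units" and "elevated temperatures", here is a small sketch of the two standard formulas behind this: the Arrhenius acceleration factor (how much faster components age at a stress temperature) and the zero-failure success-run sample size. The activation energy and temperatures are illustrative assumptions, not anything specific to Topping's products:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c: float, t_stress_c: float, ea_ev: float = 0.7) -> float:
    """Arrhenius acceleration factor: how much faster a part ages at the
    stress temperature vs. the use temperature. ea_ev is an assumed
    activation energy; 0.7 eV is a commonly used default, the real value
    depends on the failure mechanism."""
    t_use = t_use_c + 273.15      # convert Celsius to Kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1 / t_use - 1 / t_stress))

def zero_failure_sample_size(reliability: float, confidence: float) -> int:
    """Number of units that must all survive a full-lifetime-equivalent
    test with zero failures to demonstrate the target reliability at the
    given confidence (standard success-run formula)."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Baking units at 85 C instead of a 40 C use temperature ages them ~26x faster:
af = arrhenius_af(40, 85)
# Demonstrating 99% reliability at 90% confidence needs 230 units,
# every one of them run to a full lifetime equivalent with no failures:
n = zero_failure_sample_size(0.99, 0.90)
```

Even under these fairly modest targets, that is hundreds of units destroyed per product, which is why this only makes economic sense for the manufacturer.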
This is a difficult job even for the manufacturer. To do it right, the engineering team needs detailed knowledge of the design, the components, the use conditions, even the software stack. They need to simulate across multiple use conditions to identify the potential failure mechanisms. They then need to develop all of those stress-to-fail conditions so they can ensure reliability in the field, as well as manufacturing screens for any residual reliability/quality issues. Even this is not enough, since the biggest source of reliability data is the customer! And, judging by the totally incompetent customer service from the resellers, Topping is not likely getting good field-reliability returns from which it could make improvements.
Lots of people have suggested that ASR can or should do quality or reliability testing.
NO. This is Topping's job, and only they could have done it right. Any attempt to judge reliability or quality from a single unit on Amir's test bench would have been misdirected. And it seems Topping didn't do this hard work.