
When is a broken panther justified?

Yes, it is. The symbol should track the performance results, not expectations or wishes. The review text should discuss reasons for the results, including the "apparently broken" state.

This corresponds to the Results section and the Discussion section in a scientific paper.

The L30 example you give relates to fitness for a particular purpose, not performance per se. Again, that should be disclosed in the review text (though I am not opposed to a 2nd panther wearing a pair of broken headphones).

Any other examples?

Bottom line, people can always disregard the panther (it is unlikely to crouch and attack unlike real ones).

Back in the day when magazines did proper technical evaluations, any equipment that didn't meet specification would be checked with the manufacturer before publication, in case the review sample was faulty or damaged. If possible, another sample would be tested and the results reported, including the fact that the first one was faulty. I accept that this allowed for the possibility that the manufacturer sent a specially selected sample the second time, but nevertheless, readers would have been notified that the manufacturer's quality control may not be infallible.

I think this is a fairer way of treating equipment that's clearly sub-standard. Give the manufacturer the opportunity to respond before publication.

Magazines then and online resources now do have a responsibility for fairness; giving a product a poor review on the basis of one sample could condemn a good product to oblivion. Also back in the day, one magazine gave the excellent A&R Cambridge (as they were) C200/P200 pre-power amp a grudging, totally subjective review, nothing like the gushing reviews given to other more fashionable products, and it killed the product.

Reviewers do have a responsibility for fairness.

S.
 
people can always disregard the panther
But do they? My feeling is that the panthers have gained rather more importance than they deserve.
This corresponds to the Results section and the Discussion section in a scientific paper.
But Amir's reviews are not scientific papers. We can really never know how representative an individual device review is without knowing real world production tolerances and inter-sample variability ...
 
But since it didn't exhibit that 'feature' during Amir's testing, it retains its golfing panther. Is that fair?
No, it is not. This review has been an inflection point in how much attention I give ASR.
 
Agreed. But if the L30 had zapped a pair of Amir's headphones, and thus presumably earned a headless panther, would that have been fair?
Topping would have immediately looked into it, sent feedback, etc. And I have little doubt things would change accordingly. Consider the several speaker reviews revised because of feedback from the companies involved. Whereas there's PS Audio gear that got a drubbing here with no feedback, because the company really has nothing to back them up. Topping has been one of the good actors here; have some perspective.
 
The post further down the thread where Amir mentions the issue amounts to the headless panther. He just did not dare to change the pic.
 
BTW, scientific papers often report anecdotal observations. Statistical validity or inter-sample variability are recent additions to the scientific process which date mostly from the advent of electronic computers.

And such non-statistical observations are still quite common in medical journals, esp. now in relation to covid and vaccine results.

I'm sure you could buy 20 samples of a DAC and send them in, though sparse sampling theory à la Box, Hunter & Hunter might provide a more efficacious use of time and effort.
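For anyone curious what a Box, Hunter & Hunter-style design looks like in practice, here is a minimal sketch; the factor names are purely hypothetical. A 2^(3-1) half-fraction covers three two-level factors in four runs instead of eight, at the cost of confounding the third factor with a two-factor interaction:

```python
from itertools import product

# Hypothetical illustration of a 2^(3-1) half-fraction design in the
# Box, Hunter & Hunter style. With three two-level factors (say,
# sample unit, test cable, warm-up time), a full factorial needs
# 2^3 = 8 runs; the half fraction needs only 4, using the
# generator C = A*B (so C is confounded with the A*B interaction).
ab_levels = list(product([-1, +1], repeat=2))          # levels of A and B
half_fraction = [(a, b, a * b) for a, b in ab_levels]  # C = A*B

for run, (a, b, c) in enumerate(half_fraction, 1):
    print(f"run {run}: A={a:+d} B={b:+d} C={c:+d}")
```

The point is simply that a well-chosen fraction of the full design space can estimate main effects with far fewer runs than exhaustive sampling.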
 
Topping has been one of the good actors here, have some perspective.
As I said, I'm not criticising Topping, just trying to show that the panther score is more than a little arbitrary.
Topping would have immediately looked into, sent feedback, etc.
Has every company whose products have been reviewed here been given the opportunity to comment? Is it incumbent on any and every manufacturer to have an active profile on ASR to get good reviews? Have some perspective yourself ...
 
As I said, I'm not criticising Topping, just trying to show that the panther score is more than a little arbitrary.

Has every company whose products have been reviewed here been given the opportunity to comment? Is it incumbent on any and every manufacturer to have an active profile on ASR to get good reviews? Have some perspective yourself ...
I can't think of anything preventing any company sending feedback to this site. Did you check out Mikey's flaming feedback here last week?
Search results | Audio Science Review (ASR) Forum
 
Nor can I, but that's not really the point is it?
I have no idea what your point is. This place, as far as I can tell, is effectively transparent. If a company objects to a review and can back it up, we find out. Unlike what happens at just about any other website involved with reviews of audio gear.
 
Would people be happy if Amir replaced the headless panther with a pantless hamster?

No thanks

 
I have no idea what your point is.
My point is that the panther is too arbitrary to have the importance it seems to have achieved – which in turn is the stated subject of this thread.
If a company objects to a review and can back it up, we find out.
We might find out - if the company is made aware of the review - if the company is even aware that ASR exists - if the company cares - if the company or product even still exists ...
 
Topping reacted wrongly. They did not recall all the faulty items; a lot of them are now being sold on eBay as "brand new", with lots of buyers in danger of burning out their headphones if not their heads. Great!!!!
 
BTW, scientific papers often report anecdotal observations. Statistical validity or inter-sample variability are recent additions to the scientific process which date mostly from the advent of electronic computers.

And such non-statistical observations are still quite common in medical journals, esp. now in relation to covid and vaccine results.

I'm sure you could buy 20 samples of a DAC and send them in, though sparse sampling theory à la Box, Hunter & Hunter might provide a more efficacious use of time and effort.

:facepalm:
Science has required a valid mechanism for drawing conclusions since before the days of Newton and his anecdotal observations of an apple falling. The required validation depends on the type of conclusion being formed. If you want a statistical inference, then you require statistical validity of the sample. Not all scientific theories do.

The advent of computers (along with electronic data transmission) has not changed that; it has only made statistical collection and inference easier. You don't need a statistically valid number of people to observe that the earth is round before accepting it as valid, but can do so with corroborating evidence and experiments, for example.

Science also requires repeatability of whatever the claim is as a basic tenet, so strictly speaking none of the conclusions on this site are valid as scientific inferences. There is a huge and significant difference between a sample size of one and a sample size of two, for example. So saying "we cannot do 20, so one should be valid enough" isn't meaningful.

Let us not confuse the use of science-based apparatus and science-based procedures with scientific conclusions. This is an engineering measurement site, and an anecdotal one, which suffers from a lack of controls on history, condition, sample variability, repeatability, etc.

Doesn't mean it is useless, of course.
 
Vasr, I am a research scientist. My comment as to statistical techniques stands. Nothing you said had any bearing on it.
 
BTW, scientific papers often report anecdotal observations.

Is that still the case even as the science becomes more certain, valid, and established over time?

I can imagine that at the forefront of a science, as theories are posited, anecdotal evidence would still have a role to play; but as the science becomes more axiomatic, is anecdotal evidence dismissed or relegated to a lesser role?
 
Vasr, I am a research scientist. My comment as to statistical techniques stands. Nothing you said had any bearing on it.

The problem isn't with your mention of statistical techniques but with missing the point that not just any reporting is valid because some domains "report" anecdotal observations that can be accepted for practical reasons. If the review here is simply seen as another anecdotal observation, not a science-based conclusion about the performance of the device, then there is no disagreement.

No conclusions in any science would be accepted via anecdotal observations (simply reporting does not imply acceptance of the conclusions or their validity). Depending on the domain, they would need to be deterministically repeatable, corroborated as valid by other direct or indirect evidence, or have the statistical validity of the observations themselves established. None of which applies to a single measurement of a device here.

Yes, I was a research scientist as well in a former career, not that appeal to authority is a valid argument.
 
Is that still the case even as the science becomes more certain, valid, and established over time?

I can imagine that at the forefront of a science, as theories are posited, anecdotal evidence would still have a role to play; but as the science becomes more axiomatic, is anecdotal evidence dismissed or relegated to a lesser role?

Areas of study tend to expand. You might not see much on HIV reported anecdotally these days, but I do see several anecdotal papers in JAMA, the New England Journal of Medicine, etc. on localized reactions to covid vaccine injections. Most of what scientists do isn't really science.
 