
Is that company making good products?

Rick Sykora

Major Contributor
Forum Donor
Joined
Jan 14, 2020
Messages
3,513
Likes
7,015
Location
Stow, Ohio USA
Am often asked whether I think a company is making good products. In the objective environment of ASR, you might think this has a simple answer, but many seem to struggle with the difference between being a good company and every product being good technically. As an automation product manager at an ISO 9001 company, I worked on products ranging from computers and controllers to I/O and related software. If you are not familiar with ISO 9001, in short it does not ensure quality, but it does require that you demonstrate you have known, consistent processes in place to make products. You can find more details elsewhere, but am just using it as a backdrop to introduce a product risk assessment approach that we used to help diverse teams agree on risk.

The following chart is an example of a risk assessment matrix that we often used to get engineering, marketing, quality and other team members to agree upon the priority of an identified product issue. My employer had evolved this to drive specific actions depending on the determined priority. You might notice that it lacks considerations such as brand reputation, market alternatives and cost of remediation. These are important and would be part of the evaluation effort. In the end, management would use the overall team input to determine what action (if any) was taken.

[Image: Risk Assessment matrix]


While you might expect that most audio companies are not often dealing with injury or death as a severity, I think this matrix helps provide context for why an audio product issue (a minor cosmetic or functional flaw) may not be a high company priority.
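To make the mechanics concrete, here is a minimal sketch of how such a matrix might be encoded. The scale labels and priority assignments are illustrative assumptions for this example, not my former employer's actual values.

```python
# Illustrative risk matrix: priority is looked up from severity x likelihood.
# Labels and assignments are assumptions for the example, not real company values.
SEVERITY = ["cosmetic", "functional", "property damage", "injury", "death"]
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "frequent"]

# Rows = severity (low to high), columns = likelihood (low to high).
PRIORITY = [
    ["low",    "low",    "low",    "medium",   "medium"],    # cosmetic
    ["low",    "low",    "medium", "medium",   "high"],      # functional
    ["low",    "medium", "medium", "high",     "high"],      # property damage
    ["medium", "medium", "high",   "high",     "critical"],  # injury
    ["medium", "high",   "high",   "critical", "critical"],  # death
]

def assess(severity: str, likelihood: str) -> str:
    """Look up the priority bucket for a severity/likelihood pair."""
    return PRIORITY[SEVERITY.index(severity)][LIKELIHOOD.index(likelihood)]

print(assess("cosmetic", "frequent"))  # -> medium
print(assess("injury", "possible"))    # -> high
```

Note how a frequently occurring cosmetic flaw still lands in a modest priority bucket; that is exactly the dynamic that frustrates customers but makes sense from the company side.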

In the internet age, we have all become more sensitized to brand reputation. While no doubt fortunes are being made and lost on perception, it is still a soft issue. Am not aware of a standardized assessment of brand reputation, but would be interested if anyone has one. Did some quick perusing for a structured approach, but did not find one.
 

anmpr1

Major Contributor
Forum Donor
Joined
Oct 11, 2018
Messages
3,722
Likes
6,405
1) While you might expect that most audio companies are not often dealing with injury or death as a severity... 2) Am often asked whether I think a company is making good products.

1) When I was growing up, it was not uncommon for gear heads to build their own components from kits. Dyna, Heath, Scott, HK and the rest of them. This mostly required the use of a soldering iron, and a few other common tools laid out on a kitchen table.

I pulled out my ST-70 instructions (circa 1960). In 13 pages of instructions, the only 'warning' to the consumer was a reminder to install the metal cage if anyone or any pet could possibly touch the tubes, since they get very hot. Not even a caution about dripping hot solder on your fingers, or accidentally touching the tip of the iron (which of course everyone did).

Today, almost any consumer product has one or two pages of operating instructions, but four or five pages of warnings: don't eat this little packet of whatever is inside; don't take a bath with the device, if it is plugged into a wall socket; don't give your kid the plastic bag it came in to play with, because a plastic bag is not a toy; and whatever you do, don't take the top off to look inside.

Back in the '60s no one I ever heard of suffered injury or death from their Dynakit. I don't know how we did it. Without five pages of survival warnings, we were definitely fortunate. Boomer luck, I guess. :)

2) As far as brand reputation and 'good' products? For the consumer it's always a toss-up. Longevity in the marketplace is one way to judge, but it is not always a sufficient one.
 
OP
Rick Sykora

Rick Sykora

Major Contributor
Forum Donor
Joined
Jan 14, 2020
Messages
3,513
Likes
7,015
Location
Stow, Ohio USA
While longevity is an indicator (notably for smaller companies), even those with questionable claims seem to be sustainable if they generate enough customers. This is probably truer today than it has been in the past. An influencer simply needs to be well liked, even though they may not have much real credibility.

As for companies with tangible products, I think the key is a desirable product that is reasonably affordable and supported. The 4 Ps of marketing (Product, Price, Place and Promotion) still hold up well as a basic tenet. As for a structured way to assess a good company, am sure there are decent models to be found. As with some aspects of any assessment, the metrics may be hard to determine. When this happens, many simply fall back on personal experience/opinion.
 

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,656
Likes
5,819
Location
US East
The chart you showed is usually part of the formalized process of FMEA (failure mode and effects analysis). FMEA was an activity I regularly participated in when I was working in product design. The most useful part of the exercise, IMHO, was to seriously think about (brainstorm) all the possible ways your design can go wrong, and what the effects of those failures can be. To me, the numbers are just a paperwork formality, there to document that an FMEA has been done. Competent engineers will know what is required to address each failure mode without the calculated numbers. If the ranking of RPNs (risk priority numbers) doesn't fit the intuitive feel (e.g. certain failure modes have too low or too high RPNs, based on the collective experience or intuition of the participants), the inputs (S for severity, O for occurrence, D for detection) will be "adjusted" to make the outcome more "reasonable", since the S, O, and D values are usually subjectively assigned to begin with.
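For anyone unfamiliar with the arithmetic: the RPN is conventionally the product of the three scores (RPN = S × O × D), each typically on a 1-10 scale. A minimal sketch, with failure modes and scores invented purely for illustration:

```python
# Conventional FMEA arithmetic: RPN = S (severity) x O (occurrence) x D (detection).
# Failure modes and 1-10 scores below are invented for illustration only.
failure_modes = [
    # (description,                      S, O, D)
    ("output relay fails closed",        9, 2, 6),
    ("volume pot channel imbalance",     3, 5, 2),
    ("firmware mutes DAC on lock loss",  6, 4, 8),
]

# Sort highest-risk first and print each RPN.
for desc, s, o, d in sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"RPN {s * o * d:4d}  {desc}")
```

The "adjusting" I described amounts to nudging S, O, or D until this sorted list matches what the room already believes.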

Judging by many of the products we have seen here (e.g. Topping L30, Schiit Sol, Carver Crimson, ... and many others), I seriously doubt FMEA is a tool used by many of the consumer audio equipment manufacturers, especially the smaller ones.

And don't get me started on (the lack of) FMEA on user interface designs.

 

Norcal

Member
Joined
Sep 29, 2022
Messages
35
Likes
45
NTK said: The chart you showed is usually part of the formalized process of FMEA (failure mode and effects analysis)... I seriously doubt FMEA is a tool used by many of the consumer audio equipment manufacturers, especially the smaller ones.

I spent many years doing risk management for multi-national medical device companies, so this construct is very familiar to me. The construct can be easily adapted (or downgraded, so to speak) to a less-risky environment. If you think of the 'hazard' as an 'undesirable event', you can populate that top row of Rick's chart with a nice continuum of non-safety stuff quite easily and still get the benefit of using the construct, which is really about where you place your efforts and rigor in terms of decision-making in the design process. I agree with NTK that this level of rigor probably is not followed in the audio world very often.

As an aside, in recent times much of the implantable med device world has migrated away from the use of detectability in the risk equation, favoring severity and occurrence as the two factors to weigh.
 

GXAlan

Major Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
3,866
Likes
5,953
If you look at companies like Sony, Genelec, and Yamaha, all of their electronics manufacturing is ISO 9001. I don't know if D&M is ISO 9001, but the way Classe talks about the level of documentation required when they were acquired and moved production to Shirakawa Audio Works, it certainly sounds like they follow a similar process. Under Masimo, I can only imagine that integration of the quality system is in process.

What is also important to add is that ISO certification adds costs, both for the "extra bit" of time and effort to account for rare quality issues and for ongoing third-party audits.

Small boutique manufacturers like Peachtree can subcontract to ISO 9001 contract manufacturers, but I don't know if their design process follows the same rigor. It's very likely that Topping uses contract manufacturers who are ISO 9001 certified, but again, it is not clear their design process does.
 

Plcamp

Addicted to Fun and Learning
Joined
Jul 6, 2020
Messages
860
Likes
1,315
Location
Ottawa
Similar concepts were routinely applied in all areas of product design at the telco companies I worked for.

Even the change of the shape of a button on a handset was subjected to extensive trial, measurement and satisfaction feedback before being committed to volume product offerings. Such attention to every detail was critical, because any mistake on a product intended for high volume that’s as ‘in your face’ to real end users as their business phone could jeopardize major equipment sales.

And hardware was less of a problem than software, particularly as new digital features were first introduced. Nobody had a good, proven process for software.
 
OP
Rick Sykora

Rick Sykora

Major Contributor
Forum Donor
Joined
Jan 14, 2020
Messages
3,513
Likes
7,015
Location
Stow, Ohio USA
Good point that ISO certs may not cover all phases of a product lifecycle. And should emphasize that being certified does not ensure product quality either. I suggest it is the discipline of having structured approaches to key processes that matters more.

In any case, there are risks to being in an (electronic) product business. Those risks include possible harm and/or property damage. I have seen threads here that try to imply that certain companies are not reputable because they have a product defect. Any product can be defective simply due to human error, and this does not make them a bad company. I would judge them more by how they address a problem once it is identified.

Others have suggested that safety certifications for individual parts might be sufficient to indicate the product is safe. But just because a component was certified under certain conditions does not necessarily mean the product was. All certifications add product cost, and some are prohibitively expensive for a smaller company. It was mentioned that Purifi amp modules do not have the same safety certs as their competitors. This may be due to cost or the time involved to get certified, but would not use it as the sole factor in judging whether the product is safe.

I have some experience that I thought might be productive to share. Am sure others do as well. I do not have a structured way to judge whether a company produces good product. Am hoping this thread may help others understand what may go into making a good judgement call. Am hoping to learn along the way too!
 

Norcal

Member
Joined
Sep 29, 2022
Messages
35
Likes
45
In the broadest sense, here’s the way I personally see a ‘structured’ approach. You’d need the following components:
  1. A list of factors that are important to you
  2. Some sort of weighting, because most likely all those things in (1) are not equally important
  3. A means to evaluate those factors
  4. Data to perform the evaluation
  5. Some sort of cutoff in terms of acceptability to draw the line between ok and not ok
As an aside, I think most folks do this mentally to some extent, whether they realize it or not, for many decisions (certainly important ones); however, if every decision we made became a complicated accounting exercise, we'd be very hard to live with in the real world.

For products like those on ASR, my list would look something like this (and I'm sure I've forgotten something important):
  1. Safety - I honestly don’t spend time worrying over safety of non-homemade electronic components in this day and age. Catastrophic events seem to be quite rare. Others may have greater angst.
  2. Feature set - Is the feature set reasonably aligned with my current and likely future use cases? Use cases vary a lot, though, and a good product for one may not be a good product for another because of that (witness the non-recommendation of the WiiM Mini because it wasn’t Roon compatible)
  3. Usability - Does it work intuitively, is the UI easy to navigate, is any software buggy etc.? Again, this can be highly personal as different folks have different tolerance levels for things like this.
  4. Appearance - This isn’t all that important to me, but it is to some. Highly personal obviously (Genelec speakers are ugly, no they’re beautiful, etc). I will admit that font choices kind of trigger me, but I doubt it would keep me from getting a product that was great otherwise :)
  5. Performance - Does the thing do its primary job at a level that you wish for given whatever its price point is? ASR likes SOTA stuff, particularly if it is competitively priced, but SOTA may not always be what folks need or can afford.
  6. Reliability - Again, tolerance for products that stop working after some time depends on the individual; the cost of the product (which can drive expectations) may be a factor, but most of us want something that will continue to perform to our needs over the timeframe we think we will own it.
  7. Service - if there is a problem with the product, will the company do a ‘good’ job of resolving it to your satisfaction? Personally, I don’t like ‘good service’ to be a crutch for poor reliability - it may take the sting out of a product that breaks, but I’d rather the product to be reliable in the first place.
  8. Cost/Value - this one is obvious, but again highly personal
If we take #1 as a given, we can reasonably evaluate 2, 3 (subjective reviews), 4 and 8. #5 can be dicey if a reliable set of measurements has not been published, but manufacturer specs can be at least a little helpful. The problem we have is that, if #6 is really important to us, it is near impossible to get actual reliability data on this sort of gear. We end up looking at anecdotal reports of failures and trying to piece together whether they amount to a trend that should make us fretful about our own individual piece of gear. Similarly, #7 is hard to objectively evaluate, but over time companies build up reputations that may not predict how they'll act in each and every case, but will give us some directional read.

So, we end up putting a varied bunch of information into our personal weighing system, turning the crank and getting a ‘feel’ for whether the product is over our own threshold of ‘good’.
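If you wanted to make that crank-turning explicit, a toy version might look like the sketch below. The factors, weights, scores and cutoff are placeholders; everyone's will differ.

```python
# Toy weighted-scoring model for "is this a good product for me?".
# Weights, scores and threshold are personal placeholders, not a standard.
weights = {   # must sum to 1.0; reflects personal priorities
    "features": 0.25, "usability": 0.15, "appearance": 0.05,
    "performance": 0.25, "reliability": 0.15, "service": 0.05, "value": 0.10,
}
scores = {    # 0-10 ratings for a hypothetical DAC
    "features": 8, "usability": 7, "appearance": 5,
    "performance": 9, "reliability": 6, "service": 7, "value": 8,
}
THRESHOLD = 7.0   # personal cutoff between 'good' and 'not good'

overall = sum(weights[f] * scores[f] for f in weights)
print(f"overall {overall:.1f} -> {'good' if overall >= THRESHOLD else 'not good'}")
# -> overall 7.6 -> good
```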

TLDR: To answer Rick’s original question then - I would say that a ‘good’ company is one that consistently produces products that a wide swath of buyers would rate as ‘good’ for them after those products have gone through all those personal algorithms. Sadly, ‘good’ is a lot more subjective than we’d like to think.
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,008
Likes
5,604
Location
San Francisco
So, I can speak to the brand risk element discussed earlier.

There are standard(ish) metrics for brands. Generally, what's measured is:

  • Aided awareness: What percentage of people say they've heard of your brand when it's asked about
  • Unaided awareness: What percentage of people mention your brand when asked to name brands in your category
  • Favorability: How well people rate your brand in general, usually relative to direct competition
  • Familiarity: What do people know about your brand, i.e. do they know what you do / make?
  • Associations: How strongly do people relate your brand to the ideas you want associated with your brand? e.g. Ryanair association = "cheap".

These are properly measured by having a few hundred to a few thousand people take a survey. This is pretty expensive (just did one at work and it cost enough to buy a house) so not every firm does it, and if they do, it's typically only once or twice a year.
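As a toy example of how the awareness numbers fall out of survey responses (the brands, respondents and answers below are invented):

```python
# Toy aided vs. unaided awareness from survey responses (all data invented).
unaided = [                 # "Name some audio brands" (open-ended)
    ["Sony", "Yamaha"], ["Sony"], ["Genelec", "Sony"], [],
]
aided = [True, True, False, True]   # "Have you heard of BrandX?" (yes/no)

unaided_pct = 100 * sum("Sony" in r for r in unaided) / len(unaided)
aided_pct = 100 * sum(aided) / len(aided)

print(f"Sony unaided awareness: {unaided_pct:.0f}%")   # 75%
print(f"BrandX aided awareness: {aided_pct:.0f}%")     # 75%
```

A real tracker is essentially this at much larger scale, plus sample weighting, which is part of why it gets expensive.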

Brand risk is hard to quantify while doing scope or specs for a product, because for ONE launch, typically only associations and maybe favorability will be at risk. If your main brand association is "cheap" and the product is costly, then that perception would be at risk.

However, I'm not totally surprised that risk to brand associations / reputation isn't always accounted for, since it would tend to double-count the risk of certain things. Performance specifications tend to implicitly account for brand associations - if good sound is part of your brand, it is PROBABLY also part of the product spec and project plan. So if you add another brand risk metric around sound quality, you're doubling the sensitivity to that factor, perhaps wrongly.
 
D

Deleted member 60987

Guest
Rick Sykora said: Am often asked whether I think a company is making good products... Am not aware of a standardized assessment of brand reputation, but would be interested if anyone has one.
Well, unless they are very high end or boutique, companies offer the same kind of thing at drastically varying prices: at least a 10x range, in some cases 100x. Koss makes $10 headphones and $1,000 headphones, for example. The real question is the price and quality of a particular product. If it's $400, it had better deliver.
 