tuga
Major Contributor
Almost everyone I know has an Apple phone with Dolby Atmos-capable Apple earphones or headphones with head tracking.
Ha, but do they know it?
I don’t know a single person in my life, other than me, who has a stereo system.
Statistics back up my statements too. Headphones are not stereo either, no matter how delusional you are.
Good to know.
Yes it did. It did have a preferred position, which was outside the room, unplugged.
Ha, but do they know it?
What do you mean?
…
As far as how the speaker couples into the room modes: the optimal position of the loudspeaker will depend on the acoustical properties and dimensions of the room and on the locations of the listener(s). To say that the optimal position is different for every speaker is nonsense, especially for conventional speakers, where below the room transition frequency the speaker is close to a monopole and will couple into the room modes in much the same way.
As an aside, if there were different optimal locations for different speakers, I’ve never seen this specified in any loudspeaker setup manual. And wouldn’t they be different for different rooms? And there would be no need for room correction and calibration.
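The point that the room, not the speaker, sets the mode frequencies can be made concrete with the standard rigid-wall rectangular-room formula. This is an illustrative sketch only (idealized room; the speed of sound and the example dimensions are my assumptions, not figures from the discussion above):

```python
from itertools import product
from math import sqrt

C = 343.0  # speed of sound in air, m/s (approx., at about 20 degrees C)

def room_modes(lx, ly, lz, max_order=2):
    """Resonant frequencies (Hz) of an ideal rectangular room.

    Uses the rigid-wall formula
    f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2).
    Real rooms deviate, but this shows why the mode frequencies depend on
    the room dimensions and not on which loudspeaker is placed in it.
    """
    modes = []
    for nx, ny, nz in product(range(max_order + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue  # skip the trivial (0,0,0) "mode"
        f = (C / 2) * sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        modes.append(((nx, ny, nz), round(f, 1)))
    return sorted(modes, key=lambda m: m[1])

# Example: a hypothetical 6 m x 4 m x 2.5 m room; the lowest few modes
# all fall well below a typical ~200-300 Hz transition frequency.
for (nx, ny, nz), f in room_modes(6.0, 4.0, 2.5)[:5]:
    print(f"({nx},{ny},{nz}): {f} Hz")
```

Since any near-monopole source couples into these same fixed frequencies, the optimal placement problem is about where the source and listener sit relative to the mode pressure distribution, not about the brand of speaker.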
…
The "Olive score" is based on frequency response. Frequency response is the single most important predictor of sound quality. Distortion is only important if its well above audible threshold. Compression is essentially distortion.
In most of our listening tests distortion was not a factor with the home speakers we tested in our listening room at the average level ( 80 dB-weighted SPL). Even with the small 5-6 inch bookshelf speakers, distortion was not a factor.
Your worries and woes have all been thoroughly answered in the thread if you’re interested.
To what extent is the score based on frequency response? Does it take other factors into account? (I've been trying to find the paper; could someone please link it.)
If the score is heavily weighted in favour of frequency response, maybe this would explain the OP saying they can't trust the score regarding preference. They give the example of the following speakers and their scores:
The KEF Reference 2C Meta: 5.6
Sonos Roam: 5.5
JBL M2: 5.1
In reality, these are very different-sounding speakers and would not be rated anywhere near as closely as the scores suggest. A speaker with the ability to play dynamic peaks at reasonably loud volumes and with bass response to 30 Hz (JBL M2) is going to be heavily preferred (sonically, if not aesthetically) to a Sonos Roam, yet the score suggests they are practically on par with each other.
I purchased Genelec 8030C speakers based on the high score they received in ASR testing, yet I found them lacking compared to the Behringer B2031A speakers I already had. The B2031As produce most of the usable frequency range, down to 35 Hz or so, and have a greater sense of control at anything above moderate levels than the 8030C, yet I doubt they would score anywhere near as highly.
However, I imagine if you placed people in front of these two speakers (sans subs) and asked them to choose which they prefer, perhaps 7 out of 10 would prefer the Behringer, because the missing bass frequencies are evident to the extent where any improvements offered by the Genelec (flatter FR) are not as significant.
Is there a component of the testing that penalises speakers for not reproducing frequencies down to, say, 40 Hz?
If not, this would explain why small but SPL- and bass-extension-limited bookshelf-type speakers with a flat FR (namely Genelec) score highly, while larger speakers that are more capable in both SPL and bass response score rather more poorly.
In testing, I can believe that a flat FR is preferable, all else being equal, but when SPL and bass response capabilities of a speaker are rather limited (most bookshelf speakers), then I imagine listeners will accept more variation in the FR (to an extent) in exchange for bass extension and SPL.
Is this something the score can account for? Is it something it should account for, in your opinion?
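On the question of whether the score weighs bass extension at all: the regression model commonly cited from Olive's 2004 AES paper does include a low-frequency extension term (LFX), alongside smoothness and narrow-band deviation terms, so limited extension is penalised, though only logarithmically. A minimal sketch, using the coefficients as commonly published and entirely hypothetical input values (not measurements of any speaker named above):

```python
from math import log10

def olive_preference(nbd_on, nbd_pir, lfx_hz, sm_pir):
    """Predicted preference rating per the regression model commonly
    cited from Sean Olive's 2004 AES paper. Illustration only, not the
    official tool.

    nbd_on  -- narrow-band deviation of the on-axis response (dB)
    nbd_pir -- narrow-band deviation of the predicted in-room response (dB)
    lfx_hz  -- low-frequency extension (-6 dB point, Hz); enters as log10
    sm_pir  -- smoothness (r^2) of the predicted in-room response
    """
    return (12.69
            - 2.49 * nbd_on
            - 2.99 * nbd_pir
            - 4.31 * log10(lfx_hz)
            + 2.32 * sm_pir)

# Two hypothetical speakers, identical in every response metric except
# bass extension: only the LFX term differs.
full_range = olive_preference(0.3, 0.3, 30.0, 0.9)  # extends to 30 Hz
bookshelf = olive_preference(0.3, 0.3, 80.0, 0.9)   # rolls off at 80 Hz
print(round(full_range - bookshelf, 2))  # the LFX advantage, ~1.8 points
```

Because the extension enters through log10, going from 80 Hz to 30 Hz buys under two points, which may be why bass-limited but otherwise smooth speakers can land numerically close to far more capable ones; playback SPL capability does not appear in the model at all.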
Thank you for your reply, but you have failed to answer my question. One wonders why you will avoid it...
Loudspeaker technology and the science behind it is pretty mature. I am quoting papers that are 37 years old and still valid today. The science has been peer-reviewed and the results replicated in other labs at universities and other loudspeaker manufacturers. It’s no longer a controversial or disputed topic within the industry. If you think it’s controversial, you are not well informed.
The loudspeaker industry has generally accepted the science of what makes a loudspeaker sound good; there are new standards that define what is good and how to measure it, and this is widely practiced within the industry.
If you go to an ASA or AES conference, there are almost no papers on what makes a loudspeaker sound good. Most of the attention goes to making loudspeakers sound good in smaller form factors, making them cheaper, making them play louder, compensating for room modes, or using beam-steering arrays to deal with room acoustics or to simulate virtual speakers and spaces.
AR, VR, mixed reality, and immersive audio are the focus for applications in the home, the car, and mobile.
Can you link the post numbers, please? Apologies for repeating what has been said already; you know how it goes with these long threads.
Your worries and woes have all been thoroughly answered in the thread if you’re interested.
You are obviously not one of the 100+ million Apple Music users. I doubt many audiophiles have heard it, but Apple Music users have, although they heard it called Spatial Audio.
I wonder how many non-audiophile people have heard of Dolby Atmos. Or cared.
You can see my opinion on that particular comparison here.
Can you link the post numbers, please? Apologies for repeating what has been said already; you know how it goes with these long threads.
Thank you a lot!
In these tests Robert and Sam heard the same four loudspeakers that have been evaluated previously by hundreds of untrained listeners, including young, old, American, Asian, and European listeners, whose preferences and performances were compared to those of our panel of trained listeners. From these tests, we have found evidence that most listeners prefer the most accurate, neutral loudspeaker regardless of age, culture or listening experience.
from this blog by Olive.
Behind Harman's Testing Lab (seanolive.blogspot.com)
Preference means that, when given four speakers to choose from, they rank them from what they consider best sounding to worst sounding. Seems simple enough. And they've found MOST listeners prefer accurate and neutral.
While being designed for corner placement or flush wall mounting is rare, there is still a difference in room gain depending on distance to the walls. One could of course equalize the speakers to the same low-frequency level to remove this difference, but smaller woofers in particular would show increased distortion and affect the test, and you would no longer be testing the speaker as sold. If, on the other hand, you listen with some speakers closer to the front wall and further from the listener, you increase the room influence and are no longer comparing apples to apples.
Not many home speakers are designed for corner or wall placement. If they are, that’s how we would test them. This might include soundbars and in-wall speakers.
We built a listening room with an in-wall speaker mover for that purpose.
Powered Pro monitors often include switches that provide different equalizations for different boundary conditions. We would test those using the appropriate setting for the setup.
I can count on one hand the number of dipole and cardioid speakers I’ve tested in my career.
Oh, for sure. I expect the Sonos Roam to be at least 2 points lower, and the KEF at least one point lower.
OK, but are you arguing that certain types of testing are invalid? Do you think the numbers produced would be significantly different if the testing were made uniform for those three speakers?
Either way, I'm not sure that post alone comprehensively answers my "worries and woes", as you put it (nice turn of phrase, btw).
Ok, but say we compare the Neumann KH80 DSP to the JBL M2 (both graded from spin data, correct?). Do we really expect a speaker that produces the full audio spectrum, sans sub frequencies (M2), to be topped in preference by a speaker that is incomplete, in that it does not produce the full frequency range and will only produce sound at moderate levels at best? It seems unlikely.
While I don't believe this influences the researchers to a major degree, it does say something about the challenge of such studies.