
PSB Alpha P5 Speaker Review

BYRTT

Addicted to Fun and Learning
Forum Donor
Joined
Nov 2, 2018
Messages
956
Likes
2,454
Location
Denmark (Jutland)
Noob question here. Could anyone explain a bit about "directivity"?
How can we tell there are errors in the graph? What is it, and how does it affect the sound?
For your interest, it should help to study the animation below with the EQ exercise for the P5. The resonances in the directivity index, plus its non-smooth transition at 2500 Hz where the tweeter takes over, are the culprits that make it impossible to EQ smooth.
This speaker looks similarly bad to the Sony SSCS5, which apparently was transformed into another world after EQ.

Amir, will you please sit tight on this speaker for a day or two? I'm going to solicit some EQ for you to try.

@QMuse @flipflop @Maiky76 @pierre pretty please

Not sure EQ will help with Amir's depressing day; the P5 is not easy to EQ, and he knew an EQ job would be hard given the non-ideal directivity index. Add Amir's near-field graph, which reveals heavy port leakage, and the ugly distortion graphs, and you can understand why he gave up on the EQ task. In the meantime, until other users post their EQ suggestions, the animation below of four EQ targets can be studied. The four targets are: 1) smooth on-axis, 2) smooth CTA-2034 listening window (horizontal -30° to +30°, vertical -10° to +10°), 3) smooth PIR (equivalent to EQing a listening window spanning horizontal -150° to +150°, vertical -150° to +150°), and 4) smooth power response.

[Animation: Spacevector_1x1x1x1x1x_4000mS.gif]
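For readers who want to play with the same idea, below is a minimal Python sketch (an assumed workflow, not BYRTT's actual tool) of how the averages behind those four targets are commonly derived from spinorama data: the listening window and power response are energy averages of the off-axis curves, and the PIR is a weighted mix of them. The angle sets, the simplified early-reflections average, and the 12/44/44 weighting follow the usual CTA-2034 reading but are stated here as assumptions.

```python
import numpy as np

# Sketch only: derive the averages behind the four EQ targets from a dict of
# measured magnitude responses in dB, keyed by (plane, angle), e.g.
# resp[("hor", -30)]. Angle sets, the simplified early-reflections average,
# and the PIR weights are assumptions based on the usual CTA-2034 reading.

def db_average(curves_db):
    """Energy-average a list of dB curves (average of squared pressures)."""
    return 10 * np.log10(np.mean([10 ** (c / 10) for c in curves_db], axis=0))

def listening_window(resp):
    """On-axis plus horizontal +/-10, +/-20, +/-30 and vertical +/-10 degrees."""
    keys = [("hor", a) for a in (-30, -20, -10, 0, 10, 20, 30)] \
         + [("ver", a) for a in (-10, 10)]
    return db_average([resp[k] for k in keys])

def sound_power(resp):
    """Simplified: unweighted energy average over every measured angle
    (the standard additionally weights each angle by its solid angle)."""
    return db_average(list(resp.values()))

def early_reflections(resp):
    """Illustrative stand-in using wide horizontal angles only."""
    return db_average([resp[("hor", a)] for a in (-60, -50, -40, 40, 50, 60)])

def predicted_in_room(resp):
    """PIR as an energy-weighted mix of the three curves above (12/44/44)."""
    lw, er, sp = listening_window(resp), early_reflections(resp), sound_power(resp)
    return 10 * np.log10(0.12 * 10 ** (lw / 10)
                         + 0.44 * 10 ** (er / 10)
                         + 0.44 * 10 ** (sp / 10))
```

Each of the four targets in the animation then amounts to choosing EQ filters so that one of these curves (or the raw on-axis curve) comes out flat, and the directivity problems described above are exactly why flattening one of them does not flatten the others.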
 

BYRTT

Addicted to Fun and Learning
Forum Donor
Joined
Nov 2, 2018
Messages
956
Likes
2,454
Location
Denmark (Jutland)
.....In this review, I went way beyond that due to a comment made in the last review: that the beamwidth plot is very revealing of directivity errors whereas the contour map is not. I realized part of the problem there was that I was compressing the vertical axis a lot, causing it to lose detail in that axis. So I enlarged it to an almost square aspect ratio. I am not sure it helped a lot, but that is the reason.

Open to feedback on this.

Feedback time :).. It looks like the reason the beamwidth map is so revealing of directivity is simply that it is based on the normalize setting while the contour map is not. So I wonder if the contour map has a setting to enable normalization, and it would also help the eye if it had a feature to enable contour lines. See the example in the animation below for the P5: once the contour map has normalization and contour lines enabled, it presents the same revealing directivity as your beamwidth map, except that the contour map below uses 10° steps while yours is probably an impressive 1° o_O..
[Animation: Amir_polar_settings_1x1x2x_1500mS.gif]
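For anyone curious what that normalize setting actually does to the data, here is a minimal sketch (an assumed recipe, not the software used for the animation): subtract the on-axis curve from every off-axis curve so the map shows dB relative to the axis, then draw contour lines on top to help the eye. The 10° angle grid and the placeholder data are purely illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

freqs = np.geomspace(200, 20000, 300)     # frequency axis, Hz
angles = np.arange(-90, 100, 10)          # -90..+90 degrees in 10-degree steps
rng = np.random.default_rng(0)
spl = rng.normal(85.0, 3.0, (len(angles), len(freqs)))   # placeholder SPL data, dB

on_axis = spl[angles == 0][0]
normalized = spl - on_axis                # dB relative to on-axis

levels = np.arange(-30, 3, 3)
plt.contourf(freqs, angles, normalized, levels=levels, cmap="viridis", extend="both")
plt.contour(freqs, angles, normalized, levels=levels,
            colors="k", linewidths=0.3)   # contour lines help the eye
plt.xscale("log")
plt.xlabel("Frequency (Hz)")
plt.ylabel("Angle (degrees)")
plt.colorbar(label="dB re. on-axis")
plt.show()
```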
 

TimVG

Major Contributor
Forum Donor
Joined
Sep 16, 2019
Messages
1,200
Likes
2,647
Feedback time :).. It looks like the reason the beamwidth map is so revealing of directivity is simply that it is based on the normalize setting while the contour map is not, so I wonder if the contour map has a setting to enable normalization

Perhaps normalize on the listening window then, as normalizing on the on-axis can give a wrong impression of speakers with certain small acoustic anomalies on the 0° axis. What do you think?
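A small follow-up to the sketch above, along the lines of this suggestion: normalize to an averaged listening window rather than the raw 0° curve, so a narrow on-axis anomaly doesn't skew the whole map. This reuses the spl and angles arrays from the previous snippet and, as a simplification, builds the window from horizontal angles only.

```python
# Normalize to a listening-window average instead of the raw 0-degree curve.
# Horizontal angles only here; the CTA-2034 window also includes vertical +/-10.
lw_mask = np.isin(angles, [-30, -20, -10, 0, 10, 20, 30])
lw_avg = 10 * np.log10(np.mean(10 ** (spl[lw_mask] / 10), axis=0))
normalized_lw = spl - lw_avg              # dB relative to the listening window
```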
 

Midwest Blade

Senior Member
Joined
May 8, 2019
Messages
405
Likes
542
Pop! Another bubble burst!

Have always liked PSBs; my brother runs them in a 5.1 home theater system and the sound was always great. I have listened to the top-end bookshelf and one of their towers at a dealer here in Chicago and was generally impressed. Oh well, second thoughts now.
 

SMc

Active Member
Joined
Dec 21, 2018
Messages
273
Likes
225
The upside-down tweeter designs are normally meant for listening/measuring on the midwoofer axis; Mission started this concept years ago. I'm not sure if there's some complexity in making near-field measurements, or whether this "effect" is only evident in the mid field.
NHT used this configuration saying their customers were placing speakers too high.
 

starfly

Senior Member
Forum Donor
Joined
Jun 6, 2019
Messages
353
Likes
289
This is how bad their $7k standmounters are (Persona B). I still don't understand why anyone buys Paradigm. Listening window graph below. Full set at https://www.soundstagenetwork.com/i...&catid=77:loudspeaker-measurements&Itemid=153

When I was in the market for speakers last year, I demoed the Paradigm Premier series, their mid-level range. They just sounded kinda weird to me, with the mids sounding kinda floaty and disconnected. I ended up going for the Revel F206 instead, and zero regrets with that decision.
 

jazzendapus

Member
Joined
Apr 25, 2019
Messages
71
Likes
150
A number of you have advocated posting only measurements in these reviews. This is for you: what does this say about the sound you get in your room?
Speaking only on my own behalf - that they will most likely sound like crap. What kind of flavor of crap exactly? Personally I don't care.
I guess those who already own these speakers will be curious about possible ways to de-crapify them, like using an accurate EQ based on your measurements or dealing with problematic reflections in some way. But that doesn't require any subjective impressions either.

Thanks for the review.
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
7,007
Likes
6,874
Location
UK
Wasn't there another Paul Barton speaker that sucked, one that was reviewed recently?

P.S. I like the fact that YouTube links to the music from the listening tests are included in the reviews, especially when specific failures to play parts of them properly are described, because we can then listen to the track on our headphones, for example, and see exactly which parts the speaker couldn't render properly, e.g. the bass in this case.
 
Last edited:

b1daly

Active Member
Forum Donor
Joined
Dec 15, 2018
Messages
210
Likes
358
I almost wrote that in the review. At this rate, I won't have a single industry friend left! Already lost a bunch. :(
That’s too bad. I’m a little surprised. I wonder if folks are taking these reviews personally or if it’s just a reflection that there are business consequences that make people/manufacturers need to be wary?

Of course, we’re all human and receiving criticism can hurt.

But honest criticism is so valuable. This reminds me of something I have pondered for years.

A big problem amateur musicians have is that people are reticent to be critical. So after a show, people either say, “sounded great,” or they say nothing at all.

This deprives musicians of objective criticism (subjective opinions are an objective fact: if someone thought you sucked, that is a real thing), and it’s very hard to improve beyond a point without it.

The root of the problem is that there is no upside to sharing a negative opinion about a performance for the person who holds that opinion. As a performer you have to seek out honest, frank criticism. If you are not at the level where you are getting actual reviews, you might need to pay someone to give you an honest appraisal.

The situation is obviously not parallel but it made me think generally about how valuable honest criticism is. The kind of feedback ASR is providing, which represents a rather large investment by Amir, could be used to improve and gain competitive advantage.

There is an issue with the approach taken by ASR that I could see a manufacturer feeling justifiably salty about.

The ‘rankings’ here for gear (non speakers) reflect attributes that are not audible. It’s fair to consider that such parameters should be largely irrelevant to an owner’s enjoyment of the equipment.

Someone made a comment a while back that investing resources into performance aspects of gear that have no objective benefit to the owner is a case of “over-engineering”. Arguably the best engineering would allow the manufacturer to provide the best experience to the purchaser at the lowest cost. Or to put engineering and manufacturing resources towards aspects of the gear that do affect the users experience like features, aesthetics, reliability.

It is impossible to not be biased by reading a ‘critical’ review, even if the criticism is on criteria that are inaudible. Sometimes owners of gear that get bad reviews here express a sentiment that they are now disappointed. This really misses the point about an ‘objectivist’ approach to audio.

Finding out about the absolute measured performance of an amplifier is a fascinating subject, but drawing conclusions from measurements needs to be done thoughtfully.

Seeking the absolute best performance in an amp, performance for performance sake, could be thought of as a pursuit similar to those who want the fastest car, even though they will never drive at those speeds.

This is a poor analogy though, because objective performance of an automobile is perceptible by human senses. (Perhaps not at the highest levels).

ASR has the word ‘science’ in the name, and I think this captures the ethos of the site. But what Amir is doing is not really science. He is measuring things and collecting the data. To do science with the data would require crafting a thesis and finding out whether the data support the thesis or not.
 
Last edited:

infinitesymphony

Major Contributor
Joined
Nov 21, 2018
Messages
1,072
Likes
1,809
That’s too bad. I’m a little surprised. I wonder if folks are taking these reviews personally or if it’s just a reflection that there are business consequences that make people/manufacturers need to be wary?
Even small speaker manufacturers are shipping their speakers to test facilities that have Klippel and other measuring devices. These days there is no excuse for not being aware of the performance of your company's devices. If a manufacturer doesn't know or doesn't want others to know the performance of what they're releasing, it says something about the manufacturer's priorities (i.e. profit-driven rather than performance-driven).
 

b1daly

Active Member
Forum Donor
Joined
Dec 15, 2018
Messages
210
Likes
358
Even small speaker manufacturers are shipping their speakers to test facilities that have Klippel and other measuring devices. These days there is no excuse for not being aware of the performance of your company's devices. If a manufacturer doesn't know or doesn't want others to know the performance of what they're releasing, it says something about the manufacturer's priorities (i.e. profit-driven rather than performance-driven).
I think this is an unrealistic assessment. There are significant costs and complexity to bringing a product to market. The objective performance of a speaker is only one such concern.

The original Harman research was based on comparative listening tests. Just because speaker A ranked below speaker B doesn’t mean a purchaser will be unhappy with speaker A.

FWIW, I’m not convinced that spinorama comparisons are the be-all and end-all of speaker design criteria, simply because I have owned speakers that “measure well” that I hated.
 

LeftCoastTim

Senior Member
Forum Donor
Joined
Apr 15, 2019
Messages
375
Likes
758
Strange EQ applied to all music

One thing audiophiles taught me is that they LOVE LOVE LOVE random EQ and distortion applied to all music.

Regular people too, considering how many sound bars have "Sports / Movie / Classical / Rock" EQ modes.

People understand "color calibration" these days. The above also explains why "sound calibration" isn't a thing.
 

infinitesymphony

Major Contributor
Joined
Nov 21, 2018
Messages
1,072
Likes
1,809
There are significant costs and complexity to bringing a product to market. The objective performance of a speaker is only one such concern.
Why would a small speaker manufacturer be able to meet this bar but not a larger manufacturer? Kali Audio (est. 2018) are one example.
 

hardisj

Major Contributor
Reviewer
Joined
Jul 18, 2019
Messages
2,907
Likes
13,916
Location
North Alabama
I think this is an unrealistic assessment. There are significant costs and complexity to bringing a product to market. The objective performance of a speaker is only one such concern.

Last I recall, Warkwyn charges $500 to test a speaker using the NFS.

Assuming the price hasn't changed, that's cheap relative to the overall cost of bringing a speaker to market, and if nothing else it would provide tangible data for the audience, even if you plan not to change a thing based on the results.
 