
KEF versus Revel. Anyone here owned & compared them in person?

I just swapped out the Salon2s for a pair of new KEF Blade2 Metas. After much comparison, the Blades are a better fit in my room. Both are fabulous speakers, but I do like the Blades a bit more. In my room they image better and have a better soundstage. In fairness, I supplement the Blades with a pair of KC92 subs and plug the ports on the Blades. After doing an ungodly number of measurements using REW, I am close to being satisfied with the blended response. I sold the Salon2s, and there are some smoking good deals on used Salon2s these days, so if money is a consideration the Salon2s are the way to go IF you have a lot of amplifier and a large room.
 
Gene from Audioholics compared the bigger Perlisten S7T with the Revel F328Be. He preferred the Perlisten: its THX certification was useful, its build quality was higher, and its design was more modern and cutting-edge overall, though he did not like the S7T's bottom port. They've measured the S7T, which was $22K a pair in the video. The latest F328Be price is ~$19.4K. :cool:

 
A lot of people are chiming in on the flagship models, but I compared the M105 to the LS50 years ago and have a writeup on AVS Forum for those interested. Both brands are engineering focused and generally sound great. When listening to one pair and swapping them out with the other, it was so hard to choose a winner that I literally had to compare them Harman style, one of each in mono with instant switching, to reliably tell the differences; that's how close they were. I ended up with the LS50 mostly because I love the point-source sound, but the Revels were still great.
 
[Image: tweeter_height.jpg]


What is Revel's theory of optimal tweeter height?

 
Further down the pecking order, I have owned both KEF LS50 (pre-Meta) and Revel Concerta2 M16. Both were typically sold here in UK at around £800-999/pr. I bought both pairs used (but nearly new) to try two of the most recommended speakers for myself. Ran them for three months each pair. Both since sold.

In my room, the Revel had an unhelpful bass hump, which gave a big first impression but needed cutting for a balanced overall sound. They also had a slightly hard upper mid, making (for example) soprano voices sound glassy. By contrast the KEFs worked beautifully and I could have lived with them happily. Used in a typical British living room (smaller than US!) approx 20’ x 11’ x 8’6” with carpets and curtains/drapes.

My comparison is with long-term owned ATC SCM40, three-way floorstanders, and Sonus faber Guarneri Memento, two-way standmounts.
 

What is Revel's theory of optimal tweeter height?
It is funny you should show that difference. The Salon2 tweeter is another 2-3” higher than the 328Be tweeter. I think that is the reason my Salon2s didn’t work as well in my room (18x16x8). The Blade2 Metas' tweeter height is similar to the other speaker you show. I think the Salons really just need a much larger room, and I have little doubt that if you had a larger room, the Salons would work very well.
 
I haven't listened to both myself, but this AI summary is a good example of what's wrong with using LLMs. KEF strives for a neutral response arguably just as much as Revel and the Uni-Q dispersion is typically on the narrow side, not wide!

As usual, it's something that sounds right, but with significant flaws and misleading bits. It also makes invalid comparisons by implication, like trying to contrast matching the Harman curve with dispersion characteristics. What does it mean by that? Nobody knows, least of all the AI.

Not just in this context, but I really think LLMs have laid bare that people barely read what's in front of them, and barely understand what they do read. The LLMs' output always sounds solid, but it often says very little, and many times it makes no sense at all if you think about it; yet people keep cranking this stuff out as if it was useful.
Looks like ChatGPT won't be replacing Amir, or any other of our true expert reviewers, anytime soon.
 
Looks like ChatGPT won't be replacing Amir, or any other of our true expert reviewers, anytime soon.
To hear Altman tell it, it's going to replace everyone any day now... "PhD Level" he says. I'm not seeing it, frankly. At least he admits it's a bubble. But if this were true we'd be hearing lots of stories about radically new business successes happening all over the place. But what I actually hear is engineers saying LLMs are a handy autocomplete but not exactly perfect.

Maybe... just maybe, the ability to imperfectly regurgitate information is not the limiting factor on productivity in today's market.

Other than that, I hear people are using it to avoid having to know stuff, which Google was already pretty good for...

/rant
 
To hear Altman tell it, it's going to replace everyone any day now... "PhD Level" he says. I'm not seeing it, frankly. At least he admits it's a bubble. But if this were true we'd be hearing lots of stories about radically new business successes happening all over the place. But what I actually hear is engineers saying LLMs are a handy autocomplete but not exactly perfect.

Maybe... just maybe, the ability to imperfectly regurgitate information is not the limiting factor on productivity in today's market.

Other than that, I hear people are using it to avoid having to know stuff, which Google was already pretty good for...

/rant
Altman is primarily a salesman; some even consider him a grifter. If you look at the financials of OpenAI (or any other AI company or division), it's very obvious that the only way to not go bankrupt within months is to keep billions of venture capital flowing in, and that requires continuously promising the moon.
 
...Maybe... just maybe, the ability to imperfectly regurgitate information is not the limiting factor on productivity in today's market.

Other than that, I hear people are using it to avoid having to know stuff, which Google was already pretty good for...
Imperfectly regurgitating information is what people are using AI for on forums like ASR. This is not how AI is being used to increase productivity in large global companies.
 
This is not how AI is being used to increase productivity in large global companies.
I agree it's not increasing productivity, but to some extent that is how it's being used, unfortunately I speak from direct experience. And, wouldn't it be fair to say that's the only way it's used? That's how LLMs work, after all.
 
And, wouldn't it be fair to say that's the only way it's used? That's how LLMs work, after all.
The use case where it seems to perform best is translation of language. That includes translation from human language into programming languages. There is little objective evidence that it is of benefit to accomplished software engineers but very useful for the many jobs which require some limited programming by people who are not IT specialists.
 
I agree it's not increasing productivity, but to some extent that is how it's being used...
Well, that is how it is being used on forums like ASR.

...And, wouldn't it be fair to say that's the only way it's used?
No. It recognizes language and can analyze effectively how that language is used...it does not have to then regurgitate a summary.
One example...AI can analyze thousands of invoices or POs, even in PDF format, and either summarize them or check them for errors or inaccuracies (based on its language-learning foundation). It can do what would take a human being dozens of hours in just a few minutes - and have a lower error rate. Because it recognizes language patterns, it can do things that simple search functions could not do...i.e., before AI this had to be done by human beings. It can do similar things with very large Excel sheets, databases, etc. Consider just this one example and how much time it could save a company like Wal-Mart.
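To make the "check them for errors" idea concrete, here is a toy sketch of the kind of consistency check described, written in plain Python arithmetic rather than with an LLM. All field names (`items`, `qty`, `unit_price`, `line_total`, `total`) are hypothetical; a real pipeline would first have to extract these fields from the PDFs, which is the part where language models are claimed to help.

```python
# Toy invoice consistency check: verify that each line item's stated total
# matches qty * unit_price, and that the line items sum to the invoice total.
# Field names are made up for illustration.

def check_invoice(invoice: dict, tolerance: float = 0.01) -> list[str]:
    """Return human-readable discrepancies found in one invoice dict."""
    problems = []
    computed_sum = 0.0
    for item in invoice["items"]:
        line = item["qty"] * item["unit_price"]
        computed_sum += line
        stated = item.get("line_total")
        if stated is not None and abs(stated - line) > tolerance:
            problems.append(
                f"line '{item['desc']}': stated {stated} != computed {line:.2f}"
            )
    if abs(computed_sum - invoice["total"]) > tolerance:
        problems.append(
            f"invoice total {invoice['total']} != sum of lines {computed_sum:.2f}"
        )
    return problems

invoice = {
    "total": 150.00,
    "items": [
        {"desc": "widget", "qty": 10, "unit_price": 10.00, "line_total": 100.00},
        {"desc": "gadget", "qty": 2, "unit_price": 20.00, "line_total": 45.00},  # deliberate error
    ],
}
# Flags both the bad gadget line and the mismatched invoice total.
print(check_invoice(invoice))
```

The arithmetic here is trivially reliable; the hard, error-prone step in practice is turning messy PDFs into structured records like `invoice` above, which is exactly where the debate about LLM error rates in this thread applies.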
 
...There is little objective evidence that it is of benefit to accomplished software engineers but very useful for the many jobs which require some limited programming by people who are not IT specialists.
My son has a computer science degree and works with App development and testing. He said AI can create code in 30 minutes that would take him days. The human's responsibility now is to quality/stress test the code initially created with AI's assistance.
 
Further down the pecking order, I have owned both KEF LS50 (pre-Meta) and Revel Concerta2 M16. Both were typically sold here in UK at around £800-999/pr. I bought both pairs used (but nearly new) to try two of the most recommended speakers for myself. Ran them for three months each pair. Both since sold.

In my room, the Revel had an unhelpful bass hump, which gave a big first impression but needed cutting for a balanced overall sound. They also had a slightly hard upper mid, making (for example) soprano voices sound glassy. By contrast the KEFs worked beautifully and I could have lived with them happily. Used in a typical British living room (smaller than US!) approx 20’ x 11’ x 8’6” with carpets and curtains/drapes.

My comparison is with long-term owned ATC SCM40, three-way floorstanders, and Sonus faber Guarneri Memento, two-way standmounts.

My experience is somewhere in the middle: KEF R3 and R3 Meta vs. Revel M16 and Revel F36 respectively. They're all set up in individual systems, but I have moved pairs together in order to do a rough comparison. No double blind AxB, no REW measurements, just your humble narrator and his ancient ears.

I like them all, they all have their strengths and weaknesses. I'm not selling any of them, so that should tell you something.

The M16s are my desktop daily drivers. Very clean nearfield listening with excellent imaging.

I feel like the F36 have the best imaging, but they also sit in the same space as my home theater system so they have the largest listening triangle, maybe 16 ft on a side.

I've never experienced any sibilance in either pair of R3s. Both image very well, maybe the Metas are a tad bit broader, with the image clearly outside the speakers. They're in a tighter location where the triangle is 10 ft on a side and more isosceles than equilateral. (speakers are ~9 ft apart)

The regular R3s are in a room with huge volume (22 ft ceiling). Another isosceles with longest sides about 12 ft from the speakers and the speakers are about 10 ft apart. I love this setup.

All systems have a sub (2.1), some are EQ'd via miniDSP, one I'm using the WiiM EQ. They all sound different, but great to me. I use each for a different type of listening vibe.
 
My son has a computer science degree and works with App development and testing. He said AI can create code in 30 minutes that would take him days. The human's responsibility now is to quality/stress test the code initially created with AI's assistance.
That's not wrong, but it doesn't contradict the evidence at all. Testing, debugging, and quality-controlling code written by someone else (the LLM in this case) is the most time-consuming, frustrating, and error-prone activity imaginable in computer science. Not sure about your son, but if I tell our software engineers that in the future they don't have to do the initial coding anymore but will instead spend all their time testing and debugging someone else's code, they will jump out of the window.
 
It can do what would take a human being dozens of hours in just a few minutes - and have a lower error rate.
I haven't seen LLMs do this. The error rate is insidiously high in my experience.

Because it recognizes language patterns, it can do things that simple search functions could not do...i.e., before AI this had to be done by human beings. It can do similar things with very large Excel sheets, databases, etc. Consider just this one example and how much time it could save a company like Wal-Mart.
I'm not saying these things are useless - I use them myself to generate code to perform larger analyses - but I haven't seen them actually pull anything useful out of large bodies of text or data like that. I expect to try soon, though. I understand they've gotten better, but a couple years ago they were (way) worse than useless for that kind of thing.

But still, let's keep in mind that the state of the art LLMs can't even count. What kind of serious analysis can you do with no actual sense of proportionality / scale of trends in the data?

Your response reminds me more of the marketing of LLMs than the reality. Yes, they could be a huge boon for large orgs like Wal-mart if they really could do that kind of large-scale analysis for real, but what I've personally seen is a lot of people treating "looks like a proper analysis" as "proper analysis" and I'm getting really concerned that corporate America could never actually tell the difference to begin with.
 
That's not wrong, but it doesn't contradict the evidence at all. Testing, debugging, and quality-controlling code written by someone else (the LLM in this case) is the most time-consuming, frustrating, and error-prone activity imaginable in computer science. Not sure about your son, but if I tell our software engineers that in the future they don't have to do the initial coding anymore but will instead spend all their time testing and debugging someone else's code, they will jump out of the window.
It reminds me of the old shop signs / memes, but we need to add a line item "LLM designs everything, I fix it" at the end...

 
what I've personally seen is a lot of people treating "looks like a proper analysis" as "proper analysis" and I'm getting really concerned that corporate America could never actually tell the difference to begin with.
I think you should stay concerned. Particularly in disciplines where poor analysis gets spun away by adroit organizational politics. I’ve found some of the most effective leaders in business have by far a) the worst analysis skills and b) a heaping helping of confirmation bias.
 
I think you should stay concerned. Particularly in disciplines where poor analysis gets spun away by adroit organizational politics. I’ve found some of the most effective leaders in business have by far a) the worst analysis skills and b) a heaping helping of confirmation bias.
Unfortunately my experience does not contradict this. I just don't quite get it... how can someone just not care that they're going to be wrong a significant percentage of the time when there's money on the line? Or like, at all?

Would you let ChatGPT play Blackjack or poker for you? No? Then why let it write analysis for you? It's the same thing with more steps...
 