
Genelec 8361A Review (Powered Monitor)

Rate this speaker:

  • 1. Poor (headless panther)

    Votes: 9 1.2%
  • 2. Not terrible (postman panther)

    Votes: 4 0.5%
  • 3. Fine (happy panther)

    Votes: 36 4.7%
  • 4. Great (golfing panther)

    Votes: 711 93.6%

  • Total voters
    760
I mean this with due respect, but it seems you do not understand the basics of the science at work here and the fundamental goal Dr. Toole was working toward. The goal was to find a true tool for identifying accurate monitors regardless of people's preferences.
No, it wasn't. The methodology was entirely based on people's preferences.

And if you want to critique the science of it all, consider this entirely valid speculation: per Toole himself, an elite sub-group of listeners - pro engineers who compare live sound with loudspeakers all day every day as their job - often found all the tested speakers inadequate, to the point where, when pressed for a preference (and with no "none of the above" option available), expressed the hopelessness of their task by jotting down scores that later appeared inconsistent and random.

But this didn't fit with subliminal expectation, so an explanation had to be found ... wait ... I know! They've got hearing damage! You'll note that despite three editions of the book, this remains a mere assertion. No evidence was offered. No data. No audiograms. Which, per ASR's professed standards, is mere handwaving. It's in the same category as "oh, your system isn't resolving enough."

Personally I have no definite conclusions at all, other than believing it's always a good idea to reexamine things from time to time, to reexamine assumptions, constantly to probe and question, to be sure we're heading in the right direction. Certainly I believe that's better than slavish idolatry. (Which I bet Toole is embarrassed by, honestly.)
 
That studio monitors can make fantastic home speakers is one of the great objective audiophile insights. Consequently, the idea that the best studio monitor might not also be the best available speaker for home listening reads as an attack on this stance, making it unsurprising that people are lashing out.

I think reality is somewhere in between, and the idea of a best speaker comes with a whole litany of qualifiers….
The best for 2 channel, multi channel, treated rooms, live rooms, without a sub, with 2 subs, for classical music, for home theater and dialogue, etc, etc.

The 8361A is an epic speaker. Judging whether it is the “best” is a fool's errand.
 
I could imagine large-scaled adoption of multi-channel hi-fi in a typical domestic environment if loudspeakers were as thin as framed pictures and could be strategically placed on walls, in order to blend in. But trying to seamlessly integrate six or seven or eight loudspeakers in a domestic setting is too much to ask for most people. Both expense and feng shui.

I think that is one reason for the interest in smaller, self-powered two-way shoebox loudspeakers. Much easier to deal with than large floor standers, especially in a multi-speaker environment. Not that the 8361 is a small two-way shoebox. But it is smaller than many other loudspeakers, at its price point.

For domestic integration the company could do something about its rather industrial appearance. That would increase the cost, no doubt. But if you can afford five or more of these for your listening space, you can likely afford a cosmetic upgrade.
Don't forget that not everyone has the space, or the cable management in a concrete apartment, to make a full multi-channel setup work, while the majority of music enjoyment is people playing speakers while walking around the house, so the benefit of multi-channel imaging goes away in those situations. And one more thing: the cost of multichannel increases steeply. For a 5.1 setup I would need to spend less than half as much per speaker to keep the cost under control, which means massively downgrading the speakers themselves.
 
I am an old bloke with a lot of recordings.
99.9% are in stereo, so I have decided not to bother much with surround. I have 3 Meridian M33 actives as centre and rear channels (the rears are located where convenient, not optimally) for watching the odd film.

I know accurate location is impossible but with lots of (mainly older simply miked) recordings I get a bit of recording venue acoustics as opposed to location information, which I like.

So I am pretty well only interested by speakers for stereo nowadays, however much better multi channel may potentially be.

The 8361 will definitely be on my short list to audition if I ever change speakers.
 
And if you want to critique the science of it all, consider this entirely valid speculation: per Toole himself, an elite sub-group of listeners - pro engineers who compare live sound with loudspeakers all day every day as their job - often found all the tested speakers inadequate, to the point where, when pressed for a preference (and with no "none of the above" option available), expressed the hopelessness of their task by jotting down scores that later appeared inconsistent and random.

But this didn't fit with subliminal expectation, so an explanation had to be found ... wait ... I know! They've got hearing damage! You'll note that despite three editions of the book, this remains a mere assertion. No evidence was offered. No data. No audiograms. Which, per ASR's professed standards, is mere handwaving. It's in the same category as "oh, your system isn't resolving enough."
" no audiograms"? What have you (not) been reading? The books are naturally summary presentations of my and other researchers' work, but even they show data related to the audiograms that were measured on all of the participants in those experiments. BTW, audiograms have been routine screening for regular listeners in tests since those early tests in 1985.
Figure 3.5 in the 3rd edition shows summary relationships between hearing loss in different frequency ranges and judgement variability, and Figure 3.6 shows actual audiometric measurements on a number of listeners exhibiting high variability in their judgments. These listeners were clearly not exhibiting randomness in their judgements because of frustration and hopelessness over what they were hearing. They simply were not hearing all of the sound, and because of that made mistakes.

For more data and analysis you must go back to the original 1985 publication, where I would like to think there is much more than your asserted "handwaving": Toole, F. E. (1985). “Subjective measurements of loudspeaker sound quality and listener preferences”, J. Audio Eng. Soc., 33, pp. 2-31. It still surprises me that this was 36 years ago - a lifetime - yet it is still not well understood. It was not the last word; more like the "first word" on the topic that I am aware of in the context of audio. More work needed to be done, but it raised a flag.

For an independent view on the role of hearing performance and the results of listening tests, there is the 2006 book: "Perceptual Audio Evaluation" by Bech and Zacharov, Wiley. Section 5.4.3 addresses "Subject selection", and part of the screening is to exhibit hearing thresholds in both ears that are within 15 dB of otologically normal persons. They comment that "approximately 50% of a university's student population (male and female) will not pass an audiometric test using a 15 dB rule". What have they been listening to?

Chapter 17 in the 3rd edition describes the criteria applied by OSHA and NIOSH for their occupational hearing conservation programs. These are often thought to prevent hearing loss, but they don't. They allow hearing loss to accumulate over a working lifetime, aiming to preserve enough that at the end of a career one can carry on a conversation at 1 m distance in the quiet. In audiometric terms that translates into about 25 dB threshold elevation at audiometric frequencies 1, 2 and 3kHz, resulting in an estimated 10% loss of understanding of entire sentences and a 50% misunderstanding of "PB" words. HiFi hearing is long gone.
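
The "about 25 dB threshold elevation at 1, 2 and 3 kHz" criterion above amounts to simple arithmetic on an audiogram. The sketch below is only an illustration of that idea; the actual OSHA and NIOSH definitions differ in detail (frequency sets, age corrections), and the example worker's thresholds are made up.

```python
# Hedged sketch of a "material hearing impairment" check:
# average threshold elevation of roughly 25 dB HL at 1, 2 and 3 kHz.
# Real regulatory definitions differ; this is illustrative only.

def avg_threshold_db(thresholds_hl, freqs=(1000, 2000, 3000)):
    """Average threshold (dB HL) over the given audiometric frequencies."""
    return sum(thresholds_hl[f] for f in freqs) / len(freqs)

def material_impairment(thresholds_hl, limit_db=25):
    return avg_threshold_db(thresholds_hl) >= limit_db

worker = {1000: 20, 2000: 25, 3000: 35}  # hypothetical end-of-career audiogram
print(avg_threshold_db(worker))          # ~26.7 dB HL
print(material_impairment(worker))       # True
```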

Chapter 17 also describes something relatively new: Hidden hearing loss. It affects the binaural directional/spatial hearing system, independently of audiometric threshold elevations.

Hearing is fragile - preserve it!
 
The methodology was entirely based on people's preferences.
Indeed it was. At the time - and still now - a lot of people thought that everyone was an "individual" with distinctive likes and dislikes, as in music itself, or food, or wine, and on and on. BUT, the surprise, starting with my first blind listening test in 1966 (55 years ago), was that there was broad agreement among "individuals" as to which sounds were preferred and which were not. This is documented with measurements in Section 18.1 in the 3rd edition. The observation that the most preferred loudspeakers exhibited the flattest, smoothest measured frequency responses was what set me off on decades of research. The fact that at that time virtually all loudspeakers exhibited audible colorations made it easy to identify the technical measurements and quantities that correlated with improved sound quality. Nowadays, it would be more difficult, as measurement technology has much improved and there is increasing agreement about the design objectives for loudspeakers of all kinds: domestic, studio monitor, automobile and cinema. The more they aim at the same target the more uniform will be our listening experiences. Sean Olive's recent research into headphone audio has shown persuasively that the most preferred headphones in double-blind tests are those that closely mimic the performance of good loudspeakers in typical rooms. Surprise, surprise.
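
The link in the post above between flat, smooth measured response and preference can be illustrated with a toy metric: the standard deviation, in dB, of an on-axis response across frequency. This is a made-up illustration, not any published preference model (real ones, such as Olive's, combine several weighted terms).

```python
# Toy "deviation from flat" metric: std deviation of SPL samples (dB)
# across frequency. Hypothetical illustration only - not a published model.
import statistics

def flatness_deviation_db(response_db):
    """Population std deviation of a list of SPL samples in dB."""
    return statistics.pstdev(response_db)

flat_speaker = [0.0, 0.5, -0.5, 0.3, -0.2, 0.1]     # ±0.5 dB ripple
colored_speaker = [0.0, 3.0, -4.0, 2.5, -3.0, 1.0]  # audible resonances

# The flatter speaker scores a much smaller deviation.
print(flatness_deviation_db(flat_speaker) < flatness_deviation_db(colored_speaker))
```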

As I have said repeatedly, "perfect" loudspeakers in "perfect" rooms cannot deliver "perfect" sound from all recordings - they are a variable. That is what tone controls and EQ are for, and for fussy listeners they need to be accessible for adjustment in real time, not embedded into the system at "calibration" time.

It turns out that listeners in general are able to recognize timbral colorations, resonances in particular, as being "foreign" to a preferred sound. The "best" loudspeakers were, from that perspective, the least bad loudspeakers. It seems that these colorations are distractions from the recorded sounds and eliminating them caused loudspeakers to be "more preferred". So, as much as people like to think that human preferences are somehow less than scientifically objective, it turns out that humans exhibiting their preferences in double-blind tests have provided the guidance to build more technically "neutral" loudspeakers. When the measurements look better, the preference scores go up. In my opinion that is a good place to be. With rare exceptions, electronics have been there all along.
 
Curious, what monitors did Canadian Broadcasting Corporation end up selecting after your research?
The winning small/medium monitors were Canadian products: PSB and Energy (consumer products) and the medium/large monitors were JBL - all are long obsolete. But once that phase was completed the CBC realized that once they had access to comprehensive and accurate measurements (the precursor of the spinorama that I created around 1983), they could include other options in the future. It removed a lot of the mystery and folklore surrounding monitor speakers - the only feature that truly separates them from mainstream consumer products is reliability under severe distress (e.g. a dropped mic) - dead air is a no-no.

In terms of sound quality a couple of the engineers at the end of the exercise admitted that they had never heard such good sound before. Compared to the crummy monitors they had been using I believe them. A couple of others could not believe that they criticized some of their favorites and insisted on a rematch using their own master tapes. The results did not change. It was a learning experience for all of us who were involved. It required anechoic measurements and a well designed listening room, ($$$$), and that is what makes such elaborate tests difficult to do, even with the best of intentions. In this case the Canadian government paid the bill as both the National Research Council and the CBC were federally sponsored organizations.
 
the only feature that truly separates them from mainstream consumer products is reliability under severe distress
See, this is why I think brands with a rep for being damn near indestructible still have such a large place in studio environments, despite somewhat inferior objective behavior. Where I work, we have monitors from ATC, Griffin, Genelec, Barefoot, Focal, and Dynaudio around in various places - plus the ubiquitous NS10 grotboxes. Every single brand except for ATC and Genelec has had multiple failures over the last couple of months - and the ATC rooms get run a lot harder than the Genelec rooms. Are they as near-on perfect as the good Genelecs? Nah. Are they impossible to kill? Yes.
 
See, this is why I think brands with a rep for being damn near indestructible still have such a large place in studio environments, despite somewhat inferior objective behavior. Where I work, we have monitors from ATC, Griffin, Genelec, Barefoot, Focal, and Dynaudio around in various places - plus the ubiquitous NS10 grotboxes. Every single brand except for ATC and Genelec has had multiple failures over the last couple of months - and the ATC rooms get run a lot harder than the Genelec rooms. Are they as near-on perfect as the good Genelecs? Nah. Are they impossible to kill? Yes.
Anecdotal, but interesting. What were the failures, if you don't mind?
 
Anecdotal, but interesting. What were the failures, if you don't mind?
Mostly amp failures, though I don't work in the tech department so I'm not privy to what exactly failed. Barefoot and Focal are the worst offenders. I don't know what it is about the amps in them that fail so hard, especially considering these are 1st gen Barefoot MM27s with I believe Bryston(?) ICEPower plate amps. Focal's BASH amps in their more expensive speakers seem to be particularly unreliable too. Next time I'm at work, I'll pull up the problem log and see if the techs left any notes.
 
The winning small/medium monitors were Canadian products: PSB and Energy (consumer products) and the medium/large monitors were JBL - all are long obsolete. But once that phase was completed the CBC realized that once they had access to comprehensive and accurate measurements (the precursor of the spinorama that I created around 1983), they could include other options in the future. It removed a lot of the mystery and folklore surrounding monitor speakers - the only feature that truly separates them from mainstream consumer products is reliability under severe distress (e.g. a dropped mic) - dead air is a no-no.

In terms of sound quality a couple of the engineers at the end of the exercise admitted that they had never heard such good sound before. Compared to the crummy monitors they had been using I believe them. A couple of others could not believe that they criticized some of their favorites and insisted on a rematch using their own master tapes. The results did not change. It was a learning experience for all of us who were involved. It required anechoic measurements and a well designed listening room, ($$$$), and that is what makes such elaborate tests difficult to do, even with the best of intentions. In this case the Canadian government paid the bill as both the National Research Council and the CBC were federally sponsored organizations.
Somehow I misread that as PMC... I always remember them being praised.
 
Mostly amp failures, though I don't work in the tech department so I'm not privy to what exactly failed. Barefoot and Focal are the worst offenders. I don't know what it is about the amps in them that fail so hard, especially considering these are 1st gen Barefoot MM27s with I believe Bryston(?) ICEPower plate amps. Focal's BASH amps in their more expensive speakers seem to be particularly unreliable too. Next time I'm at work, I'll pull up the problem log and see if the techs left any notes.
So luckily I chose Genelec rather than the Focal Shape, whose design I love more...

Seems amps are the usual suspects. For the ATC and Genelec, which you said are the only lucky ones, can you share their approximate service life so far? Did they run completely strong without any failure, or did a few die but they're generally indestructible?
 
So luckily I chose Genelec rather than the Focal Shape, whose design I love more...

Seems amps are the usual suspects. For the ATC and Genelec, which you said are the only lucky ones, can you share their approximate service life so far? Did they run completely strong without any failure, or did a few die but they're generally indestructible?
8 years on the ATCs in one building without a failure to my understanding (that's 7 pairs operated basically 24/7 45 of 52 weeks of the year).

Far as I'm aware the Genelecs have yet to have a failure either, though that's only 4 pairs of the same age.

As for the Shapes, they use class AB chip amps that are IIRC the same TDA7293 that Neumann uses, and Neumann failures are also rare.
 
8 years on the ATCs in one building without a failure to my understanding (that's 7 pairs operated basically 24/7 45 of 52 weeks of the year).

Far as I'm aware the Genelecs have yet to have a failure either, though that's only 4 pairs of the same age.

As for the Shapes, they use class AB chip amps that are IIRC the same TDA7293 that Neumann uses, and Neumann failures are also rare.
Right, I was concerned that Focal didn't fit a heatsink for the class AB amp and lets the rear panel run hot. We have really hot summers where I live, so I didn't want to take the risk. Of course, the dispersion pattern and compensation presets of Genelec won my final choice anyway.
 
8 years on the ATCs in one building without a failure to my understanding (that's 7 pairs operated basically 24/7 45 of 52 weeks of the year).

Far as I'm aware the Genelecs have yet to have a failure either, though that's only 4 pairs of the same age.

As for the Shapes, they use class AB chip amps that are IIRC the same TDA7293 that Neumann uses, and Neumann failures are also rare.
Any Focal Solo 6 failures?
 
8 years on the ATCs in one building without a failure to my understanding (that's 7 pairs operated basically 24/7 45 of 52 weeks of the year).

Far as I'm aware the Genelecs have yet to have a failure either, though that's only 4 pairs of the same age.

As for the Shapes, they use class AB chip amps that are IIRC the same TDA7293 that Neumann uses, and Neumann failures are also rare.
One more thing I am curious about: as consumers we likely won't measure and recalibrate the speakers after setup unless things get moved around, but do studios recalibrate them every once in a while? If so, have you noticed any aging effects?

8 years of 24/7 operation basically equates to a lifetime of wear and tear at normal home listening levels, if I understand correctly? Caps and other electronics should age much faster that way than with typical home use.
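
The back-of-envelope arithmetic behind that comparison: 8 years of near-24/7 studio use, 45 of 52 weeks a year (the figures quoted earlier in the thread), versus home listening at an assumed 3 hours per day - the home figure is my own assumption for illustration.

```python
# Studio hours: 8 years * 45 weeks * 7 days * 24 hours of operation.
studio_hours = 8 * 45 * 7 * 24

# Assumed home use: 3 hours/day, every day (illustrative assumption).
home_hours_per_year = 365 * 3

print(studio_hours)                                  # 60480
print(round(studio_hours / home_hours_per_year, 1))  # 55.2 years of home use
```

So roughly half a century of typical home listening - "forever", for practical purposes.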
 
Mostly amp failures, though I don't work in the tech department so I'm not privy to what exactly failed. Barefoot and Focal are the worst offenders. I don't know what it is about the amps in them that fail so hard, especially considering these are 1st gen Barefoot MM27s with I believe Bryston(?) ICEPower plate amps. Focal's BASH amps in their more expensive speakers seem to be particularly unreliable too. Next time I'm at work, I'll pull up the problem log and see if the techs left any notes.
I was surprised that you didn't exclude the passive NS-10s from the failures, so what failed on them?
 
Could you please post a link?
 