
Bottlehead Crack Headphone Amplifier Kit Review

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,511
Likes
25,356
Location
Alfred, NY
I'd be quite keen! Generally, I reckon I know where the answer will be too - but I've "known the answer" before a listening test and been wrong too! :p The unhappy part about testing your beliefs is that you often find out you have been wrong - the happy part is that it gives you a chance to stop being wrong, eh?

Any particular music requests? I have my own test tracks, but everyone is different that way. :cool:
 

Mad_Economist

Addicted to Fun and Learning
Audio Company
Joined
Nov 29, 2017
Messages
555
Likes
1,630
Any particular music requests? I have my own test tracks, but everyone is different that way. :cool:
Following Olive's published AES work in the past decade, the canonical music choices for subjective audio testing must include the Jennifer Warnes version of Bird on a Wire, and at least one Steely Dan song. :p
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,511
Likes
25,356
Location
Alfred, NY
Following Olive's published AES work in the past decade, the canonical music choices for subjective audio testing must include the Jennifer Warnes version of Bird on a Wire, and at least one Steely Dan song. :p

I was afraid of that. Well, I'll try to get through the file prep process quickly, to minimize my exposure.
 

Wombat

Master Contributor
Joined
Nov 5, 2017
Messages
6,722
Likes
6,465
Location
Australia
Depends on the science you do. Opinions are science. Opinions themselves can be scientific facts.

If they are conclusively verified, then are they still opinions?
 

tomtoo

Major Contributor
Joined
Nov 20, 2019
Messages
3,722
Likes
4,822
Location
Germany
If they are conclusively verified, then are they still opinions?

The opinion could be that the moon is made out of cheese. It does not matter if it's a valid opinion.
My English is not good enough, but if an opinion is conclusively verified, don't we then talk about knowledge?
 

generic

Member
Joined
Sep 3, 2020
Messages
21
Likes
0
I reckon I know where the answer will be too - but I've "known the answer" before a listening test and been wrong too! :p

Sigh. You (and earlier replies) have my proposals so completely backward that I don't know where to begin.

A complete and valid test protocol would be 5x or 10x more complex than anything I've seen on ASR. In brief, when human perception is involved one must follow human behavioral protocols in addition to instrumentation protocols (as reflected by ASR content). Take everything you are doing with standard audio charts now, but also consider known human/animal test biases and a century of research on the biology and psychology underlying perceptual illusions. To say that something is an illusion is to say that there is a predictable animal characteristic that is not present in charts or detectable by any machine method.

The ABX protocol used in audio moves in the direction of addressing human test biases, but not often in a rigorous scientific fashion (a rough sketch of the trial mechanics follows below). Rigor involves using a test lab with very controlled conditions and all kinds of ways to check for unintended effects and biases. These methods have been developed over many decades.

https://www.britannica.com/science/experimental-psychology
https://courses.lumenlearning.com/msstate-waymaker-psychology/chapter/reading-what-is-perception/
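
To make the trial mechanics concrete, here is a rough sketch in Python of what a forced-choice ABX run boils down to. The play() function is just a hypothetical placeholder for whatever actually presents the stimuli; it is not any particular test software.

import random

def run_abx(play, n_trials=16):
    # play(label) is a hypothetical stand-in for presenting stimulus A, B, or X.
    # Each trial: X is secretly A or B, the listener must say which, and we
    # simply count correct identifications.
    correct = 0
    for _ in range(n_trials):
        x_is_a = random.random() < 0.5      # hidden, randomized assignment
        play("A")                           # open reference A
        play("B")                           # open reference B
        play("A" if x_is_a else "B")        # X, identity hidden from the listener
        answer = input("Was X the same as A or B? ").strip().upper()
        if (answer == "A") == x_is_a:
            correct += 1
    return correct, n_trials

A rigorous version adds level matching, randomized trial order, unlimited replays, and full logging, but the core logic is no more than this.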

Regarding comments about moving the goalposts, I never said there was ANY value in listening to recordings of amps. I've tried a dozen or more ABX comparisons online and they are meaningless. Sometimes the producers even add a voiceover saying "It sounded different in the room." The problem appears to be that recordings insert too many intermediaries between the source (Amp #1 and Transducer #1) and the test participant's ears. So, one is not hearing the tube amp or the solid state amp or the speaker. They are hearing some interaction of different systems and an unknown transformation of the sound through a mic, another amp, and another transducer. I fully agree that you'll generally find nothing through ABX of recordings. It has little value, so I ignore it.

The major contribution of research psychology to engineering revolves around the quirks, limitations, and differences between people. Today, this field employs Human Factors Engineers (e.g., aircraft interfaces, vehicle dashboards) and Usability Engineers (e.g., software, mobile devices, etc.). There are umpteen surprises and unexpected findings with humans. Designers create products that routinely fail when tested. Then they get fixed. Humans are weird, quirky animals, and their perceptions don't always correspond to what can be measured through instrumentation.

Sample illusions (of many others):

The size of the moon: When the moon is near the horizon, people perceive it to be much larger than when it is high in the sky and away from landmarks. However, when measured in minutes of arc (i.e., by instrumentation), the moon is exactly the same size.

https://moon.nasa.gov/news/33/the-moon-illusion/
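
As a back-of-the-envelope check of the "same size in minutes of arc" point (the figures below are approximate published values):

import math

moon_diameter_km = 3474.8    # approximate mean diameter
moon_distance_km = 384400.0  # approximate mean Earth-Moon distance
angle_rad = 2 * math.atan((moon_diameter_km / 2) / moon_distance_km)
print(f"{math.degrees(angle_rad) * 60:.1f} arcminutes")  # ~31', horizon or overhead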

Ever-rising tones: Some tones seem to listeners to rise continuously, even though the tone does not actually change. See the text and audio examples at the link below.

https://www.illusionsindex.org/i/shepard-scales
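
For anyone curious how the illusion is built, here is a rough sketch (my own parameter choices, purely for illustration) that sums octave-spaced sine components whose pitches glide upward and wrap, under a fixed bell-shaped spectral envelope so the wrap is never obvious:

import numpy as np

def shepard_glissando(seconds=10.0, sr=44100, base_hz=55.0, n_octaves=8, cycle_s=2.0):
    t = np.arange(int(seconds * sr)) / sr
    frac = (t / cycle_s) % 1.0                     # position within the rising cycle
    out = np.zeros_like(t)
    for k in range(n_octaves):
        octave = (k + frac) % n_octaves            # each component rises one octave, then wraps
        freq = base_hz * 2.0 ** octave
        # bell-shaped weight over log-frequency: components fade out near the spectral edges
        weight = np.exp(-0.5 * ((octave - n_octaves / 2) / (n_octaves / 6)) ** 2)
        phase = 2 * np.pi * np.cumsum(freq) / sr   # integrate frequency for a smooth glide
        out += weight * np.sin(phase)
    return out / np.max(np.abs(out))

# Optional: write it out and listen.
# from scipy.io import wavfile
# wavfile.write("shepard.wav", 44100, (shepard_glissando() * 32767).astype(np.int16))

Because the spectral envelope never moves, the long-term spectrum barely changes even though the percept keeps climbing - exactly the sort of ear-versus-instrument mismatch at issue here.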

What I'm proposing is to consider if and how tube amp patterns of noise and distortion may involve Shepard tones or some similar characteristic that makes them sound very different to people than to machines. This would account for the gap between the poor measurements and the impression that tube amps somehow "hang together." This is precisely where I got on the bus. Why did the BH Crack get a ho-hum panther instead of a headless panther? This is the full scientific strategy for answering the question. Do not ignore or deny decades of relevant outside evidence.
 

tomtoo

Major Contributor
Joined
Nov 20, 2019
Messages
3,722
Likes
4,822
Location
Germany
"..Why did the BH Crack get a ho-hum panther instead of a headless panther?.."

Because @amirm is a little bit of a basshead and luckily had the right headphone lying around? ;)

Edit says: And it worked well within its parameters. By the way, the panther is a subjectivist.
 

Mad_Economist

Addicted to Fun and Learning
Audio Company
Joined
Nov 29, 2017
Messages
555
Likes
1,630
Sigh. You (and earlier replies) have my proposals so completely backward that I don't know where to begin.
Your proposal requires a vast amount of resources to probe the properties of a phenomenon that has yet to be well established, is the trouble. If there was no opportunity cost to such work, that'd be one thing, but we live in a finite world with finite means, so the most efficient means to determine if something is worthy of further investigation wins out. If we can spend a moment to save an hour, or spend a dollar to save a twenty, surely it makes sense to quickly investigate whether the phenomenon under study persists once eyes close.

Regarding comments about moving the goalposts, I never said there was ANY value in listening to recordings of amps. I've tried a dozen or more ABX comparisons online and they are meaningless. Sometimes the producers even add a voiceover saying "It sounded different in the room." The problem appears to be that recordings insert too many intermediaries between the source (Amp #1 and Transducer #1) and the test participant's ears. So, one is not hearing the tube amp or the solid state amp or the speaker. They are hearing some interaction of different systems and an unknown transformation of the sound through a mic, another amp, and another transducer. I fully agree that you'll generally find nothing through ABX of recordings. It has little value, so I ignore it.
This is your interpretation, but an at least equally - and in fact I would assert markedly more - valid interpretation is that the recordings reveal that the difference was in the eye, rather than the ear, of the beholder once the veil went up. It is quite trivial to make an ADC that is vastly more linear than any of the devices we're discussing, and @SIY has an extremely high-quality one - there is no chance that his ADI2's inputs are in any way distorting the output of a given tube amplifier. So, other than the effect of the speaker load (admittedly a dynamic interaction, but one which could be replicated with a driven load if that's what we wanted to test - and SIY is talking about gain stages here), you functionally only have the impact of the DUTs and your playback speaker (assuming you have a decent DAC).


What I'm proposing is to consider if and how tube amp patterns of noise and distortion may involve Shepard tones or some similar characteristic that makes them sound very different to people than to machines. This would account for the gap between the poor measurements and the impression that tube amps somehow "hang together."
While broadly speaking I'm in support of any investigation of subjective perceptions of audible nonlinearities - although of course, there have been quite a few - there's a fairly important chain to follow here in seeking effects. First, verify that the phenomenon in question is audible - what SIY is proposing with his tube gain stage comparison. Having done so, verify that the audible effect has some property worth being interested in (e.g. what you're proposing, that it might be preferable to some). Then we go looking for why that might be, because that's going to be exponentially harder to do, and there's no sense going to all that trouble when an ABX could have saved us the time to begin with.

Edit: I'll note, if you think that software ABX tests specifically mask the phenomenon in question, that is itself a testable hypothesis - there's plenty of ways to skin a cat when it comes to blinding listeners, so if you can show that recorded software ABX doesn't let listeners reliably differentiate, but some other equivalently blind and error-free methodology (analog ABX with a switchbox, mayhaps?) does, then hey, you've got an interesting result, take it to peer review.
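
For what "reliably differentiate" cashes out to in practice, the usual check is just a one-sided binomial test on the trial tally - a minimal sketch, with illustrative numbers:

from math import comb

def p_value_at_least(k, n):
    # probability of getting k or more correct out of n ABX trials by pure guessing
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(round(p_value_at_least(14, 16), 4))  # 0.0021 - very unlikely to be guessing
print(round(p_value_at_least(10, 16), 4))  # 0.2272 - consistent with guessing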
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,511
Likes
25,356
Location
Alfred, NY
Your proposal ...

...is worthless because he refuses to test anything related to his own questionable claims that are the basis for his fanciful proposals. So really, nothing there is worth taking seriously.

Start with a bad (unproved) assumption, make endless excuses for why that assumption can't be tested, and everything that follows is, not surprisingly, absolutely meaningless.
 

Mad_Economist

Addicted to Fun and Learning
Audio Company
Joined
Nov 29, 2017
Messages
555
Likes
1,630
...is worthless because he refuses to test anything related to his own questionable claims that are the basis for his fanciful proposals. So really, nothing there is worth taking seriously.

Start with a bad (unproved) assumption, make endless excuses for why that assumption can't be tested, and everything that follows is, not surprisingly, absolutely meaningless.
I really want to take a charitable line here, but when the charitable interjection is "well, it's less that he's saying it can't be tested so much as that he requires a vastly higher standard of evidence for things which do not align with his subjective experience", that's...not terribly charitable, I suppose.

FWIW, I do think that if he reckons that a recording ABX is unable to show whatever it is that's perceived, that's a claim that could be tested without too much difficulty, and I'd love to see his paper on the subject!
 

generic

Member
Joined
Sep 3, 2020
Messages
21
Likes
0
I'm proposing a test model that is common/routine/normal and has been used every day for close to 100 years in university and corporate research settings. Perception, learning, memory. This is in fact old-hat textbook science. Take it or leave it. Deny it or learn it. Ignore it or embrace it. Science is notoriously inefficient and costly. The EU spent billions of euros and decades building the Large Hadron Collider (CERN), and wasn't sure that it would work. Billions spent on a risky project.

Human Factors is mainstream science:

https://journals.sagepub.com/home/hfs
https://www.hfes.org/about-hfes/what-is-human-factorsergonomics
https://www.verywellmind.com/what-is-human-factors-psychology-2794905

My view is that the endless bickering in audio between the objective and subjective crowds likely follows from the lack of resources to appropriately test relevant phenomena (e.g., documented and predictable human illusions/internal cognitive constructs). Those experiencing illusions sense what they sense, and this likely can never be tested with the objective measurement devices now used. The closest thing that comes to mind is functional magnetic resonance imaging (fMRI), which scans the brain as people engage in a task, but the magnets would interfere with audio equipment and render it useless. Useful answers might indeed require a shift to another, more structured setting. But audio is just a hobby.

Genuine audio science is something very different from reporting standardized measurement summaries. I encourage people to avoid freezing on the technology and tools of a given era when newer computational resources might finally answer some of the weird, ephemeral, and dare I say "euphonic" things in audio. As always, I'm not much of a believer in euphonic factors; I just seek scientific explanations that build on 100 years of data from 1,000 universities and corporations.

I worked with a professor once who was testing words that change meaning when the accent changes. The project ran into issues because the team couldn't tell the difference between RECORDED words that were supposed to mean different things. They often mixed up their RECORDINGS and therefore could not apply the correct test conditions. As everyone used and heard these words in normal conversations, no one outside the team believed the findings. ASR and others risk a similar outcome if/when they deny similar and frequently reported subjective experiences. The underlying question for this and all things in human experience is: "What part of the sensory experience is external to the person, and what part does that person construct in their own head? How was it transformed?" This is the essence of research psychology. Your smartphones were developed and refined with research psychology (e.g., Siri, Alexa, and much more).

Sample words that change as the accent changes:

https://jakubmarian.com/english-words-that-change-meaning-depending-on-the-stress-position/
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,511
Likes
25,356
Location
Alfred, NY
I'm proposing a test model that is common/routine/normal and has been used every day for close to 100 years in university and corporate research settings. Perception, learning, memory. This is in fact old-hat textbook science. Take it or leave it. Deny it or learn it. Ignore it or embrace it. Science is notoriously inefficient and costly. The EU spent billions of euros and decades building the Large Hadron Collider (CERN), and wasn't sure that it would work. Billions spent on a risky project.

Human Factors is mainstream science:

https://journals.sagepub.com/home/hfs
https://www.hfes.org/about-hfes/what-is-human-factorsergonomics
https://www.verywellmind.com/what-is-human-factors-psychology-2794905

My view is that the endless bickering in audio between the objective and subjective crowds likely follows from the lack of resources to appropriately test relevant phenomena (e.g., documented and predictable human illusions/internal cognitive constructs). Those experiencing illusions sense what they sense, and this likely can never be tested with the objective measurement devices now used. The closest thing that comes to mind is functional magnetic resonance imaging (fMRI), which scans the brain as people engage in a task, but the magnets would interfere with audio equipment and render it useless. Useful answers might indeed require a shift to another, more structured setting. But audio is just a hobby.

Genuine audio science is something very different from reporting standardized measurement summaries. I encourage people to avoid freezing on the technology and tools of a given era when newer computational resources might finally answer some of the weird, ephemeral, and dare I say "euphonic" things in audio. As always, I'm not much of a believer in euphonic factors; I just seek scientific explanations that build on 100 years of data from 1,000 universities and corporations.

I worked with a professor once who was testing words that change meaning when the accent changes. The project ran into issues because the team couldn't tell the difference between RECORDED words that were supposed to mean different things. They often mixed up their RECORDINGS and therefore could not apply the correct test conditions. As everyone used and heard these words in normal conversations, no one outside the team believed the findings. ASR and others risk a similar outcome if/when they deny similar and frequently reported subjective experiences. The underlying question for this and all things in human experience is: "What part of the sensory experience is external to the person, and what part does that person construct in their own head? How was it transformed?" This is the essence of research psychology. Your smartphones were developed and refined with research psychology (e.g., Siri, Alexa, and much more).

Sample words that change as the accent changes:

https://jakubmarian.com/english-words-that-change-meaning-depending-on-the-stress-position/

Short version: I like to wave my hands and refuse to do basic experiments to validate my starting assumptions.
 

lashto

Major Contributor
Forum Donor
Joined
Mar 8, 2019
Messages
1,045
Likes
535
I was afraid of that. Well, I'll try to get through the file prep process quickly, to minimize my exposure.
Do you plan to add some extra details about the recorded amps, or will it be super-blind SS vs. tube?
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,511
Likes
25,356
Location
Alfred, NY
Do you plan to add some extra details about the recorded amps, or will it be super-blind SS vs. tube?

I always disclose everything about my test setups.
 

Tom C

Major Contributor
Joined
Jun 16, 2019
Messages
1,513
Likes
1,387
Location
Wisconsin, USA
The opinion could be that the moon is made out of cheese. It does not matter if it's a valid opinion.
My English is not good enough, but if an opinion is conclusively verified, don't we then talk about knowledge?
A subtle truth...
 

Racheski

Major Contributor
Forum Donor
Joined
Apr 20, 2020
Messages
1,116
Likes
1,702
Location
Chicago
I'm proposing a test model that is common/routine/normal and has been used every day for close to 100 years in university and corporate research settings. Perception, learning, memory. This is in fact old-hat textbook science. Take it or leave it. Deny it or learn it. Ignore it or embrace it. Science is notoriously inefficient and costly. The EU spent billions of euros and decades building the Large Hadron Collider (CERN), and wasn't sure that it would work. Billions spent on a risky project.

Human Factors is mainstream science:

https://journals.sagepub.com/home/hfs
https://www.hfes.org/about-hfes/what-is-human-factorsergonomics
https://www.verywellmind.com/what-is-human-factors-psychology-2794905

My view is that the endless bickering in audio between the objective and subjective crowds likely follows from the lack of resources to appropriately test relevant phenomena (e.g., documented and predictable human illusions/internal cognitive constructs). Those experiencing illusions sense what they sense, and this likely can never be tested with the objective measurement devices now used. The closest thing that comes to mind is functional magnetic resonance imaging (fMRI), which scans the brain as people engage in a task, but the magnets would interfere with audio equipment and render it useless. Useful answers might indeed require a shift to another, more structured setting. But audio is just a hobby.

Genuine audio science is something very different from reporting standardized measurement summaries. I encourage people to avoid freezing on the technology and tools of a given era when newer computational resources might finally answer some of the weird, ephemeral, and dare I say "euphonic" things in audio. As always, I'm not much of a believer in euphonic factors; I just seek scientific explanations that build on 100 years of data from 1,000 universities and corporations.

I worked with a professor once who was testing words that change meaning when the accent changes. The project ran into issues because the team couldn't tell the difference between RECORDED words that were supposed to mean different things. They often mixed up their RECORDINGS and therefore could not apply the correct test conditions. As everyone used and heard these words in normal conversations, no one outside the team believed the findings. ASR and others risk a similar outcome if/when they deny similar and frequently reported subjective experiences. The underlying question for this and all things in human experience is: "What part of the sensory experience is external to the person, and what part does that person construct in their own head? How was it transformed?" This is the essence of research psychology. Your smartphones were developed and refined with research psychology (e.g., Siri, Alexa, and much more).

Sample words that change as the accent changes:

https://jakubmarian.com/english-words-that-change-meaning-depending-on-the-stress-position/
You seem to be unfamiliar with psychoacoustics and the literature that AES has published on double-blind ABX testing. And the fact that “Human Factors” research has not been published on subjective audio phenomena does not invalidate all of the other empirical research done up to this point. Also, if you honestly think that more sophisticated experiments will resolve the disagreements between objectivists and subjectivists, then you are on crack.
 

tomtoo

Major Contributor
Joined
Nov 20, 2019
Messages
3,722
Likes
4,822
Location
Germany
You seem to be unfamiliar with psychoacoustics and the literature that AES has published on double-blind ABX testing. And the fact that “Human Factors” research has not been published on subjective audio phenomena does not invalidate all of the other empirical research done up to this point. Also, if you honestly think that more sophisticated experiments will resolve the disagreements between objectivists and subjectivists, then you are on crack.

:)
 

Tom C

Major Contributor
Joined
Jun 16, 2019
Messages
1,513
Likes
1,387
Location
Wisconsin, USA
I tried to be charitable too. I want to stop the spread of ill-informed naive explanations that negatively affect buyers and the audio industry.
If this were a true statement, then you would in fact support ASR and the strong, incomparable collective wealth of information and experience it represents. I am confident you realize the audience here includes PhDs and MDs with extensive formal training, and many people deeply experienced in basic and applied research.
Quite frankly, people like Mad Economist, SIY, and many others here think thoughts I'm not capable of, thanks to their training, experience, and natural talent.
Your true motive is to try to discredit that which serves to dismantle the profit margins of an antiquated industry. It was great while it lasted, but it’s time to move beyond the past.
 