It would be nice if these were displayed horizontally side by side instead of vertically. Also, when they are displayed horizontally, as with the vertical contour normalized 3D image comparison, they do not even fit on the page of my 17" laptop screen; the right image is 50% cut off. It would be nice if these were smaller separate images that you could click and move up/down and left/right for the best fit on the page. Thanks.

Is there a way to extend the image to occupy a larger part of the screen (screenshot in Firefox)?
I looked into spinorama's repository. I figured out that you have to install ImageMagick, run env.sh to create a virtual environment, activate it with .env/bin/activate, and then execute scripts/setup.py followed by update_website.sh, which seems to orchestrate the construction of everything. The output data in dist/ looks like it will be the entire spinorama site, though that phase has not yet completed.
I had to make the changes below: the first diff normalizes the running directory to within the repository, and the second corrects the assumption that the git checkout is at ~srg/spinorama, which was not true in my case. That assumption broke the ray program, which expects to find /tmp/ray in the right place.
--- a/update_website.sh
+++ b/update_website.sh
@@ -17,6 +17,7 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>.
echo "Update starts"
+cd "$(dirname "$0")"
mkdir -p build/website
export PYTHONPATH=src:src/website:src/spinorama:.
@@ -59,7 +60,7 @@ fi
# generate all graphs if some are missing
mkdir -p build/ray
-rm -fr /tmp/ray && ln -s ~/src/spinorama/build/ray /tmp
+rm -fr /tmp/ray && ln -s "$PWD/build/ray" /tmp
command=$(python3 ./generate_graphs.py --dash-ip="$IP")
status=$?
if [ $status -ne 0 ]; then
@@ -193,7 +194,7 @@ fi
# echo "Quarto is not available, skipping HTML manual!"
#fi
-command=$(workbox generateSW workbox-config.js)
+command=$(node_modules/.bin/workbox generateSW workbox-config.js)
status=$?
if [ $status -ne 0 ]; then
echo "KO after generateSWL!"
After this, it shouldn't matter which directory you start the process from, and the /tmp/ray symlink will point to the right location.
Do you think it would be feasible to perform the entire graph rendering on the client side, directly from the measurement data? I can see that a very large part of the build process involves just drawing graphs into static pictures using this thing called ray. The spinorama dist tree eventually grows to about 20 GB because of these numerous pictures, whereas the raw measurement data is only about 1 GB. There is clearly a huge savings opportunity here if these pictures were drawn on demand by the browser instead. There is definitely enough computing power on the client side for this.
I imagine the process would mostly consist of reading the raw txt files using fetch, parsing the data for things like frequency, magnitude and phase (in the case of the dispersion plots, it involves a 2D point plot, and some color interpolation would be good), and emitting SVG elements to move the rendering work to the client side. I think all of the work might be doable from the measurement txt files without any further preprocessing. It is probably fair to say that SVG can't easily do simple 2D linear color interpolation, but two correctly placed gradients can emulate linear interpolation over a triangle with a right angle: one gradient runs vertically and the other horizontally from the right-angle corner, blending in the proper color towards each of the other two corners. I think this should work.
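For what it's worth, the two-gradient idea can be sketched as plain SVG markup generation. The function name and corner colors here are purely illustrative, and the overlay's opacity ramp is only an approximation of true barycentric shading, not an exact equivalent:

```python
def triangle_svg(a_col: str, b_col: str, c_col: str) -> str:
    """Approximate per-vertex color interpolation on a right triangle
    with two stacked SVG layers: a horizontal gradient (corner A -> B)
    plus an overlay of corner C's color whose opacity ramps from 0 at
    the A-B edge to 1 at C."""
    tri = "0,100 100,100 0,0"  # right angle at bottom-left (0,100)
    return f"""<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <defs>
    <linearGradient id="ab" x1="0" y1="0" x2="1" y2="0">
      <stop offset="0" stop-color="{a_col}"/>
      <stop offset="1" stop-color="{b_col}"/>
    </linearGradient>
    <linearGradient id="c" x1="0" y1="1" x2="0" y2="0">
      <stop offset="0" stop-color="{c_col}" stop-opacity="0"/>
      <stop offset="1" stop-color="{c_col}" stop-opacity="1"/>
    </linearGradient>
  </defs>
  <polygon points="{tri}" fill="url(#ab)"/>
  <polygon points="{tri}" fill="url(#c)"/>
</svg>"""
```

In the browser the same elements would of course be emitted directly into the DOM rather than built as a string.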
The equalized pictures could be produced by reading the IIR autoeq txt files, calculating the biquad parameters of the filters described inside, and then evaluating the transfer function at each required frequency, which yields the change in magnitude and the phase shift caused by the filter cascade. The formulas for the classic DSP cookbook filters HPQ, LPQ, BP, HS, LS, PK and NO (notch) are readily available on the internet. This would allow doing the EQ part dynamically as well.
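As a hedged sketch of that idea, here is the peaking (PK) case using the well-known RBJ Audio EQ Cookbook coefficients; the function names are mine, not spinorama's, and the other filter types follow the same pattern with different coefficient formulas:

```python
import cmath
import math

def peaking_biquad(fc: float, q: float, gain_db: float, fs: float):
    """RBJ Audio EQ Cookbook 'peakingEQ' coefficients, normalized so a[0] == 1."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def response_db(b, a, f: float, fs: float):
    """Evaluate H(z) on the unit circle at frequency f: returns
    (magnitude change in dB, phase shift in radians)."""
    z = cmath.exp(-2j * math.pi * f / fs)  # z^-1 substitution
    h = (b[0] + b[1] * z + b[2] * z**2) / (a[0] + a[1] * z + a[2] * z**2)
    return 20 * math.log10(abs(h)), cmath.phase(h)
```

For a cascade, you would sum the dB magnitudes (and phases) of all filters at each plotted frequency.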
What do you think about this approach? Would it be worth pursuing?
Edit: workbox didn't execute because a dependency's typos.json file can't be found even though it's right there, and check_html wants asynckit, which can't find iterate.js, whatever that is. I guess the packages are somehow broken. That being said, I might still have a complete build, at least assuming these steps don't update the code somehow.
5 years ago, smartphones were too slow to generate the graphs on the fly. On a current desktop or a recent smartphone, you can. It does generate a fair amount of CPU load, which may be an issue on a phone. In the compare view, everything is generated on the fly and it works well. Maybe it is time to rethink the approach.
plotly does all of that well (both in python and JS)
that's also implemented in src/website/plot.js
The only part which is CPU intensive is the EQ computation (at least as it is currently done). All the rendering could be done live in the browser.
If you want to send a PR, please use the developer branch.
P.
I experimented with a Vue application, which I set up at https://alankila.github.io/spinorama-html/
This is a bare-bones parser that can handle exactly the ASR and EAC "SPL Horizontal/Vertical.txt" files and constructs the various graphs from these two files, with a TypeScript port of the cea2034 code. At the moment, if a measurement is labeled vendor or something like that, it's not worth clicking on. Only about 50% of measurements are working right now.
Based on investigating the files, I think I should be able to support most of the various vendor format files, which I'd guesstimate would take support to about 75%. There are some difficult-looking measurement formats which will probably never work in this implementation.
To test performance, I implemented realtime resizing so that if the browser window size changes, the SVGs are recomputed on the fly. The processing seems pretty speedy; at least for me, the graphs refresh to their new size in near realtime.
Yes, I happened to build this on top of d3, though.
The expensive part is computing the EQ. Once you have it, applying it is easy, as you explained.

I'm going to look into the EQ part soon and see if I can lift this part. Given that the filter only has to be evaluated for some dozens of frequencies, and it acts as a fixed numeric offset applied to the magnitude at each frequency, I don't expect any major problems with speed.
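A minimal sketch of the "fixed offset" point, assuming the cascade's dB response has already been evaluated once on the same frequency grid as the curve (names are illustrative, not from either codebase):

```python
def apply_eq(spl_db, eq_offsets_db):
    """Applying an already-computed EQ is just an element-wise sum:
    each frequency bin's magnitude shifts by a fixed dB offset."""
    return [s + o for s, o in zip(spl_db, eq_offsets_db)]
```

The offsets only depend on the filter parameters, so they can be cached and reused across every curve drawn for the same EQ.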
It is a good question whether a client-side single-page application like the one I'm building with Vue 3 is even a direction you'd like to take Spinorama in. I did this because I am familiar with the framework, and it's quite straightforward and bare-bones to use (as long as you as the programmer refrain from overcomplicating things).
I may commit to maintaining this alternative version of Spinorama at least for a while, and will likely develop at least some of the presently missing features, such as the EQ comparison functions, equalization graphing, front page search, etc. This is not a competition, nor an attempt to replace the site, but more a proof of concept on my part to see what the practical performance can be.
Thanks. I'll also signal that "sort by" date/score... doesn't work on Firefox either.

That's a bug; I will look into fixing it.
It’s all the $hit like ^this^ which has my S/W coworkers at odds with me.
Both should be mostly fixed. You can check on the dev site.
CJH
# The sound power is the weighted rms average of all 70 measurements,
# with individual measurements weighted according to the portion of the
# spherical surface that they represent. Calculation of the sound power
# curve begins with a conversion from SPL to pressure, a scalar magnitude.
# The individual measures of sound pressure are then weighted according
# to the values shown in Appendix C and an energy average (rms) is
# calculated using the weighted values. The final average is converted
# to SPL.
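A minimal sketch of the procedure that comment describes (function name is mine; SPL values are in dB relative to an arbitrary reference):

```python
import math

def sound_power_spl(spl_values, weights):
    """Weighted rms (energy) average of SPL measurements:
    convert SPL to pressure, average the weighted squared pressures,
    and convert the result back to SPL."""
    pressures = [10 ** (spl / 20) for spl in spl_values]
    mean_sq = sum(w * p * p for w, p in zip(weights, pressures)) / sum(weights)
    return 20 * math.log10(math.sqrt(mean_sq))
```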
@pierre, have you considered assigning a Low Measurement Quality ranking to data suppliers such as ASR and EAC, who refuse to deliver on-axis phase response or other timing information with adequate samples? It may not have any (positive) effect on reviewers who decide to censor information and unilaterally decide what is uninteresting and unimportant to others, but it would be a signal that cropping fundamental data is not a respectable practice.
@pierre (and anyone else): I ran into something I'm not sure is entirely correct in the CEA2034 calculation code. I wonder if anyone can enlighten me.
This comment is possibly an excerpt from the example source code and data references of the CEA2034-A document which defines Spinorama, which I unfortunately do not have yet. This is the exact array of weights as computed by the implementation -- and likely provided in that Appendix C: [0.030384786186252732, 0.2377652066334539, 0.4501287512133203, 0.6226563048362195, 0.7534600535239484, 0.8478858765301638, 0.9131208534347603, 0.9553831438907388, 0.9790603712118369, 0.9866799194377833]. These are 10 values in total, spread 10 degrees apart. The interpretation is that the first is to be used with horizontal and vertical angles 0 (also called On-Axis by Klippel) and 180; the second is for angles 10, 170, -10 and -170; the third for 20, 160, -20 and -160; and so on, with the very last one for the 90 and -90 angles.
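Under that interpretation, the angle-to-weight mapping could be sketched as follows (an illustrative helper of my own, not spinorama's actual code):

```python
def weight_index(angle_deg: int) -> int:
    """Map a measurement angle to an index into the 10-entry weight
    array: 0/180 -> index 0, +/-10 and +/-170 -> index 1, ...,
    +/-90 -> index 9."""
    a = abs(angle_deg) % 360
    a = min(a, 360 - a)            # fold into 0..180
    return min(a, 180 - a) // 10   # distance from the axis, in 10-degree steps
```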
But now, consider what this means. A data point labeled "On-Axis" has a relative importance of only 0.030 in the sound power calculation, whereas a data point at 90 degrees off axis has a relative importance of 0.987. This says that the measurement at 90 degrees off axis is deemed about 32 times more important than the on-axis measurement, implying that the 90-degree measurement covers about 32 times larger an area. If we imagine looking at a sphere, the On-Axis measurement would appear to us as a small circular cutout of the spherical surface facing us. But is the +90 degrees vertical measurement actually a strip of sound energy 90 degrees away from the on-axis direction that includes all sound above the speaker's acoustic axis, with the -90 degrees measurement the opposing half below the acoustic axis? Together, they would form a complete circular segment of the sphere's surface.
Edit: Actually, it would make most sense if each measurement covered a 90-degree chunk at each angle: e.g. the vertical measurement takes over for angles -45 to 45, including the 0-degree direction if we define that as up; if 90 degrees is to the right, the horizontal measurement towards the right covers 45 to 135 degrees, and so forth. I'm starting to think this is exactly how the data should be interpreted. Each full circular cross-section that forms at an angle against the on-axis direction gets split into 4 pieces, stored in 4 separate measurements, but you can stitch them together to create an approximation of the complete spherical radiation field. I've never seen anyone visualize spins this way, though.
This statement applies to the speaker category having significant magnitude response and directivity errors compared to our current knowledge of how they should be. But at some level of excellence, magnitude responses and directivity become just differences without a significant role in total performance. In that category other features become more significant: for example dynamic features such as timing, compression and compression spectrum; the whole concept; radiating areas, location of radiators, wavefront, direction-dependent polarity, total directivity, etc.

We are currently getting the spinorama, which gets us the most important information.
Yes, we already discussed that by e-mail. I'm just out of tools with reviewers.

I have asked @amirm; he declined. Fine! That's his time, his money and his measurements.
One problem is that review(er)s seem to assume that all speakers are basic crap and that there is nothing significant outside the spinorama and THD at 94 dB SPL. That would be absolute rubbish. I guess one reason is a limited view of the whole reality. Investigators may not be aware of products having features which are more crucial than resonances/coloration and the spinorama, for example weak and lame speakers with a very close to perfect spinorama: a total show stopper without any error in tonality. Those can stay on the market if both manufacturers and reviewers ignore the problem and live the same false dream.
Fortunately the above is not so common (anymore), but that's not an adequate reason to be in denial and refer to Toole et al. at every turn. I'm sure investigators don't think that the science is settled and that the studies of the last 50 years have produced the final truth we have to settle for.
This could become necessary with 4-way and 3-way+sub designs. I've never heard anything but weak 4-ways with (probably 4th order) minimum-phase crossovers. A 3-way with an 8th order XO has also been weak and lame. I agree that the effect can be subtle, especially if the listener doesn't know the possibilities and what to listen for -- how a full-range minimum-phase speaker sounds. The error in dynamics is obvious, especially in theory, but the studies have been so light that belittling or denial has become a more acceptable argument.

You can also optimise for a good group delay. The impact is limited from my perspective.
I was talking about thermal and other compression, which changes the tonal balance well before radical distortion is clearly audible. This is common especially with small or unaerodynamic vents, compression drivers in horns with metal domes, and smallish woofers. This should be measured with a music spectrum to reveal more practical behavior.

Compression: yes, of course, but if you stay far from the maximum your speaker can do, distortion is too low to be audible.
- should we add a mid-range woofer to customise directivity/reflections?

Or a super-cardioid bass range.